LLM

Good speech-to-text models share this trait of predictable, unsurprising behavior, as do language translation programs and on-screen swipe keyboards. In each of these cases we want to be understood, not surprised. AI, therefore, makes the most sense as a translation layer between humans, who are incurably chaotic, and traditional software, which is deterministic.

In other words, AI works best as an adaptive interface between chaotic real-world problems and secure, well-architected technical solutions. AI may not truly understand us, but it can deliver our intentions to an API with reasonable accuracy and describe the results in a way we understand.

from https://stackoverflow.blog/2023/05/01/ai-isnt-the-app-its-the-ui/
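To make that translation-layer idea concrete, here is a minimal sketch using OpenAI-style function calling: the model maps a messy human request onto a structured call against a deterministic API. It assumes the OpenAI Python client (1.x series) with an API key in the environment; `get_weather` and its JSON schema are hypothetical stand-ins for whatever backend you actually expose.

```python
# Minimal sketch of the "translation layer": an LLM turns a messy,
# natural-language request into a structured call against a deterministic API.
# Assumes the OpenAI Python client (>= 1.x) with OPENAI_API_KEY set;
# get_weather and its schema are hypothetical stand-ins for a real backend.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical backend endpoint
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "what's it like outside in berlin rn??"}],
    tools=tools,
)

# Assuming the model chose to call the tool, the chaotic request comes back
# as clean, typed arguments that ordinary software can act on.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```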

Web UIs for LLMs

Comparison criteria: Web Browse, Web Search, Voice, Image Generation, Mobile UI, Static site, Depends on, Language.

  • Lobe Chat - Web Browse and Web Search via plugins - English
  • big-AGI - English
  • NextChat - Chinese
  • Anse - English
  • Chatbot UI - depends on Supabase - English
  • Better ChatGPT - English
  • SlickGPT - depends on Firebase - English

ChatGPT

Code generation

Copilot

Comparison

Datasets

Hosting

Cloud

  • Cerebrium (https://www.cerebrium.ai/) - train, deploy, and monitor machine learning models with a few lines of code; serverless GPU model deployment.
  • Cohere (https://cohere.ai/) - high-performance, secure LLMs for the enterprise, with capabilities like content generation, summarization, and search.
  • CoreWeave - cloud provider delivering GPUs at massive scale.
  • Foundry - Instant Compute ML infra.
  • Lambda - access to GPUs for deep learning.
  • Modal - Run generative AI models, large-scale batch jobs, job queues, and much more.
  • Monster API - access powerful generative AI models through auto-scaling APIs, with zero management required.
  • Replicate - Run models in the cloud at scale.

Decentralized

  • GPUtopia - GPU Marketplace
  • Petals - Run large language models at home, BitTorrent‑style - Repo

Local

Image Generation

Models

MPT

Low-code platforms

  • Rivet - The Open-Source Visual AI Programming Environment #OpenSource
  • Stack AI - The No-Code AI Automation Platform
  • Vellum - The dev platform for production LLM apps

RAG

  • ColBERT - fast and accurate retrieval model, enabling scalable BERT-based search over large text collections in tens of milliseconds.
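As a rough illustration of the RAG pattern itself (not of ColBERT's actual API), the sketch below scores chunks with a toy term-overlap function, keeps the top matches, and stuffs them into the prompt; in practice you would swap the scorer for a real retriever such as ColBERT and send the prompt to your model of choice.

```python
# Toy retrieval-augmented generation: retrieve the most relevant chunks with a
# naive term-overlap score, then build a prompt that grounds the answer in them.
# The scorer stands in for a real retriever (e.g. ColBERT); no LLM call is made here.

def score(query: str, chunk: str) -> int:
    """Count shared lowercase terms between query and chunk (toy relevance)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the top-k chunks by the toy score."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend the retrieved context so the model answers from it."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "ColBERT enables scalable BERT-based search over large text collections.",
    "Petals runs large language models at home, BitTorrent-style.",
    "LiteLLM exposes many providers behind one completion interface.",
]

question = "How does ColBERT search large collections?"
print(build_prompt(question, retrieve(question, docs)))  # pass this to any LLM client
```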

SDKs

  • Guardrails - lets a user add structure, type and quality guarantees to the outputs of LLMs.
  • Guidance - Python lib that lets you interleave generation, prompting, and logical control in a single continuous flow that matches how the language model actually processes the text #OpenSource
  • Kor - Python lib that “helps” you extract structured data from text using LLMs #OpenSource
  • LangChain - Python lib to develop AI applications - PromptTemplate, LLMs interface, etc. #OpenSource
    • Agents use an LLM to determine which actions to take and in what order (a toy agent loop is sketched after this list).
    • Pros
    • BabyAGI #Pinecone
    • Langflow - UI designed with react-flow to provide an effortless way to experiment and prototype flows.
  • LiteLLM - Call 100+ LLMs using the same input/output format (see the usage sketch after this list) #OpenSource
  • LlamaIndex - framework for RAG #OpenSource
  • LLM - A CLI utility and Python lib for interacting with OpenAI, PaLM and local models installed on your own machine #OpenSource
  • MonkeyPatch - easily call an LLM in place of the function body in Python. The more you use MonkeyPatch functions, the cheaper and faster they get (up to 9-10x!) through automatic model distillation. #OpenSource
  • Prompt Engine - Javascript lib for creating and maintaining prompts #OpenSource
  • Semantic Kernel - Python and C# libs that let you define plugins that can be chained together #OpenSource
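Below is a self-contained toy agent loop illustrating the idea behind LangChain agents: a policy picks the next action and its input, the action runs as ordinary code, and the observation is fed back until the policy decides to answer. Here `decide` is a hard-coded stand-in for an LLM call and `calculator` is an illustrative tool; neither is a LangChain API.

```python
# Toy agent loop: choose an action, run it, feed the observation back, repeat.
# decide() fakes the LLM's choice so the example runs offline; a real agent
# would prompt a model with the history and parse the chosen action from it.

def calculator(expression: str) -> str:
    return str(eval(expression))  # demo only; never eval untrusted input

TOOLS = {"calculator": calculator}

def decide(history: list[str]) -> tuple[str, str]:
    """Stand-in policy: call the calculator once, then give a final answer."""
    if not any(line.startswith("observation:") for line in history):
        return "calculator", "6 * 7"
    return "final_answer", f"The result is {history[-1].split(': ', 1)[1]}"

def run_agent(question: str, max_steps: int = 5) -> str:
    history = [f"question: {question}"]
    for _ in range(max_steps):
        action, arg = decide(history)
        if action == "final_answer":
            return arg
        history.append(f"observation: {TOOLS[action](arg)}")
    return "step limit reached"

print(run_agent("What is 6 times 7?"))  # -> "The result is 42"
```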
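And a short sketch of LiteLLM's unified interface, as referenced above: the same call shape works across providers, only the model string changes. It assumes `pip install litellm` and the relevant provider keys in the environment; the model names are just examples.

```python
# Same input/output format across providers; only the model identifier changes.
# Requires the relevant API keys (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY) to be set.
from litellm import completion

messages = [{"role": "user", "content": "Summarize retrieval-augmented generation in one sentence."}]

openai_resp = completion(model="gpt-4o-mini", messages=messages)
claude_resp = completion(model="anthropic/claude-3-haiku-20240307", messages=messages)

# Responses come back in an OpenAI-style shape regardless of provider.
print(openai_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)
```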

Speech Recognition