Category: Local
-
Gemma 2 27B
From Google (official website). These are text-to-text, decoder-only large language models that share research and technology with Google’s Gemini. Hugging Face
-
Qwen 2.5 32B
Much larger than Qwen 2.5 Coder, but with much better results. Harder to run at 32B, though worth it; you can run a quantized version (see the sketch below). Again, you need an RTX 3090 or better to run this. 2024-11-21: Qwen 2.5 Turbo was just released, raising the number of input tokens from 250K to 1M !! You can…
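For the quantized route, here is a minimal sketch, assuming the Hugging Face repo ID `Qwen/Qwen2.5-32B-Instruct`, the `transformers` and `bitsandbytes` packages, and a single ~24 GB GPU; adjust names and hardware to your setup.

```python
# Minimal sketch: load Qwen 2.5 32B in 4-bit so it fits on a ~24 GB GPU (e.g. an RTX 3090).
# The repo ID and VRAM figure are assumptions; check the model card for the exact name.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen2.5-32B-Instruct"  # assumed Hugging Face repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

messages = [{"role": "user", "content": "Explain what a decoder-only LLM is in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```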
-
Yi Coder 9B Chat
9 billion parameters, 128K context, 52 programming languages. Hugging Face
-
Qwen 2.5 Coder 7B Instruct
Formerly known as CodeQwen, and the little brother of the larger Qwen 2.5 models. 128K context, 8 billion parameters (the family spans 0.5, 1.5, 3, 7, 14, and 32 billion parameters). Apache License. Hugging Face
-
Codestral 22B
By Mistral AI, under the Mistral AI Non-Production License. 32k context window, 22 billion parameters, trained on 80 programming languages, with fill-in-the-middle support. On Hugging Face
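A rough local-inference sketch, assuming the gated Hugging Face repo `mistralai/Codestral-22B-v0.1` and enough VRAM (quantization can be added as in the Qwen example above); the fill-in-the-middle mode needs the model’s own prefix/suffix formatting, which is omitted here.

```python
# Minimal sketch: plain code completion with Codestral via transformers.
# Repo ID is an assumption; the weights are gated behind the Non-Production License.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Codestral-22B-v0.1"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "def quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```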
-
Deep Live Cam
This is probably one of the most dangerous AI tools I have ever come across. What it does is essentially turn you into someone else live, on Zoom, FaceTime, etc. It has been released on GitHub (here), and is also easy to set up! According to news from all over the internet, a similar technology…
-
aider.chat
Understands your existing code and the context of your files, and edits your local repository directly. Aider works best with GPT-4o and Claude 3.5 Sonnet, and can connect to almost any LLM (see the scripting sketch below). Website => Github =>
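Aider is primarily a command-line tool, but it also offers a small Python scripting interface; a minimal sketch, assuming the `aider-chat` package is installed, an `OPENAI_API_KEY` is set in the environment, and the model name and file path below are placeholders.

```python
# Minimal sketch of aider's scripting interface (install: pip install aider-chat).
# Model name and target file are placeholders; aider edits the listed files in place.
from aider.coders import Coder
from aider.models import Model

model = Model("gpt-4o")                                 # any model aider can connect to
coder = Coder.create(main_model=model, fnames=["app.py"])
coder.run("Add a /health endpoint that returns 200 OK")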
-
OpenUI
Cool tool to create website front ends! Github => Demo => Works with API keys of online services such as OpenAI, Groq, Gemini, Anthropic, Cohere, and Mistral, or with any OpenAI-compatible endpoint. Also works with local LLMs such as LocalAI: you can set OPENAI_COMPATIBLE_ENDPOINT and optionally OPENAI_COMPATIBLE_API_KEY to have those models listed in the UI’s model selector…
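For the local-LLM case, a minimal launch sketch, assuming OpenUI is installed with a `python -m openui` entry point and a local OpenAI-compatible server (LocalAI, llama.cpp, etc.) is listening on port 8080; the endpoint URL and key below are placeholders.

```python
# Minimal sketch: point OpenUI at a local OpenAI-compatible endpoint, then launch it.
# The URL and key are placeholders; most local servers accept any key or none at all.
import os
import subprocess

env = dict(
    os.environ,
    OPENAI_COMPATIBLE_ENDPOINT="http://localhost:8080/v1",  # your local server
    OPENAI_COMPATIBLE_API_KEY="local-placeholder-key",       # optional
)

# The models served by that endpoint should then appear in the UI's model selector.
subprocess.run(["python", "-m", "openui"], env=env)
```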