until, from | Topic
2025-06-22, 2024-12-18 | Large Language Models - what they can do, how they work, and how to run them | ##llm-bots | bots: llamar, fine tune mistral-7b.Q6_K + llava-v1.5-7b; tinsoldier, OpenAI gpt-3.5-turbo; electrabot, meta-llama-3-70b-instruct-ima | tokenizer, embeddings, positional encoding, attention matrices, logits, back-propagation, inference
2024-12-17, 2024-04-25 | Large Language Models - what they can do, how they work, and how to run them | ##llm-bots | bots: llamar, fine tune mistral-7b.Q6_K + llava-v1.5-7b; tinsoldier, OpenAI gpt-3.5-turbo; medbot, medalpaca 7B; tinyllama, tinyllama-1.1b-chat-v0.3 q4_0; electrabot, meta-llama-3-70b-instruct-ima | tokenizer, embeddings, positional encoding, attention matrices, logits, back-propagation, inference
2024-04-24, 2024-02-29 | Large Language Models - what they can do, how they work, and how to run them | ##llm-bots | bots: llamar, fine tune mistral-7b.Q6_K + llava-v1.5-7b; tinsoldier, OpenAI gpt-3.5-turbo; medbot, medalpaca 7B; tinyllama, tinyllama-1.1b-chat-v0.3 q4_0 | tokenizer, embeddings, positional encoding, attention matrices, logits, back-propagation, inference
2024-02-28 | Large Language Models - what they can do, how they work, and how to run them | also ##llm-bots | bot models: llamar, collectivecognition-v1.1-mistral-7b.Q6_K and llava-v1.5-7b; tinsoldier, OpenAI gpt-3.5-turbo; tinyllama, tinyllama-1.1b-chat-v0.3 q4_0 | tokenizer, embeddings, positional encoding, attention matrices, masked attention, logits, back-propagation, inference
2024-02-27, 2024-01-24 | Large Language Models - what they can do, how they work, and how to run them | also ##llm-bots | bot models: llamar, collectivecognition-v1.1-mistral-7b.Q6_K and llava-v1.5-7b; tinsoldier, OpenAI gpt-3.5-turbo; medbot, medalpaca 7B; tinyllama, tinyllama-1.1b-chat-v0.3 q4_0 | tokenizer, embeddings, positional encoding, attention matrices, masked attention, logits, back-propagation, inference
2024-01-23, 2023-12-21 | Large Language Models - what they can do, how they work, and how to run them | bot models: llamar, collectivecognition-v1.1-mistral-7b.Q6_K and llava-v1.5-7b; tinsoldier, OpenAI gpt-3.5-turbo; medbot, medalpaca 7B; tinyllama, tinyllama-1.1b-chat-v0.3 q4_0 | tokenizer, embeddings, positional encoding, attention matrices, masked attention, logits, back-propagation, inference
2023-12-20, 2023-10-18 | Large Language Models - what they can do, how they work, and how to run them | bot models: llamar, collectivecognition-v1.1-mistral-7b and llava-v1.5-7b; tinsoldier, OpenAI gpt-3.5-turbo; medbot, medalpaca 7B; tinyllama, tinyllama-1.1b-chat-v0.3 q4_0 | tokenizer, embeddings, positional encoding, attention matrices, masked attention, logits, back-propagation, inference
2023-10-17 | Large Language Models - what they can do, how they work, and how to run them | bot models: llamar, collectivecognition-v1.1-mistral-7b; tinsoldier, OpenAI gpt-3.5-turbo; medbot, medalpaca 7B; tinyllama, tinyllama-1.1b-chat-v0.3 q4_0 | tokenizer, embeddings, positional encoding, attention matrices, masked attention, logits, back-propagation, inference
2023-10-16, 2023-10-04 | Large Language Models - what they can do, how they work, and how to run them | bot models: llamar, Wizard-Vicuna-13B llama1; tinsoldier, OpenAI gpt-3.5-turbo; medbot, medalpaca 7B; tinyllama, tinyllama-1.1b-chat-v0.3 q4_0 | tokenizer, embeddings, positional encoding, attention matrices, masked attention, logits, back-propagation, inference
2023-10-03, 2023-10-01 | Large Language Models - what they can do, how they work, and how to run them | bot models: llamar, Wizard-Vicuna-13B llama1; tinsoldier, OpenAI gpt-3.5-turbo; medbot, Open Assistant(?) | tokenizer, embeddings, positional encoding, attention matrices, masked attention, logits, back-propagation, inference