Commit 9cc2bbe (verified) by rajabmondal · 1 Parent(s): 0263807

Update README.md

Files changed (1):
  1. README.md +0 -165

README.md CHANGED
@@ -281,168 +281,3 @@ Here are guides on using llama-cpp-python and ctransformers with LangChain:

  <!-- footer start -->
  <!-- 200823 -->
- ## Discord
-
- For further support, and discussions on these models and AI in general, join us at:
-
- [TheBloke AI's Discord server](https://discord.gg/theblokeai)
-
- ## Thanks, and how to contribute
-
- Thanks to the [chirper.ai](https://chirper.ai) team!
-
- Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
-
- I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
-
- If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
-
- Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
-
- * Patreon: https://patreon.com/TheBlokeAI
- * Ko-Fi: https://ko-fi.com/TheBlokeAI
-
- **Special thanks to**: Aemon Algiz.
-
- **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
-
-
- Thank you to all my generous patrons and donators!
-
- And thank you again to a16z for their generous grant.
-
- <!-- footer end -->
-
- <!-- original-model-card start -->
- # Original model card: Mistral AI's Mixtral 8X7B Instruct v0.1
-
- # Model Card for Mixtral-8x7B
- The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
-
- For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
-
- ## Warning
- This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
-
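As a rough illustration of the vLLM route mentioned in the warning above (this sketch is not from the original card; the sampling settings and `tensor_parallel_size` value are placeholder assumptions for a multi-GPU setup):

```python
# Minimal sketch: offline inference of Mixtral-8x7B-Instruct with vLLM.
# Assumes vLLM is installed and enough GPU memory is available;
# tensor_parallel_size=2 is only a placeholder for a 2-GPU machine.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mixtral-8x7B-Instruct-v0.1", tensor_parallel_size=2)
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["[INST] Hello, who are you? [/INST]"], params)
print(outputs[0].outputs[0].text)
```
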
- ## Instruction format
-
- This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
-
- The template used to build a prompt for the Instruct model is defined as follows:
- ```
- <s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
- ```
- Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.
-
- For reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
- ```python
- def tokenize(text):
-     return tok.encode(text, add_special_tokens=False)
-
- [BOS_ID] +
- tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
- tokenize(BOT_MESSAGE_1) + [EOS_ID] +
-
- tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
- tokenize(BOT_MESSAGE_N) + [EOS_ID]
- ```
-
- In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
-
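For illustration only (not part of the original card), the same `[INST]`/`[/INST]` layout can also be produced with the tokenizer's built-in chat template instead of building the string by hand; a minimal sketch, where the message contents are placeholders:

```python
from transformers import AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A short multi-turn conversation; the content strings are placeholders.
messages = [
    {"role": "user", "content": "Instruction"},
    {"role": "assistant", "content": "Model answer"},
    {"role": "user", "content": "Follow-up instruction"},
]

# apply_chat_template inserts the [INST] ... [/INST] markers and the
# BOS/EOS special tokens, returning token IDs ready for generate().
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
print(tokenizer.decode(input_ids[0]))
```
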
- ## Run the model
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- model = AutoModelForCausalLM.from_pretrained(model_id)
-
- text = "Hello my name is"
- inputs = tokenizer(text, return_tensors="pt")
-
- outputs = model.generate(**inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
-
- By default, transformers loads the model in full precision. You might therefore want to further reduce the memory requirements for running the model through the optimizations offered in the HF ecosystem:
-
- ### In half-precision
-
- Note that `float16` precision only works on GPU devices.
-
- <details>
- <summary> Click to expand </summary>
-
- ```diff
- + import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
-
- text = "Hello my name is"
- + inputs = tokenizer(text, return_tensors="pt").to(0)
-
- outputs = model.generate(**inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
- </details>
-
- ### Lower precision (8-bit & 4-bit) using `bitsandbytes`
-
- <details>
- <summary> Click to expand </summary>
-
- ```diff
- + import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
-
- text = "Hello my name is"
- + inputs = tokenizer(text, return_tensors="pt").to(0)
-
- outputs = model.generate(**inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
- </details>
-
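As a side note (not from the original card), recent transformers versions express the same 4-bit setup through a `BitsAndBytesConfig` object; a minimal sketch, assuming `bitsandbytes` and `accelerate` are installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Equivalent of load_in_4bit=True, with the compute dtype made explicit.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

inputs = tokenizer("Hello my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
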
- ### Load the model with Flash Attention 2
-
- <details>
- <summary> Click to expand </summary>
-
- ```diff
- + import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
-
- text = "Hello my name is"
- + inputs = tokenizer(text, return_tensors="pt").to(0)
-
- outputs = model.generate(**inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
- </details>
-
- ## Limitations
-
- The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
- It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
- make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
-
- # The Mistral AI Team
- Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
-
- <!-- original-model-card end -->
 