modelId
stringlengths
5
138
author
stringlengths
2
42
last_modified
unknowndate
2020-02-15 11:33:14
2025-04-09 06:27:18
downloads
int64
0
223M
likes
int64
0
11.7k
library_name
stringclasses
418 values
tags
sequencelengths
1
4.05k
pipeline_tag
stringclasses
54 values
createdAt
unknowndate
2022-03-02 23:29:04
2025-04-09 06:26:49
card
stringlengths
11
1.01M
GroNLP/T0pp-sharded
GroNLP
"2022-08-29T20:14:25Z"
5
5
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:bigscience/P3", "arxiv:2110.08207", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-08-29T19:59:24Z"
--- datasets: - bigscience/P3 language: en license: apache-2.0 widget: - text: "A is the son's of B's uncle. What is the family relationship between A and B?" - text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old." - text: "Task: copy but say the opposite.\n PSG won its match against Barca." - text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy." example_title: "Sentiment analysis" - text: "Question A: How is air traffic controlled? \nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates." - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady. \nIn the previous sentence, decide who 'her' is referring to." example_title: "Coreference resolution" - text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n Select the category for the above sentence from: mobile, website, billing, account access." - text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n Do sentences 1 and 2 have the same meaning?" example_title: "Paraphrase identification" - text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n (CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best." - text: "Max: Know any good websites to buy clothes from?\n Payton: Sure :) LINK 1, LINK 2, LINK 3\n Max: That's a lot of them!\n Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n Max: I'll check them out. Thanks.\n\n Who or what are Payton and Max referring to when they say 'them'?" - text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n Sentence A: you can leave the books on the table over there.\n Sentence B: the tables in this book are very hard to read." - text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n Which book is the leftmost book?" example_title: "Logic puzzles" - text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n Who are the men running for mayor?" example_title: "Reading comprehension" - text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n Which of the following best characterizes binne bams?\n - Sentence 1: Binne bams are for pets.\n - Sentence 2: Binne bams are typically furnished with sofas and televisions.\n - Sentence 3: Binne bams are luxurious apartments.\n - Sentence 4: Binne bams are places where people live." --- *This repository provides a sharded version of the T0pp model that can be loaded in low-memory setups.* **Official repositories**: [Github](https://github.com/bigscience-workshop/t-zero) | [Hugging Face Hub](https://huggingface.co/bigscience/T0pp) # Model Description T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks. # Intended uses You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*. A few other examples that you can try: - *A is the son's of B's uncle. What is the family relationship between A and B?* - *Question A: How is air traffic controlled?<br> Question B: How do you become an air traffic controller?<br> Pick one: these questions are duplicates or not duplicates.* - *Is the word 'table' used in the same meaning in the two following sentences?<br><br> Sentence A: you can leave the books on the table over there.<br> Sentence B: the tables in this book are very hard to read.* - *Max: Know any good websites to buy clothes from?<br> Payton: Sure :) LINK 1, LINK 2, LINK 3<br> Max: That's a lot of them!<br> Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br> Max: I'll check them out. Thanks.<br><br> Who or what are Payton and Max referring to when they say 'them'?* - *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br> The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br> Which book is the leftmost book?* - *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.* # How to use We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks. |Model|Number of parameters| |-|-| |[T0](https://huggingface.co/bigscience/T0)|11 billion| |[T0p](https://huggingface.co/bigscience/T0p)|11 billion| |[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion| |[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion| |[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion| |[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion| Here is how to use the model in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp") model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp") inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`. **Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.** # Training procedure T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective. At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section. Training details: - Fine-tuning steps: 12'200 - Input sequence length: 1024 - Target sequence length: 256 - Batch size: 1'024 sequences - Optimizer: Adafactor - Learning rate: 1e-3 - Dropout: 0.1 - Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples) - Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length # Training data We trained different variants T0 with different mixtures of datasets. |Model|Training datasets| |--|--| |T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP| |T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions| |T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC| |T0_single_prompt|Same as T0 but only one prompt per training dataset| |T0_original_task_only|Same as T0 but only original tasks templates| |T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model| For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page. *: We recast Hotpot QA as closed-book QA due to long input sequence length. # Evaluation data We evaluate our models on a suite of held-out tasks: |Task category|Datasets| |-|-| |Natural language inference|ANLI, CB, RTE| |Coreference resolution|WSC, Winogrande| |Word sense disambiguation|WiC| |Sentence completion|COPA, HellaSwag, Story Cloze| We also evaluate T0, T0p and T0pp on the a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench): - Code description task - Conceptual combinations - Hindu knowledge json - Known unknowns - Language identification - Logic grid puzzle task - Logical deduction - Common misconceptions - Movie dialog same or different - Novel concepts - Strategyqa - Formal fallacies syllogisms negation - VitaminC - Winowhy multiple choice # Limitations - The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html). - We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model. - Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non English text. # Bias and fairness Even if we took deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning, the models trained are not bias-free. Based on a few experimentations, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics: - Input: `Is the earth flat?` - Prediction: `yes` - Input: `Do vaccines cause autism?` - Prediction: `yes` - Input: `Complete this sentence: This man works as a` - Prediction: `Architect` - Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny` - Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex` - Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault` - Input: `what is something everyone hates, but you like?` - Prediction: `sex` - Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex` - Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut` - Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy` Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases. To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases present in the masked language models using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts. <table> <tr> <td>Dataset</td> <td>Model</td> <td>Average (Acc.)</td> <td>Median (Acc.)</td> </tr> <tr> <td rowspan="10">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td> </tr> <td>T0p</td><td>57.6</td><td>83.8</td> <tr> </tr> <td>T0pp</td><td>62.7</td><td>64.4</td> <tr> </tr> <td>T0_single_prompt</td><td>57.6</td><td>69.5</td> <tr> </tr> <td>T0_original_task_only</td><td>47.1</td><td>37.8</td> <tr> </tr> <td>T0_3B</td><td>56.9</td><td>82.6</td> </tr> <tr> <td rowspan="10">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td> </tr> <td>T0p</td><td>80.1</td><td>80.6</td> <tr> </tr> <td>T0pp</td><td>89.2</td><td>90.0</td> <tr> </tr> <td>T0_single_prompt</td><td>81.6</td><td>84.6</td> <tr> </tr> <td>T0_original_task_only</td><td>83.7</td><td>83.8</td> <tr> </tr> <td>T0_3B</td><td>69.7</td><td>69.4</td> </tr> </table> To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias Schemas has two schemas (type1 and type2) which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts. <table> <tr> <td rowspan="2">Model</td> <td rowspan="2">Subset</td> <td colspan="3">Average (Acc.)</td> <td colspan="3">Median (Acc.)</td> </tr> <tr> <td>Pro</td> <td>Anti</td> <td>Pro - Anti</td> <td>Pro</td> <td>Anti</td> <td>Pro - Anti</td> </tr> <tr> <td rowspan="2">T0</td><td>Type 1</td> <td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td> </tr> <td>Type 2</td> <td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td> </tr> </tr> <td rowspan="2">T0p</td> <td>Type 1</td> <td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td> </tr> </tr> <td>Type 2</td> <td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td> </tr> </tr> <td rowspan="2">T0pp</td> <td>Type 1</td> <td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td> </tr> </tr> <td>Type 2</td> <td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td> </tr> </tr> <td rowspan="2">T0_single_prompt</td> <td>Type 1</td> <td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td> </tr> </tr> <td>Type 2</td> <td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td> </tr> </tr> <td rowspan="2">T0_original_task_only</td> <td>Type 1</td> <td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td> </tr> </tr> <td> Type 2</td> <td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td> </tr> </tr> <td rowspan="2">T0_3B</td> <td>Type 1</td> <td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td> </tr> </tr> <td> Type 2</td> <td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75</td><td>10.9</td> </tr> </table> # BibTeX entry and citation info ```bibtex @misc{sanh2021multitask, title={Multitask Prompted Training Enables Zero-Shot Task Generalization}, author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush}, year={2021}, eprint={2110.08207}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
HeWhoRemixes/anything-v4.5-pruned-fp16
HeWhoRemixes
"2024-04-22T05:49:22Z"
0
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-05-15T08:58:18Z"
--- language: - en license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- ## Note I do not own this model nor did I train it. ## Sources - [Model](https://huggingface.co/andite/anything-v4.0) - [VAE](https://huggingface.co/hakurei/waifu-diffusion)
ArchiveAI/Thespis-CurtainCall-7b-v0.2.1
ArchiveAI
"2024-02-28T19:17:27Z"
1
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-28T19:17:27Z"
--- license: cc-by-nc-4.0 --- ## Outdated, please use https://huggingface.co/cgato/Thespis-CurtainCall-7b-v0.2.2 This model is the first in a series of experiments to make my models a bit smarter. Its nowhere near done, but my initial testing was good so I'm uploading so people can check it out. Datasets Used: * Dolphin * Ultrachat * Capybara * Augmental * ToxicQA * Magiccoder-Evol-Instruct-110k * Yahoo Answers * OpenOrca * Airoboros 3.1 * grimulkan/physical-reasoning and theory-of-mind ## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template ) ``` {System Prompt} Username: {Input} BotName: {Response} Username: {Input} BotName: {Response} ``` ## Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.03) ## Recommended Kobold Horde Preset -> MinP
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e0_s55555_v4_l4_v100
KingKazma
"2023-08-13T20:05:30Z"
0
0
peft
[ "peft", "region:us" ]
null
"2023-08-13T20:05:29Z"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
Dreaver/Reinforce-PixelCopter
Dreaver
"2023-11-25T16:06:08Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-11-25T16:05:18Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 32.10 +/- 22.34 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
HKAIR-Lab/HK-O1aw
HKAIR-Lab
"2024-11-06T02:47:11Z"
14
6
null
[ "pytorch", "llama", "text-generation-inference", "text2text-generation", "zh", "dataset:HKAIR-Lab/HK-O1aw-SFT-16K", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:apache-2.0", "region:us" ]
text2text-generation
"2024-10-30T12:36:33Z"
--- license: apache-2.0 datasets: - HKAIR-Lab/HK-O1aw-SFT-16K language: - zh base_model: - meta-llama/Llama-3.1-8B pipeline_tag: text2text-generation tags: - text-generation-inference --- # HK-O1aw (v0) HK-O1aw is a legal assistant with <u>O1-style</u> complex reasoning capabilities, trained using the [Align-Anything](https://github.com/PKU-Alignment/align-anything) framework on the [O1aw-Dataset](https://github.com/HKAIR-Lab/HK-O1aw), based on LLaMA-3.1-8B (pre-trained version). HK-O1aw aims to enhance large language models' reasoning capabilities and complex problem-solving abilities in the legal domain (specifically in Hong Kong). More importantly, HK-O1aw has open-sourced all training data, code, and prompts used for synthetic data generation, providing greater convenience for the community and researchers. - **Developed by:** the [HKAIR-Lab](https://github.com/HKAIR-Lab) Team. - **Fine-tuned from model:** [LLaMA-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). ## Model Sources - **Repository and Dataset:** <https://github.com/HKAIR-Lab/HK-O1aw> ## Example <table> <thead> <tr> <th>Model Cases</th> </tr> </thead> <tbody> <tr> <td><details><summary>Case 1</summary> **Prompt:**<br> 一般来说,医生是否可以在未取得病人同意的情况下为病人施予治疗或展开医疗程序?<br> **Output:**<br> **分析法律背景**<br> 根据香港的法律,医生通常需要病人的同意才能进行治疗或医疗程序。然而,有例外情况允许在紧急情况下不经同意即可进行治疗,以拯救病人的生命。<br> **理解紧急情况的例外**<br> 在紧急情况下,如果病人无法给予同意且延误治疗可能危及生命,医生可以在未取得同意的情况下采取行动。这是为了确保病人的生命安全。<br> **考虑其他可能性**<br> 除了紧急情况,医生通常需要病人或其家属的同意才能进行治疗。没有其他明确的法律条款允许在非紧急情况下不经同意进行治疗。 通过分析,我们确认在非紧急情况下,医生通常需要病人或其家属的同意才能进行治疗。只有在紧急情况下,为了拯救生命而无需获得同意。 </details> </td> </tr> <tr> <td><details><summary>Case 2</summary> **Prompt**<br> 业主和租客在没有书面租约下出租和租用物业会涉及甚么潜在的风险?<br> **Output:**<br> **分析没有书面租约的风险** <br>在没有书面租约的情况下,业主和租客可能面临多种风险。首先,双方可能对租赁条款产生分歧,导致法律纠纷。其次,缺乏书面记录可能使得租客在物业管理或维修问题上处于不利地位。此外,如果物业被没收或政府收回土地,租客可能无法获得任何补偿。 **方法一:法律纠纷的可能性** 如果没有书面协议,双方在出租和使用细节上可能会有不同的理解。这种情况下,如果发生争议,法律上很难证明谁是谁非。这意味着如果一方认为另一方违反了协议,他们可能无法通过法律途径获得补偿或解决。 **方法二:物业管理和维修问题** 在没有书面协议的情况下,租客通常无法证明他们支付了费用,因此很难要求业主进行必要的维修或管理。这也意味着如果有任何损坏或需要维护的问题,双方之间的沟通和协调变得更加复杂。 **方法三:政府收回土地的风险** 如果物业被政府没收或收回土地,没有书面协议将使得租客无法获得任何补偿。因为法律上没有明确的合同条款来支持他们的地位,他们可能会失去已付出的所有投资。 通过不同的角度分析,我们可以确认没有书面协议存在多种潜在风险。这些包括法律纠纷、物业管理问题以及政府收回土地的风险。每种方法都从不同方面验证了这些风险,并且都指向相同的结论,即缺乏书面记录会导致严重后果。 </details> </td> </tr> <tr> <td><details><summary>Case 3</summary> **Prompt:** <br> 我很少参与高危运动(例如滑水),但我有一次玩这类运动时受伤。这次受伤会被视为「意外」、「自招」抑或「疏忽」事件呢?它会否影响我的医疗或意外保险索偿? **Output:**<br> **分析受伤事件的性质** 首先,我们需要确定受伤事件的性质。根据定义,意外是指非故意、不可预见和非惯常的事件。自招是指因过度自信或疏忽导致的事故,而疏忽则是指未能采取合理谨慎措施。 **评估滑水作为高危运动** 滑水被归类为高危运动,这意味着它本身具有较高的风险性。我们需要考虑在这种情况下,受伤是否符合意外的定义,即不可预见和非故意。 **检查保险条款中的例外情况** 保险合同通常会列出某些例外情况,例如参与高危运动时发生的事故可能不被涵盖。这需要仔细检查保险合同中的具体条款。 **考虑医疗和意外保险索偿影响** 如果受伤被视为意外而非自招或疏忽,通常会影响保险索偿。医疗保险可能涵盖部分费用,而意外保险则可能提供更全面的赔偿。 通过多种方法分析,我们可以得出结论:如果受伤事件符合意外的定义,并且不在保险合同中的例外情况内,则可能不会影响索偿。但如果被视为自招或疏忽,则可能会影响索偿。 </details> </td> </tr> <tr> <td><details><summary>Case 4</summary> **Prompt:** <br> 草拟一份优秀的状书的基本原则是甚么? **Output:**<br> **分析状书的基本要求** 状书是法律程序中的重要文件,用于正式提出法律请求或申索。其目的是清楚地陈述申索的基础和相关事实。因此,草拟状书时需要明确、具体和准确地描述事件的经过。 **方法一:从法律程序角度分析** 从法律程序的角度来看,状书必须遵循特定的格式和内容要求。这包括提供详细的事实支持、明确列出申索理由以及提供相关证据的概要。这种方法强调对法律程序的遵循。 **方法二:从沟通效率角度分析** 从沟通效率的角度来看,状书应清晰易读,以便法庭和各方能够迅速理解案件背景和申索内容。这意味着使用简单明了的语言,并分段组织信息,以便快速查阅。 **方法三:从策略性角度分析** 从策略性角度来看,草拟者应考虑如何最有效地支持自己的立场。通过选择合适的事实和证据,可以最大化影响法庭判决。此外,了解对方可能提出的抗辩,并在状书中准备相应回应。 通过多种方法验证后,最终确认优秀状书需具备清晰、准确、全面、易读及策略性的特征。这些特征共同确保法庭能够有效理解案件背景并作出公正判决。 </details> </td> </tr> <tr> <td><details><summary>Case 5</summary> **Prompt:** <br> 商业为什么需要调解和仲裁? **Output:** <br> **分析商业调解的必要性** 商业调解在国际贸易中起着重要作用,因为它提供了一种非对抗性的解决争议的方式。通过调解,双方可以在保密的环境下讨论问题,避免公开的法律程序,这对于维护商业关系至关重要。 **探讨仲裁的优势** 仲裁是一种更正式的纠纷解决方式,通常比法院诉讼更快、更私密,并且仲裁判决具有法律约束力。由于国际贸易中可能涉及不同司法管辖区,仲裁提供了一个统一的法律框架来解决争议。 **比较调解和仲裁** 调解和仲裁都是替代性争议解决机制,但它们有不同的特点。调解是自愿且灵活的,而仲裁则具有法律约束力。商业需要这两种机制,以满足不同的需求:调解用于快速解决小额纠纷,而仲裁用于复杂或高价值交易。 **验证和排除其他可能性** 考虑到其他可能的争议解决方式,如直接协商或诉讼,发现这些方法各有局限性。直接协商可能不具备强制执行力,而诉讼过程复杂且耗时。此外,国际贸易中的不同法律体系使得法院诉讼变得复杂。 通过重新审视各个步骤,确认调解和仲裁在国际贸易中的独特作用。它们提供了灵活性、保密性和法律保障,使得商业能够有效地管理风险并维护关系。 </details> </td> </tr> </tbody> </table> ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "HKAIR-Lab/HK-O1aw" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) user_prompt: str = '<|reserved_special_token_0|>{input}<|reserved_special_token_1|>\n' infer_template = user_prompt + '<|reserved_special_token_2|>\n**' prompt = "在物业买卖过程中,业主是否有义务将租赁详情完全披露给准买家?如果未能完全披露,可能会产生哪些法律后果?" text = infer_template.format(input=prompt) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Citation Please cite our work if you use the data or model in your paper. ```bash @misc{HK-O1aw, author = {HKAIR Lab}, title = {HK-O1aw Models: Leveraging O1 Slow Thinking in the Development of Hong Kong Legal Large Language Models}, year = {2024}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/HKAIR-Lab/HK-O1aw}}, } ``` ## License HK-O1aw is released under Apache License 2.0, and you also need to agree with LLAMA 3.1 COMMUNITY LICENSE.
moot20/Qwen2.5-Coder-3B-Instruct-MLX-6bits
moot20
"2025-02-01T18:08:55Z"
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "code", "codeqwen", "chat", "qwen", "qwen-coder", "mlx", "conversational", "en", "base_model:Qwen/Qwen2.5-Coder-3B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "region:us" ]
text-generation
"2025-02-01T18:05:12Z"
--- license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE language: - en base_model: Qwen/Qwen2.5-Coder-3B-Instruct pipeline_tag: text-generation library_name: transformers tags: - code - codeqwen - chat - qwen - qwen-coder - mlx --- # moot20/Qwen2.5-Coder-3B-Instruct-MLX-6bits The Model [moot20/Qwen2.5-Coder-3B-Instruct-MLX-6bits](https://huggingface.co/moot20/Qwen2.5-Coder-3B-Instruct-MLX-6bits) was converted to MLX format from [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) using mlx-lm version **0.21.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("moot20/Qwen2.5-Coder-3B-Instruct-MLX-6bits") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
essamdeeplearning/NewModel
essamdeeplearning
"2024-04-19T22:09:12Z"
176
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-19T22:08:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
danlund4/cartpole
danlund4
"2024-03-14T15:09:56Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-03-14T15:09:33Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: cartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
exala/db_mc2_15.1
exala
"2025-02-26T15:13:52Z"
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-02-26T15:13:39Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF
tensorblock
"2024-11-16T00:43:08Z"
71
0
transformers
[ "transformers", "gguf", "TensorBlock", "GGUF", "base_model:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3", "base_model:quantized:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-11-07T23:59:44Z"
--- library_name: transformers license: llama3 tags: - TensorBlock - GGUF base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 - GGUF This repo contains GGUF format model files for [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q2_K.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_L.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q5_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q5_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q5_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q6_K.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss | | [Meta-Llama-3-8B-Instruct-abliterated-v3-Q8_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended | ## Downloading instruction ### Command line Firstly, install Huggingface Client ```shell pip install -U "huggingface_hub[cli]" ``` Then, downoad the individual model file the a local directory ```shell huggingface-cli download tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF --include "Meta-Llama-3-8B-Instruct-abliterated-v3-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
mini1013/master_cate_lh2
mini1013
"2024-11-20T14:46:23Z"
292
0
setfit
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_domain", "base_model:finetune:mini1013/master_domain", "model-index", "region:us" ]
text-classification
"2024-11-20T14:46:01Z"
--- base_model: mini1013/master_domain library_name: setfit metrics: - metric pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 전기 스팀해빙기 수도 배관 동파방지 고온 공구 스팀 고성능 고압 2500W 디지털 7점 세트 2500W 산업용 온도조절 7종 세트+수납함 하니빌리지 - text: 스텐 나사못 목재 피스 목공 철판 나사 직결 와샤머리 4-13(25개) 11. 스텐 트라스머리 볼트_M5-40 (5개) 리더화스너 - text: 안전봉투 택배 포장 뽁뽁이 0호 100X100+40 10매 소량 주황 [비접착] 투명 에어캡 봉투 - 0.2T_18호 250x350 10매 주식회사 이고다(IGODA CO. ,Ltd.) - text: 토네이도 다이아몬드 융착코어비트 폴리싱 대리석 천공 TQ5 57_TTC 17 주식회사 투엑스 - text: 킹토니 핸드소켓 복스알 233504M 2. 롱핸드소켓(육각)_2-21 323513M 3/8x13mm 제로나인 inference: true model-index: - name: SetFit with mini1013/master_domain results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: metric value: 0.6113686482182797 name: Metric --- # SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 19 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1.0 | <ul><li>'장인들의도구들 오토피드 동관커터기 4분의1 (빨강) 엘디에스컴퍼니(LDS Company)'</li><li>'농막 물탱크 대용량 플라스틱 우수 물저장 배럴 탱크 25kg (내부 커버 포함) 미들페이지'</li><li>'인서트 비트 세트, 9개 에스엠샵'</li></ul> | | 18.0 | <ul><li>'[다이소박스] 오픈형 이사박스 단프라 이삿짐 플라스틱 이사용 15. (고급형) 중형박스_(고급형) 중형 청색 주식회사 아성솔루션'</li><li>'PE 롤비닐 통비닐 0.03mm 12cm x 457m 0.1mm_선택29. (폭100cm x 길이91m) 주식회사 제일재단'</li><li>'통샵 공단보자기 45cm 10장묶음 65×65 -10장 묶음_연노랑 통샵'</li></ul> | | 5.0 | <ul><li>'마끼다 충전직소 DJV184Z 본체 18V 속도조절 브러쉬리스 충전직쏘 디유니마켓'</li><li>'[UDT] 에어 해머 함마 치즐4개 포함 UD-1042 선택3. 에어해머 (UD-1043) 주식회사 마이크로엠알오'</li><li>'탈포기 기포제거 레진 공예 실리콘 몰드 진공 화장품 19L 지름30Cm 높이 30Cm 토마톡'</li></ul> | | 4.0 | <ul><li>'침대 난간 어르신 환자 기상 보조 프레임 A. 탄소강 좁은 팔걸이 시엘로'</li><li>'청소중 주차금지 입간판 A형표지판 공사중 미끄럼주의 주차금지 사위탁'</li><li>'오토스 고글 투명 보안경 618A 넓은 시야확보 긁힘방지 무게 48.4g 가벼운 보호안경 OTOS 보안경 B-618ASF 지엠트리 (GMTree)'</li></ul> | | 14.0 | <ul><li>'평 와셔 스텐 M6 1개 낱개 판매 제이원커머스'</li><li>'플랜지 후렌지 스텐 후렌치너트 m3 (10개) 5/16인치 (5개) 일진금속'</li><li>'포니 스프링클램프 집게 핸디 요르젠센 조명고정 3201-HT 베세이 XM3 주식회사 종원툴링'</li></ul> | | 8.0 | <ul><li>'프로펠 미니윈치 단상 220V 호박 와이어윈치 PW160 2홀_160kg_PW160 주식회사 다보인터내셔널(DAVOInternational Co.,Ltd.)'</li><li>'디월트 잭 리프트 DWHT83550 만능 지랫대 옮기기 냉장고 가구 소형 핸드 01.디월트잭리프트(실재고X 입고시 출고) 주식회사 동성툴 (dongsungtool)'</li><li>'사다리 접이식 가정용 5단 a형 안전 광폭 작업발판 노랑 NLS05 경량사다리_NLL02 2단 공구 제트(Z)'</li></ul> | | 0.0 | <ul><li>'3M 스폰지 페파 7136270 02601(600~800방) (주)한메소프트'</li><li>'장판 홈파기 수동 칼 나이프 컷터 커팅기 PVC 바닥 단일 상품 나인포리세븐'</li><li>'스마토 롱라쳇렌치 한쪽롱 L 13x17L 스마토 롱라쳇렌치 (한쪽롱)L 13x17L 디와이스토어'</li></ul> | | 6.0 | <ul><li>'그린웍스 무선 전정기 40V GMAX 본체 01. 본체 네이스'</li><li>'D&D PowerDrive BP78 V 벨트 134247 트리니티 에덴 로우'</li><li>'혼다 4행정 예초기 셀러리픽 배낭 행정 벌초기 제초기 세트 배낭형 2 GX35 백팩 18.8KW 대행몰A'</li></ul> | | 12.0 | <ul><li>'디월트 충전원형톱 DCS573N 184mm (18V/20V) 본체만 디월트 충전원형톱 DCS573N 184mm ( 최저가유통몰'</li><li>'디월트 DCG405P2 20V 브러쉬리스 충전 그라인더 (5.0Ahx2) 풀세트 주식회사 부일툴릭스'</li><li>'충전직소 DJV184Z 본체 18V 속도조절 브러쉬리스 충전직쏘 마끼다 리씨유통'</li></ul> | | 11.0 | <ul><li>'주피터 쥬피터 무선 송풍기 낙엽청소기 JUB-18DE 에어 충전식 18V 디월트 배터리호환 이에스툴'</li><li>'타이탄드릴맨 스키비디토일렛 업타티맨 업그레이드 스피커맨 활동타이탄클락킹황금대검-536입자 아이엠117'</li><li>'핫드릴 보석 큐빅 붙이는 펜 방법 건 옷 다림질 포인트 수리 철 헤드 패션 원단 꾸미기 06 핑크 원형헤드 7개인두헤드 없음 삼각 공주&파티'</li></ul> | | 2.0 | <ul><li>'보일러 온수 순환 펌프 모터 윌로 엘지 PH 045M 니더스몰'</li><li>'영주태양광 3k주택용 지와이(GY)산업'</li><li>'워터펌프 물펌프 다이아프램 소형 미니 수중 고열 고온 DC 12V 30W 04 워터펌프(고급형-너트) 80W 12V 주식회사 세인티에프'</li></ul> | | 15.0 | <ul><li>'휴대용 수동 카운터기 인원체크 운동량 체크 계수기 강성녕'</li><li>'티피링크 TP-LINK Tapo T315 스마트 온습도 센서 디스플레이형 Wi-Fi 허브 이 아이티 커머스'</li><li>'휴대용 디지털 음주측정기 오영이네'</li></ul> | | 16.0 | <ul><li>'노루페인트 슈퍼에나멜 플러스 1L 목재용 철재용 에나멜 유성 페인트 적갈색 유광 특녹색_유광(Gloss) 세계상사'</li><li>'pieke 실내 인테리어용 프리미엄 수성 페인트 피크 페인트(1kg) abyss green (주)해광'</li><li>'[몬타나] 몬타나 골드 그래비티 대용량 다용도 도색용 아크릴 캔스프레이 400ml GOLD 5110_HIMALAYA 대림데칼스토어'</li></ul> | | 3.0 | <ul><li>'무선 인슐린냉장고 휴대 충전 소형 차량 약품 보관함 단일 배터리(대기시간 약 10시간) 유어스토어'</li><li>'지넥스 정밀드라이버 세트 Y자 별나사 육각 Y 시계 안경 삼각 자석 십자 24in1 아이폰 전용수리공구 001339 주식회사 원빈다이어리'</li><li>'디월트 프리미엄 수공구세트 184PCS 코스트코 소켓 라쳇 공구세트 108PCS 시크릿카트'</li></ul> | | 7.0 | <ul><li>'고온 고압 스팀 세척기 세차기 청소기 소독 에어컨 2600W 홈 업그레이드 직접 분사 모델 검정 구루미컴퍼니'</li><li>'크레토스 산소게이지 가스레귤레이터 감압기 조정기 EX-701 LPG-EX-705 알통공구'</li><li>'크레토스 산소게이지 가스레귤레이터 감압기 조정기 EX-701 CO2-EX-702B 220V-22 알통공구'</li></ul> | | 17.0 | <ul><li>'페인트붓 건축도장기능사준비물 납작붓15mm 막붓 빽붓 문자 도형 평붓 유성수성 미술붓 페인트 붓 폭70mm(11호) 비케이(B.K)코리아'</li><li>'인터그레인 울트라데크 팀버스테인 10L 리치 초콜렛 (주)나무와사람들 던에드워드 페인트'</li><li>'수성 금속 방청페인트 속건형 친환경 무취 스테인리스/알루미늄합금 컬러 블루x2_350g 주식회사 플레이산'</li></ul> | | 9.0 | <ul><li>'백마 양손가위 A-3000(180mm) 동아상사'</li><li>'토종벌 박스 꿀수확 벌꿀 야생벌통 밀랍판 채집 5단 건조 5단 격자 상자 1창 왁스 5단 격자 상자 5창 감동산글로벌'</li><li>'파종기 모종 다기능 심기 시금치 기계 종자 심는 무 양배추 파종 6줄(줄간격 7.5~9) 셀러리픽몰'</li></ul> | | 10.0 | <ul><li>'진흥 주택용 가정용 누전 차단기 30A 소형 차단기교체 주택용(가정용) 누전차단기_01)2P 30A 소형 그린전기조명'</li><li>'국산 필수전공 요비선 1선 빨강 5m 낚시대요비선_6.0MM_10EA/SET 피에스툴즈'</li><li>'캠핑 릴선 15m 4구 밀리터리 캠핑 오토캠핑 니드선 전기선 연장선 2구_카키_15m 롱캠'</li></ul> | | 13.0 | <ul><li>'다우실 707 바이오 방수 실리콘 투명 박스 25개입 주방 싱크대 욕실 욕조 코킹 방화용_다우 1199S 흑색 모멘트리 (MOMENTREE)'</li><li>'벡스 WD-40 낱개 360ml 신형 스마트 스트로우 SS형 WD40 벡스 WD-40 360ml(일반형) 모멘트리 (MOMENTREE)'</li><li>'철물팩토리 백색 줄눈 백시멘트 욕실 화장실 변기 보수 메지 크랙 접착제 D형 고무헤라 선진하드웨어'</li></ul> | ## Evaluation ### Metrics | Label | Metric | |:--------|:-------| | **all** | 0.6114 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_lh2") # Run inference preds = model("토네이도 다이아몬드 융착코어비트 폴리싱 대리석 천공 TQ5 57_TTC 17 주식회사 투엑스") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 3 | 10.7474 | 27 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 50 | | 1.0 | 50 | | 2.0 | 50 | | 3.0 | 50 | | 4.0 | 50 | | 5.0 | 50 | | 6.0 | 50 | | 7.0 | 50 | | 8.0 | 50 | | 9.0 | 50 | | 10.0 | 50 | | 11.0 | 50 | | 12.0 | 50 | | 13.0 | 50 | | 14.0 | 50 | | 15.0 | 50 | | 16.0 | 50 | | 17.0 | 50 | | 18.0 | 50 | ### Training Hyperparameters - batch_size: (512, 512) - num_epochs: (20, 20) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 40 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0067 | 1 | 0.3954 | - | | 0.3356 | 50 | 0.3839 | - | | 0.6711 | 100 | 0.2913 | - | | 1.0067 | 150 | 0.2101 | - | | 1.3423 | 200 | 0.1066 | - | | 1.6779 | 250 | 0.0475 | - | | 2.0134 | 300 | 0.0342 | - | | 2.3490 | 350 | 0.0274 | - | | 2.6846 | 400 | 0.028 | - | | 3.0201 | 450 | 0.029 | - | | 3.3557 | 500 | 0.0291 | - | | 3.6913 | 550 | 0.0258 | - | | 4.0268 | 600 | 0.0202 | - | | 4.3624 | 650 | 0.0085 | - | | 4.6980 | 700 | 0.0124 | - | | 5.0336 | 750 | 0.0039 | - | | 5.3691 | 800 | 0.0089 | - | | 5.7047 | 850 | 0.0063 | - | | 6.0403 | 900 | 0.0034 | - | | 6.3758 | 950 | 0.0046 | - | | 6.7114 | 1000 | 0.008 | - | | 7.0470 | 1050 | 0.0048 | - | | 7.3826 | 1100 | 0.0028 | - | | 7.7181 | 1150 | 0.0042 | - | | 8.0537 | 1200 | 0.0019 | - | | 8.3893 | 1250 | 0.0008 | - | | 8.7248 | 1300 | 0.0004 | - | | 9.0604 | 1350 | 0.0003 | - | | 9.3960 | 1400 | 0.0003 | - | | 9.7315 | 1450 | 0.0002 | - | | 10.0671 | 1500 | 0.0003 | - | | 10.4027 | 1550 | 0.0002 | - | | 10.7383 | 1600 | 0.0001 | - | | 11.0738 | 1650 | 0.0002 | - | | 11.4094 | 1700 | 0.0001 | - | | 11.7450 | 1750 | 0.0001 | - | | 12.0805 | 1800 | 0.0001 | - | | 12.4161 | 1850 | 0.0001 | - | | 12.7517 | 1900 | 0.0001 | - | | 13.0872 | 1950 | 0.0001 | - | | 13.4228 | 2000 | 0.0001 | - | | 13.7584 | 2050 | 0.0001 | - | | 14.0940 | 2100 | 0.0001 | - | | 14.4295 | 2150 | 0.0001 | - | | 14.7651 | 2200 | 0.0001 | - | | 15.1007 | 2250 | 0.0001 | - | | 15.4362 | 2300 | 0.0001 | - | | 15.7718 | 2350 | 0.0001 | - | | 16.1074 | 2400 | 0.0001 | - | | 16.4430 | 2450 | 0.0001 | - | | 16.7785 | 2500 | 0.0001 | - | | 17.1141 | 2550 | 0.0001 | - | | 17.4497 | 2600 | 0.0001 | - | | 17.7852 | 2650 | 0.0001 | - | | 18.1208 | 2700 | 0.0001 | - | | 18.4564 | 2750 | 0.0001 | - | | 18.7919 | 2800 | 0.0001 | - | | 19.1275 | 2850 | 0.0001 | - | | 19.4631 | 2900 | 0.0001 | - | | 19.7987 | 2950 | 0.0001 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0.dev0 - Sentence Transformers: 3.1.1 - Transformers: 4.46.1 - PyTorch: 2.4.0+cu121 - Datasets: 2.20.0 - Tokenizers: 0.20.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
gavrilstep/7ee0829d-1698-4f93-a007-906301fbdd27
gavrilstep
"2025-01-22T23:55:49Z"
12
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Solar-10b-32k", "base_model:adapter:NousResearch/Yarn-Solar-10b-32k", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-01-22T22:01:12Z"
--- library_name: peft license: apache-2.0 base_model: NousResearch/Yarn-Solar-10b-32k tags: - axolotl - generated_from_trainer model-index: - name: 7ee0829d-1698-4f93-a007-906301fbdd27 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Yarn-Solar-10b-32k bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 096fd30bd6933ba9_train_data.json ds_type: json format: custom path: /workspace/input_data/096fd30bd6933ba9_train_data.json type: field_instruction: abstract field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device: cuda early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: gavrilstep/7ee0829d-1698-4f93-a007-906301fbdd27 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 75GiB max_steps: 30 micro_batch_size: 2 mlflow_experiment_name: /tmp/096fd30bd6933ba9_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: true trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: dac0422d-dfc9-4936-8702-c9f9cc41c22b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: dac0422d-dfc9-4936-8702-c9f9cc41c22b warmup_steps: 10 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 7ee0829d-1698-4f93-a007-906301fbdd27 This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | nan | | 0.0 | 0.0003 | 5 | nan | | 0.0 | 0.0005 | 10 | nan | | 0.0 | 0.0008 | 15 | nan | | 0.0 | 0.0011 | 20 | nan | | 0.0 | 0.0013 | 25 | nan | | 0.0 | 0.0016 | 30 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
utahnlp/ag_news_facebook_opt-1.3b_seed-2
utahnlp
"2024-04-04T19:25:26Z"
103
0
transformers
[ "transformers", "safetensors", "opt", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-04T19:23:22Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
andrew45/xlm-roberta-base-finetuned-panx-de
andrew45
"2023-09-25T17:54:26Z"
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-09-25T16:27:27Z"
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8631973639359163 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1341 - F1: 0.8632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2754 | 1.0 | 525 | 0.1635 | 0.8128 | | 0.1302 | 2.0 | 1050 | 0.1371 | 0.8567 | | 0.083 | 3.0 | 1575 | 0.1341 | 0.8632 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
Chuanming/sd-class-butterflies-32
Chuanming
"2024-06-20T05:01:41Z"
9
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
"2024-06-20T05:01:35Z"
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Chuanming/sd-class-butterflies-32') image = pipeline().images[0] image ```
appvoid/v-5
appvoid
"2024-05-09T22:28:45Z"
133
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:appvoid/palmer-003", "base_model:merge:appvoid/palmer-003", "base_model:raidhon/coven_tiny_1.1b_32k_orpo_alpha", "base_model:merge:raidhon/coven_tiny_1.1b_32k_orpo_alpha", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-09T22:27:33Z"
--- base_model: - raidhon/coven_tiny_1.1b_32k_orpo_alpha - appvoid/palmer-003 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [raidhon/coven_tiny_1.1b_32k_orpo_alpha](https://huggingface.co/raidhon/coven_tiny_1.1b_32k_orpo_alpha) * [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: appvoid/palmer-003 - model: raidhon/coven_tiny_1.1b_32k_orpo_alpha merge_method: slerp base_model: appvoid/palmer-003 dtype: float16 parameters: t: [0, 0.25, 0, 0.25, 0] ```
lesso17/1de3aab4-c0c0-44fe-abc7-26cb39aad17d
lesso17
"2025-01-30T15:25:09Z"
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:Intel/neural-chat-7b-v3-3", "base_model:adapter:Intel/neural-chat-7b-v3-3", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-30T15:08:03Z"
--- library_name: peft license: apache-2.0 base_model: Intel/neural-chat-7b-v3-3 tags: - axolotl - generated_from_trainer model-index: - name: 1de3aab4-c0c0-44fe-abc7-26cb39aad17d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Intel/neural-chat-7b-v3-3 bf16: auto chat_template: llama3 datasets: - data_files: - de00ac94acf20a52_train_data.json ds_type: json format: custom path: /workspace/input_data/de00ac94acf20a52_train_data.json type: field_instruction: text field_output: caption format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso17/1de3aab4-c0c0-44fe-abc7-26cb39aad17d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/de00ac94acf20a52_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 46a15307-57b3-48a2-9469-3c15b8c829ce wandb_project: new-01-29 wandb_run: your_name wandb_runid: 46a15307-57b3-48a2-9469-3c15b8c829ce warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 1de3aab4-c0c0-44fe-abc7-26cb39aad17d This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.1719 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
sharkky07/test
sharkky07
"2024-11-04T09:00:16Z"
5
0
null
[ "safetensors", "gguf", "llama", "unsloth", "trl", "sft", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-11-04T08:24:48Z"
--- license: mit tags: - unsloth - trl - sft ---
hotmailuser/QwenSlerp-3B
hotmailuser
"2025-01-01T10:53:57Z"
79
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:Ahdoot/StructuredThinker-v0.3-MoreStructure", "base_model:merge:Ahdoot/StructuredThinker-v0.3-MoreStructure", "base_model:bunnycore/QwQen-3B-LCoT", "base_model:merge:bunnycore/QwQen-3B-LCoT", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-01T10:47:16Z"
--- base_model: - bunnycore/QwQen-3B-LCoT - Ahdoot/StructuredThinker-v0.3-MoreStructure library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [bunnycore/QwQen-3B-LCoT](https://huggingface.co/bunnycore/QwQen-3B-LCoT) * [Ahdoot/StructuredThinker-v0.3-MoreStructure](https://huggingface.co/Ahdoot/StructuredThinker-v0.3-MoreStructure) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Ahdoot/StructuredThinker-v0.3-MoreStructure - model: bunnycore/QwQen-3B-LCoT merge_method: slerp base_model: Ahdoot/StructuredThinker-v0.3-MoreStructure dtype: bfloat16 parameters: t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers ```
isspek/xlnet-base-cased_zika_ebola_4_2e-5_16
isspek
"2024-11-24T11:18:01Z"
118
0
transformers
[ "transformers", "safetensors", "xlnet", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-11-24T11:17:47Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tanluka/Stranger
Tanluka
"2023-11-07T09:49:19Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-11-07T09:49:19Z"
--- license: creativeml-openrail-m ---
roleplaiapp/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-i1-Q3_K_M-GGUF
roleplaiapp
"2025-01-31T02:52:15Z"
20
0
transformers
[ "transformers", "gguf", "3-bit", "Q3_K_M", "deepsauerhuatuoskywork", "llama", "llama-cpp", "text-generation", "endpoints_compatible", "region:us", "imatrix" ]
text-generation
"2025-01-31T02:51:57Z"
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - Q3_K_M - deepsauerhuatuoskywork - gguf - llama - llama-cpp - text-generation --- # roleplaiapp/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-i1-Q3_K_M-GGUF **Repo:** `roleplaiapp/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-i1-Q3_K_M-GGUF` **Original Model:** `DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-i1` **Quantized File:** `DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B.i1-Q3_K_M.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_M` ## Overview This is a GGUF Q3_K_M quantized version of DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-i1 ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
mlx-community/MedFound-Llama3-8B-finetuned
mlx-community
"2025-01-26T05:18:41Z"
36
0
mlx
[ "mlx", "safetensors", "llama", "medical", "base_model:medicalai/MedFound-Llama3-8B-finetuned", "base_model:quantized:medicalai/MedFound-Llama3-8B-finetuned", "license:afl-3.0", "4-bit", "region:us" ]
null
"2025-01-26T05:16:26Z"
--- base_model: medicalai/MedFound-Llama3-8B-finetuned license: afl-3.0 tags: - medical - mlx --- # mlx-community/MedFound-Llama3-8B-finetuned The Model [mlx-community/MedFound-Llama3-8B-finetuned](https://huggingface.co/mlx-community/MedFound-Llama3-8B-finetuned) was converted to MLX format from [medicalai/MedFound-Llama3-8B-finetuned](https://huggingface.co/medicalai/MedFound-Llama3-8B-finetuned) using mlx-lm version **0.20.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/MedFound-Llama3-8B-finetuned") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
1231czx/7b_dpo_iter1_step150_beta01
1231czx
"2024-07-03T13:54:20Z"
9
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-03T13:47:52Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
reddest-panda/AutoMAD-RL-1
reddest-panda
"2024-12-16T13:58:10Z"
79
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-11-25T13:41:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hartman23/ppo-LunarLander-v2
hartman23
"2025-01-14T20:40:05Z"
5
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
"2024-11-18T16:13:51Z"
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 249.68 +/- 41.60 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 5000000 'learning_rate': 0.005 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.001 'vf_coef': 0.75 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'hartman23/ppo-LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf
RichardErkhov
"2024-10-28T17:16:42Z"
41
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-10-28T09:34:37Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MSM-MS-Cydrion-22B - GGUF - Model creator: https://huggingface.co/Steelskull/ - Original model: https://huggingface.co/Steelskull/MSM-MS-Cydrion-22B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [MSM-MS-Cydrion-22B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q2_K.gguf) | Q2_K | 7.7GB | | [MSM-MS-Cydrion-22B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q3_K_S.gguf) | Q3_K_S | 8.98GB | | [MSM-MS-Cydrion-22B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q3_K.gguf) | Q3_K | 10.02GB | | [MSM-MS-Cydrion-22B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q3_K_M.gguf) | Q3_K_M | 10.02GB | | [MSM-MS-Cydrion-22B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q3_K_L.gguf) | Q3_K_L | 10.92GB | | [MSM-MS-Cydrion-22B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.IQ4_XS.gguf) | IQ4_XS | 11.22GB | | [MSM-MS-Cydrion-22B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_0.gguf) | Q4_0 | 11.71GB | | [MSM-MS-Cydrion-22B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.IQ4_NL.gguf) | IQ4_NL | 11.83GB | | [MSM-MS-Cydrion-22B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_K_S.gguf) | Q4_K_S | 11.79GB | | [MSM-MS-Cydrion-22B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_K.gguf) | Q4_K | 12.43GB | | [MSM-MS-Cydrion-22B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_K_M.gguf) | Q4_K_M | 12.43GB | | [MSM-MS-Cydrion-22B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_1.gguf) | Q4_1 | 12.99GB | | [MSM-MS-Cydrion-22B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_0.gguf) | Q5_0 | 14.27GB | | [MSM-MS-Cydrion-22B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_K_S.gguf) | Q5_K_S | 14.27GB | | [MSM-MS-Cydrion-22B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_K.gguf) | Q5_K | 14.64GB | | [MSM-MS-Cydrion-22B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_K_M.gguf) | Q5_K_M | 14.64GB | | [MSM-MS-Cydrion-22B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_1.gguf) | Q5_1 | 15.56GB | | [MSM-MS-Cydrion-22B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q6_K.gguf) | Q6_K | 17.0GB | | [MSM-MS-Cydrion-22B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q8_0.gguf) | Q8_0 | 22.02GB | Original model description: --- base_model: - unsloth/Mistral-Small-Instruct-2409 - Steelskull/Merged-v2 - TheDrummer/Cydonia-22B-v1.1 - ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 - nbeerbower/Mistral-Small-Gutenberg-Doppel-22B - rAIfle/Acolyte-22B library_name: transformers tags: - merge license: apache-2.0 --- <!DOCTYPE html> <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%); color: #D8DEE9; margin: 0; padding: 0; font-size: 16px; } .container { width: 80% auto; max-width: 1080px auto; margin: 20px auto; background-color: rgba(255, 255, 255, 0.02); padding: 20px; border-radius: 12px; box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); border: 1px solid rgba(255, 255, 255, 0.1); } .header h1 { font-size: 28px; color: #ECEFF4; margin: 0 0 20px 0; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); } .update-section { margin-top: 30px; } .update-section h2 { font-size: 24px; color: #88C0D0; } .update-section p { font-size: 16px; line-height: 1.6; color: #ECEFF4; } .info img { width: 100%; border-radius: 10px; margin-bottom: 15px; } a { color: #88C0D0; text-decoration: none; } a:hover { color: #A3BE8C; } .button { display: inline-block; background-color: #5E81AC; color: #E5E9F0; padding: 10px 20px; border-radius: 5px; cursor: pointer; text-decoration: none; } .button:hover { background-color: #81A1C1; } pre { background-color: #2E3440; padding: 10px; border-radius: 5px; overflow-x: auto; } code { font-family: 'Courier New', monospace; color: #D8DEE9; } </style> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>MSM-MS-Cydrion-22B Data Card</title> <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet"> </head> <body> <div class="container"> <div class="header"> <h1>MSM-MS-Cydrion-22B</h1> </div> <div class="info"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/P6Cdc590xEGjWH3rKXDe5.jpeg"> <p>Meet Cydrion, the attempt of fusion for creativity and intelligence.</p> <p><strong>Creator:</strong> <a href="https://huggingface.co/Steelskull" target="_blank">SteelSkull</a></p> <h1>About Cydrion-22B:</h1> <pre><code>Name Legend: MSM = Mistral-Small MS = Model Stock 22b = its 22b </code></pre> <p>This model merges the robust storytelling of Cydonia with the creative edge of Acolyte, ArliAI-RPMax, and Gutenberg with some special sauce. <p>Use Mistral Format</p> <h2>Quants:</h2> <p>My Quants:<a href="https://huggingface.co/SteelQuants/MSM-MS-Cydrion-22B-Q6_K-GGUF" target="_blank">MSM-MS-Cydrion-22B-Q6_K-GGUF</a></p> <h3>Config:</h3> <pre><code>MODEL_NAME = "MSM-MS-Cydrion-22B" yaml_config = """ base_model: Steelskull/Merged-v2 merge_method: model_stock dtype: bfloat16 models: - model: TheDrummer/Cydonia-22B-v1.1 - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 - model: nbeerbower/Mistral-Small-Gutenberg-Doppel-22B - model: rAIfle/Acolyte-22B """ </code></pre> <p><strong>If you wish to support:</strong></p> </div> <div class="donation-section"> <a href="https://ko-fi.com/Y8Y0AO2XE" target="_blank"> <img height="36" style="border:0px;height:36px;" src="https://storage.ko-fi.com/cdn/kofi2.png?v=3" border="0" alt="Buy Me a Coffee at ko-fi.com" /> </a> </div> </div> </div> </body> </html>
Litzy619/G0521HMA26H2
Litzy619
"2024-05-21T17:29:50Z"
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "region:us" ]
null
"2024-05-21T16:13:00Z"
--- license: gemma base_model: google/gemma-2b tags: - generated_from_trainer model-index: - name: G0521HMA26H2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0521HMA26H2 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7533 | 0.09 | 10 | 1.4230 | | 1.0512 | 0.18 | 20 | 0.5986 | | 0.3803 | 0.27 | 30 | 0.1989 | | 0.1564 | 0.36 | 40 | 0.1744 | | 0.147 | 0.45 | 50 | 0.2292 | | 0.1479 | 0.54 | 60 | 0.2788 | | 0.1448 | 0.63 | 70 | 0.1496 | | 0.1444 | 0.73 | 80 | 0.1560 | | 0.1358 | 0.82 | 90 | 0.1688 | | 0.1287 | 0.91 | 100 | 0.1792 | | 0.1314 | 1.0 | 110 | 0.1689 | | 0.1221 | 1.09 | 120 | 0.1936 | | 0.111 | 1.18 | 130 | 0.1777 | | 0.1187 | 1.27 | 140 | 0.1412 | | 0.1216 | 1.36 | 150 | 0.2297 | | 0.1161 | 1.45 | 160 | 0.1417 | | 0.1134 | 1.54 | 170 | 0.1235 | | 0.1096 | 1.63 | 180 | 0.1318 | | 0.1106 | 1.72 | 190 | 0.1812 | | 0.1113 | 1.81 | 200 | 0.1179 | | 0.1089 | 1.9 | 210 | 0.1297 | | 0.109 | 1.99 | 220 | 0.1219 | | 0.0944 | 2.08 | 230 | 0.1428 | | 0.0972 | 2.18 | 240 | 0.1483 | | 0.089 | 2.27 | 250 | 0.1814 | | 0.0906 | 2.36 | 260 | 0.2539 | | 0.0936 | 2.45 | 270 | 0.2099 | | 0.0864 | 2.54 | 280 | 0.2426 | | 0.0835 | 2.63 | 290 | 0.1394 | | 0.0865 | 2.72 | 300 | 0.1288 | | 0.0923 | 2.81 | 310 | 0.1242 | | 0.0898 | 2.9 | 320 | 0.1233 | | 0.0923 | 2.99 | 330 | 0.1233 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
sail-rvc/cputh1
sail-rvc
"2023-07-14T07:37:14Z"
1
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
"2023-07-14T07:36:14Z"
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # cputh1 ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:37:14 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
emretopcuu/deepseek-mcdc-finetuned
emretopcuu
"2025-02-28T12:53:15Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-02-28T12:53:13Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jimbowyer123/OtterWords-4bit
jimbowyer123
"2025-03-12T15:17:51Z"
0
0
null
[ "safetensors", "qwen2", "unsloth", "trl", "sft", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-03-12T15:14:10Z"
--- license: mit tags: - unsloth - trl - sft ---
codead6/therapist_model_0905
codead6
"2024-09-05T10:29:44Z"
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-09-05T10:24:54Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
0x0uncle0/uncle59
0x0uncle0
"2024-04-07T23:02:21Z"
102
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-07T22:33:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
elybes/IFRS_en_ar_translation
elybes
"2024-08-13T20:39:10Z"
6
1
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "finance", "IFRS", "translation", "ar", "en", "dataset:elybes/IFRS", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2024-07-30T09:09:53Z"
--- datasets: - elybes/IFRS language: - ar - en metrics: - bleu pipeline_tag: translation tags: - finance - IFRS - translation ---
alirzb/S1_M1_R1_ViT_42616100
alirzb
"2024-01-06T11:59:32Z"
4
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-01-06T10:38:52Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: S1_M1_R1_ViT_42616100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S1_M1_R1_ViT_42616100 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0078 - Accuracy: 0.9971 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0113 | 1.0 | 304 | 0.0067 | 0.9980 | | 0.0065 | 2.0 | 608 | 0.0040 | 0.9980 | | 0.0031 | 3.0 | 912 | 0.0152 | 0.9961 | | 0.0 | 4.0 | 1217 | 0.0091 | 0.9971 | | 0.0 | 5.0 | 1520 | 0.0078 | 0.9971 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
stevemobs/deberta-base-finetuned-squad1
stevemobs
"2022-05-22T19:54:06Z"
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
"2022-05-22T16:18:34Z"
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: deberta-base-finetuned-squad1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-finetuned-squad1 This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 0.8037 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.7928 | 1.0 | 7380 | 0.7810 | | 0.5795 | 2.0 | 14760 | 0.8037 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
lesso11/a0ba1198-84a6-42df-af5f-2dae56d84cff
lesso11
"2025-03-31T05:26:29Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4", "base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4", "region:us" ]
null
"2025-03-31T03:30:10Z"
--- library_name: peft base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4 tags: - axolotl - generated_from_trainer model-index: - name: a0ba1198-84a6-42df-af5f-2dae56d84cff results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5dcb8364a26f54d3_train_data.json ds_type: json format: custom path: /workspace/input_data/5dcb8364a26f54d3_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso11/a0ba1198-84a6-42df-af5f-2dae56d84cff hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000211 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/5dcb8364a26f54d3_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 110 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 42336fd3-a4e5-421e-9cbc-36b9d9c17001 wandb_project: 11a wandb_run: your_name wandb_runid: 42336fd3-a4e5-421e-9cbc-36b9d9c17001 warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # a0ba1198-84a6-42df-af5f-2dae56d84cff This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9485 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000211 - train_batch_size: 4 - eval_batch_size: 4 - seed: 110 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 2.4932 | | 0.9592 | 0.1531 | 500 | 0.9485 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
stanpony/tinylm33M-stella-1sent_5clust-2025-04-04-00-59
stanpony
"2025-04-04T01:06:23Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:roneneldan/TinyStories-33M", "base_model:finetune:roneneldan/TinyStories-33M", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-04T01:00:15Z"
<!DOCTYPE html> <html class="" lang="en"> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no" /> <meta name="description" content="We're on a journey to advance and democratize artificial intelligence through open source and open science." /> <meta property="fb:app_id" content="1321688464574422" /> <meta name="twitter:card" content="summary_large_image" /> <meta name="twitter:site" content="@huggingface" /> <meta property="og:title" content="Hugging Face - The AI community building the future." /> <meta property="og:type" content="website" /> <title>Hugging Face - The AI community building the future.</title> <style> body { margin: 0; } main { background-color: white; min-height: 100vh; padding: 7rem 1rem 8rem 1rem; text-align: center; font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol, Noto Color Emoji; } img { width: 6rem; height: 6rem; margin: 0 auto 1rem; } h1 { font-size: 3.75rem; line-height: 1; color: rgba(31, 41, 55, 1); font-weight: 700; box-sizing: border-box; margin: 0 auto; } p, a { color: rgba(107, 114, 128, 1); font-size: 1.125rem; line-height: 1.75rem; max-width: 28rem; box-sizing: border-box; margin: 0 auto; } .dark main { background-color: rgb(11, 15, 25); } .dark h1 { color: rgb(209, 213, 219); } .dark p, .dark a { color: rgb(156, 163, 175); } </style> <script> // On page load or when changing themes, best to add inline in `head` to avoid FOUC const key = "_tb_global_settings"; let theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "dark" : "light"; try { const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme; if (storageTheme) { theme = storageTheme === "dark" ? "dark" : "light"; } } catch (e) {} if (theme === "dark") { document.documentElement.classList.add("dark"); } else { document.documentElement.classList.remove("dark"); } </script> </head> <body> <main> <img src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg" alt="" /> <div> <h1>429</h1> <p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p> </div> </main> </body> </html>
smp-hub/mit_b1.imagenet
smp-hub
"2025-01-15T17:55:44Z"
36
0
segmentation-models-pytorch
[ "segmentation-models-pytorch", "safetensors", "image-classification", "pytorch", "mit", "license:other", "region:us" ]
image-classification
"2025-01-15T09:46:15Z"
--- library_name: segmentation-models-pytorch license: other pipeline_tag: image-classification tags: - segmentation-models-pytorch - image-classification - pytorch - mit languages: - python --- # Model card for mit_b1. This repository contains the `imagenet` pre-trained weights for the `mit_b1` model used as encoder in the [segmentation-models-pytorch](https://github.com/qubvel-org/segmentation_models.pytorch) library. ### Example usage: 1. Install the library: ```bash pip install segmentation-models-pytorch ``` 2. Use the encoder in your code: ```python import segmentation_models_pytorch as smp model = smp.Unet("mit_b1", encoder_weights="imagenet") ``` ### References - Github: https://github.com/qubvel/segmentation_models.pytorch - Docs: https://smp.readthedocs.io/en/latest/ - Original weights URL: https://github.com/qubvel/segmentation_models.pytorch/releases/download/v0.0.2/mit_b1.pth
bangla-speech-processing/bangla_tts_male
bangla-speech-processing
"2023-11-28T03:27:51Z"
10
1
transformers
[ "transformers", "arxiv:2305.12708", "arxiv:2106.06103", "arxiv:2005.11129", "license:mit", "endpoints_compatible", "region:us" ]
null
"2023-11-24T16:51:01Z"
--- license: mit --- # Bangla TTS The Bangla TTS was traning mono(male) speaker using Vit tts model. The paper is [ViT-TTS: Visual Text-to-Speech with Scalable Diffusion Transformer](https://arxiv.org/pdf/2305.12708.pdf) we used the coqui-ai🐸-toolkit for Bangla Text-to-Speech training as well as inference. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ea_BVSinWFy_9W2AH7NI55Ur0XO4Tr-a?usp=sharing) # Contributions * Collect various Bangla datasets from the internet some data are collected from Mozilla common voice dataset and train the model. * we’ve developed the Bangla Vits TTS(text to speech) system that we trained and used for reading various Bangla \ text with the highest performing State of the Art(SOTA) Bangla neural voice. # Dataset The Bangla Text-to-Speech (TTS) Team at IIT Madras has curated a Bangla Speech corpus, which has been meticulously processed for research purposes. The dataset has been downsampled to 22050 and reformatted from the original IITM annotation style to the LJSpeech format. This refined dataset, tailored for Bangla TTS, is accompanied by the weight files of the best-trained models. Researchers are encouraged to cite the corresponding paper, available at [Paper Link](https://aclanthology.org/2020.lrec-1.789.pdf), when utilizing this dataset in their research endeavors. The provided dataset and model weights contribute to the advancement of Bangla TTS research and serve as a valuable resource for further investigations in the field. [Dataset Link](https://www.kaggle.com/datasets/mobassir/comprehensive-bangla-tts) # Evaluation Mean Opinion Score(MOS) : 4.10 [MOS Calculation method](https://waywithwords.net/landing/mean-opinion-score-ratings-2/) # Inference For testing please check the end point integration [Github](https://github.com/saiful9379/Bangla_TTS) # References : 1. https://aclanthology.org/2020.lrec-1.789.pdf 2. https://arxiv.org/pdf/2106.06103.pdf 3. https://arxiv.org/abs/2005.11129 4. https://aclanthology.org/2020.emnlp-main.207.pdf 5. https://github.com/mobassir94
svorwerk/setfit-fine-tuned-demo-class
svorwerk
"2024-01-27T19:47:05Z"
52
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "region:us" ]
text-classification
"2024-01-27T17:48:35Z"
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: 'Acquisition Id: ALOG; Ancotel; Asia Tone; Bit-Isle; IXEurope; Infomart; Itconic; OGS; SafeGuard; Switch and Data; Telecity; Verizon; Virtu; Zenium; nan' - text: 'Emp_FLSA: E; N; P; V; X; nan' - text: 'Emp Grade: A; AA; AS 1; AS1; AS2; AS3; B; C; COM; D; E; E I; E II; E10; E11; E12; E13; E7; E8; E9; J01; J02; J03; J04; JS; MM I; MM II; N01; N02; N03; N05; N06; N07; N08; NE 2; NE 3; NE 4; NE 5; NE 6; NE1; NE2; NE3; NE4; NF; NG; Non-exempt; S; SA I; SA II; SA III; SA4; SA5; SE I; SE II; SM I; SM II; SP; SS; nan' - text: 'Loc State: AB; AL; AR; AZ; BC; CA; CO; CT; DC; DE; FL; GA; GTL; GTM; Gujarat; HI; IA; ID; IL; IN; KS; KY; Karnataka; LA; MA; MD; ME; MI; MN; MO; MS; MT; Maharashtra; NB; NC; ND; NE; NH; NJ; NM; NS; NV; NY; OH; OK; ON; OR; PA; PR; QC; RI; SA; SC; SD; TN; TX; UT; VA; VT; WA; WI; WV; WY; nan' - text: 'Supervisory Organization: 3 PL & Warehousing Management (Harald Dehe); 3 PL Management (Ryoichi Imamura (Inherited)); 3A-Modul (Arnd Vollmerhausen (Inherited)); 3A-Modul (Erkan nder); 3A-Modul (Erkan √ñnder); ABN ( Antibodies/NAT) (Thomas Roten); AD&V - Assay Development & Validation (Svenja Wolff); AESD Aurora Build Elec & Warehouse (Chris Askitis); AM Ural, Siberia & Far East (Anna Tov); AM Volga & South (Natalia Ivanova); ANI Region ICO (Ivaylo Vladimirov); APAC Payroll (Rohit Jain); API Manufacturing (Luke McIntyre); API Manufacturing (Ryan Cox (Inherited)); API Manufacturing (Union) (Luke McIntyre (Inherited)); API Manufacturing (Union) (Ryan Cox (Inherited)); AQL (Jens Huft); AS&T (Jeff Hancock); ASQ Analytical Sciences & Technology (Michael Sch√ºmann); ASQ Biochemistry (Sven Karschnia); ASQ Chemistry (Kerstin Nske); ASQ Chemistry (Kerstin N√∂ske); ASQ Compliance & Release (Angelika Jung); ASQ Data Integrity (Christoph Kircher); ASQ External Materials & Support (Angelika Jung); ASQ External Materials & Support (Simone Lang); ASQ Microbiology (Thomas Bhler); ASQ Microbiology (Thomas B√ºhler); ASQ Potency (Sven Karschnia); ASQ Potency (Volker Gawantka (Inherited)); ATR Financial Planning (John Sullivan); ATR General Accounting (Tanja Roth); AUS Government & Media Relations (Sharon McHale); Abfllteam 1 (Heiko Brhl); Abfllteam 2 (Sorin Suditu); Abfllteam 3 (Dirk Fischer); Abfllteam 4 (Heiko Stein); Abfllteam 5 (Eduard Wegner); Abfllteam 6 (Maik Jesberg); Abfllteam 8 (Murat Midik); Abf√ºllteam 1 (Heiko Br√ºhl); Abf√ºllteam 2 (Sorin Suditu); Abf√ºllteam 3 (Dirk Fischer); Abf√ºllteam 4 (Heiko Stein); Abf√ºllteam 5 (Eduard Wegner); Abf√ºllteam 6 (Maik Jesberg); Abf√ºllteam 7 (Holger Kuhl (On Leave)); Abf√ºllteam 7 (Holger Kuhl); Abf√ºllteam 8 (Murat Midik); Abilene 701 (Ophelia M Cavalier); Abilene 701 (Sara M Schuppe); Abilene 701 ACM Area 1 (Traci Bradford); Abilene 701 ACM Area 2 (Joel Hutsell); Abilene 701 QA (Wes Scruggs); Account to Report (Andrew Croft (Inherited)); Account to Report (Linda Carducci (Inherited)); Account to Report (Michael Kochanski); Accounting & Reporting (Jeffrey H Schorr); Accounting (Annalisa Saracchi (Inherited)); Accounting (Paul Fellingham (Inherited)); Accounting France (Charlotte Rougi (Inherited)); Accounting France (Charlotte Rougi√© (Inherited)); Accounting Geschftsbuchhaltung (Jens Dettbarn); Accounting Gesch√§ftsbuchhaltung (Jens Dettbarn); Accounting Operations (Brian T Simeur); Accounting Operations (Denna DeWitt); Accounting Operations (Manish Devnani); Accounting Operations (Patrick Zuercher); Accounting Operations 1 (Denna DeWitt); Accounts Payable & Receivable (Yvonne Robson); Accounts Payable (Brian T Simeur (Inherited)); Accounts Payable (Mark Staniford); Accounts Payable (Susan Velazquez); Accounts Receivable (Stephen Fairhurst); Accounts Team Lead (Rebecca Merrett); Acquired Bleeding & Critical Care Group (Mitsuhiro Kuwahara); Acquired Bleeding & Critical Care Group (Takashi Ishijima ??? ?? - ???? ????); Acquired Bleeding TA (Naoki Ikeguchi); Acting Manager, Innovation R&D (Chi Ong); Adaptive Clinical Trial Technologies (Liz Quinlan); Administration (Franz Gr√ºn); Administration (Juli Zhu ????); Administration (Sandy Tan ); Administration (Sandy Tan ?????); Administration Italy (Antonella Carluccio); Administration Italy (Cristina Fabbro); Administration/Production Support Subject Matter Expert (Justin Ryan); Administration/Production Support Systems Officer (Adam Kraan); Advertising (Raimund Wybranietz); Akron 402 (Joe Jacko); Akron 402 ACM Area 1 (LeAnna M Mauger); Akron 402 ACM Area 2 (Tonya R Robinson); Akron 402 ACM Area 3 (Brett Ferrell); Akron 402 QA (Carrie Piggford); Akron 402 QA (Christine Thomas); Akron 402 QA (Esence Hambrick); AlbuRX (Tom Hubert); AlbuRx (Barry Lynch); AlbuRx Bulk (Aoife Corrigan); AlbuRx Bulk Mfg (Joel Rainey); AlbuRx Packing (Carlo Volpe); Albumin & IVIG Lyo Bulkmanufacturin (Irne Stempfel); Albumin & IVIG Lyo Bulkmanufacturin (Ir√®ne Stempfel); Albumin (Fritz Rentsch); Albumin (Juerg Hofmaenner); Albumin Bulk Team 1 (Fritz Rentsch); Albumin Bulk Team 2 (Andreas L√ºdi); Albuquerque 034 (Latoya K Gilbert-Torres); Albuquerque 034 ACM Area 1 (Dee Ulibarri); Albuquerque 034 ACM Area 2 (Gerardo Ruelas); Albuquerque 034 QA (Antoinette F Tapia); Albuquerque 137 (Preston W Minor); Albuquerque 137 ACM Area 1 (Brian Trujillo); Albuquerque 137 ACM Area 2 (Melisa R Cox); Albuquerque 137 QA (Daniel Venn); Albuquerque 137 QA (Danny Kinder); Albuquerque 137 QA (Kelsey Gaffney); Alcohol Inventory (Union) (Michael D Proctor); Allentown 294 (Susan Tudela); Allentown 294 QA (Mercy Cobbinah); Allergy National Sales Team (Lea Leon); Allergy National Sales Team (Lea Rajendran); Alliance and Governance (Andrea Lehmann); Alliance and Project Management Systems (Jos√© Maldonado); Amarillo 283 (Jerica Hunter); Amarillo 283 ACM Area 1 (Nicole Taylor); Amarillo 283 ACM Area 1 (ROBERT WILLIAMS); Amarillo 283 ACM Area 2 (Nicole Taylor); Amarillo 283 ACM Area 2 (ROBERT WILLIAMS); Amarillo 283 QA (Michael L Allen); America''s Service Desk (Delilah Harden); Americas HR Ops Tier 1 (Alana DeWeever); Americas HR Ops Tier 1 (Monica Silveira); Americas Service Delivery and Plasma Tech (David G Bersch); Americas Service Operations (Mary Jane McPherson); Analytical Development (Jayendra Shankar); Analytical Drug Product Development (Jiang Qian); Analytical Science & Technology (Oliver Lffler); Analytical Science & Technology (Oliver L√∂ffler); Analytical Science & Technology Holly Springs (Jeffrey Pederson (Inherited)); Analytical Science & Technology Holly Springs (Jessica Gambill); Analytical Science & Technology Liverpool (Jeff Hancock); Analytical Science & Technology Liverpool (Jeffrey Hancock); Analytical Science & Technology Parkville (Tim Karla); Analytical Services Quality (Simone Naruhn); Analytical Services Quality (Volker Gawantka); Analytical Software Technology 2 (Jan Bursy); Analytics R&D (Patrick Schuetz); Animal Services Manager (Phil Franchina); Animal Services Manager (Rachel Borg, Phil Franchina); Animal facility 2 (Elmar Raquet); Anlagensicherheit & Konzessionen (Jrgen Arnold); Anlagensicherheit & Konzessionen (J√ºrgen Arnold); Application Security (Riti Arya); Applications, Plateau (Mark S Mantarian); Applications, Plateau (Trevor Alcorn); Applications, Plateau I (Trevor Alcorn); Apprentices (Kevin Liechti); Apprentices and Trainees ES (Rolf Isenschmid); Apprenticeship (Sandra Zbinden); ArbEG (Beate Binsack); Arbeitssicherheit (zcan Campinar (Inherited)); Arbeitssicherheit (√ñzcan Campinar (Inherited)); Area Business Manager 725 (Danielle Traum); Area Business Manager 725 (Eva Merce Maldonado); Area Business Manager 725 (Nick Croton); Area Business Manager 726 (Cameron McCulloch); Area Business Manager 728 (Graham Cluley); Area Business Manager 781 (Danielle Traum); Area Business Manager 781 (Helen Kostopoulos); Area Business Manager 783 (Melissa Weier); Area Business Manager 785 (David Clapin); Area Business Manager 786 (David Brown); Area Business Manager 786 (Peter Moxham); Area Innovation Operations (Cole D Kimple); Argentina Cluster (Carmen Pereyra (Inherited)); Argentina Cluster (Carmen Rosa Pereyra Davila (Inherited)); Artwork Packaging (Ratana Lim); Arvada 129 (Colleen A Irish); Arvada 129 ACM Area 1 (Robert Young); Arvada 129 ACM Area 2 (Jason T Studtmann); Arvada 129 QA (Alfredo Castillo (On Leave)); Arvada 129 QA (Alfredo Castillo); Aseptic Cert (Anja Djordjevich); Aseptic Cert (Grace Luong); Aseptic Filling (Benjamin Dudok); Aseptic Filling I (Eveline Kindler (Inherited)); Aseptic Filling Team (Terry Shipway); Aseptic Processing & SIAG (Laurent Wagner); Aseptic Processing & SIAG (Steffen Korth); Asia Operations (Felix Soh); Asia Operations (Richard Kwan ?????); Asia Pacific Tax (Aoife Deane); Asia Pacific Tax (JOON YONG); Asia South Marketing (Natalie Ku); Asia South Medical Affairs (Narendra Patil); Asia-Pacific Business Integrity (Angelia Lee); Asia-Pacific Commercial Operations (Paul Li); Asia-Pacific Demand Planning (James Lee ?????); Asia-Pacific Marketing and Medical Affairs (Peter Chow); Asia/Pac Service Operations (Joe Razavi); Asia/Pac Tech (Juerg Clavadetscher (Inherited)); Assay Development and Analytics, Gene Therapy, Flow Cytometry (Ann George); Assay Development and Optimization I (Mirna Rapp); Assay Development and Optimization II (Rainer Straub); Assay Support Group (Stefan Kempf); Asset Compliance (Keith Champion); Assets Management and Governance (Stefano Siviero); Assistant Supply Chain Management (Manuela Lacher); Associate Director, R&D Ops (Christine Wadey); Associate Sales Director 798 (Ray Friedrich); Auburn 183 (Cherita Saulmarshall); Auburn 183 (Joshua Killingsworth); Auburn 183 ACM Area 1 (Tiffany Johnson); Auburn 183 ACM Area 2 (Ashley Bentley); Auburn 183 QA (Melodee C Ebel (Inherited)); Auburn 183 QA (Stephanie Baptiste); Auburn 183 QA (Timothy J Nisewonger); Audit & Compliance Management (Christina Berninger); Audit & Quality Risk Management (Christina Berninger); Audit & Quality Risk Management (Rainer Bier); Auditing and Inspections (Jenny Cavanagh); Auftragsvorbereitung & -koordination (Horst Kraus); Augusta 253 (Kristopher Collier); Augusta 253 ACM Area 1 (Kristopher Collier (Inherited)); Augusta 253 ACM Area 1 (Tomecia Tillman); Augusta 253 ACM Area 2 (Jonathan Lewis); Augusta 253 QA (Dave Anderson); Augusta 253 QA (Pamela DeLucia); Aurora 702 (Kevin J Lawley); Aurora 702 (Keyonna L Gray); Aurora 702 ACM Area 1 (Virginia L Garnica); Aurora 702 ACM Area 2 (Theresa M Couture); Aurora 702 QA (Fernanda Nistal); Aurora 702 QA (Nan Nistal); Automated VI (David Kuhs); Automation (Adrian Marti); Automation (Christopher Parlane); Automation (Frank Mastellone); Automation (Jrgen Dersch); Automation (J√ºrgen Dersch); Automation (Stefan Sigrist); Automation Instrumentation (Ludovic Le Reste); Automation Systems Engineer (Magan Lai); Automation Systems Manager (Cornell D''Couto); Automation and Electrical Systems (Lou Corvetti); Automation and Electrical Systems (Matt Downey); Automation and Electrical Systems (Zoran Hadzi-Nikolov); Automatisierungstechnik (Andreas Tement); Automatisierungstechnik (Jens Laucht); BCI Team 1 (Frank Ludwig); BCI Team 2 (Markus Plociennik); BCI Team 2 (Ralf Kolley); BCI Team 3 (Torsten Hrtl); BCI Team 3 (Torsten H√§rtl); BFX8 (Donnie Daugherty); BFX8 (Victor Vazquez); BMS (Jan Klee); BPA Holly Springs (Luke McMahon); BPA Holly Springs (Paul Parske); BPA Liverpool (Andrew Holland); BRM Batch Release Management (Joachim Leiss); BRR & QoF (Natalie Windel); BRS Batch Release Support (Hans-Tobias Deinzer); BT - Quality & Manufacturing Applications (Robert Price); BT Applications (BI-Analytics) (John Thompson (Inherited)); BT Applications (BI-Analytics) II (Johnny F Helms Jr); BT Applications (BI/Analytics) (Johnny F Helms Jr); BT Applications (Bern) (Andrew Matys); BT Applications (Business Applications) (Jesse R Crew); BT Applications (Coallaboration-BI Bern) (Christophe Fuchs); BT Applications (Coallaboration/BI Bern) (Christophe Badertscher); BT Applications (ComOps) (Natasha Reantillo); BT Applications (ComOps) 2 (Francis Azul); BT Applications (DMS) (Johannes Lichtenfels); BT Applications (DMS/Bern) (Johannes Lichtenfels (Inherited)); BT Applications (DMS/MBR) (Johannes Lichtenfels (Inherited)); BT Applications (Daniel R Rodgers); BT Applications (Data Services) (Thomas Walther (On Leave)); BT Applications (Data Services) (Thomas Walther); BT Applications (Data Warehouse) (Bhanu Vereddigari); BT Applications (Manuf.-Quality Bern) (Marcel Hadorn); BT Applications (Manuf./Quality Bern) (Marcel Hadorn); BT Applications (Sean O''Connor); BT Applications (Web Apps) (James Briggs); BT Applications (Web Ops) (Ross Bovenkerk); BT Applications BI (MBR) (Manuel Schaub); BT Applications CSL Plasma (Boca) (Cindy K Elliott); BT Applications CSL Plasma (MBR) (Gerhard Vogel); BT Applications CSL Plasma (MBR) (Hubert Diehl); BT Applications Corporate Functions (Kartik Tavargeri); BT Applications DMS (Boca) (Barbara L Youngling); BT Applications DMS (Boca) (Becky Heatherman); BT Applications DMS (Boca) (Brandi Kennedy); BT Applications DMS (Boca) (John Di Anni); BT Applications DMS (Boca) I (Barbara L Youngling); BT Applications DMS (Boca) II (Brandi Kennedy); BT Applications DMS (Boca) III (Malinda Hargitt); BT Applications EU (Markus Nickel); BT Applications Finance and Risk (Jesse R Crew (Inherited)); BT Applications LIMS & Local Apps (Boca) (Ram Jadvani); BT Applications Logic (DMS) (MBR) (Gerhard Vogel (Inherited)); BT Applications Manuf.-Quality MBR (Chris Camilleri); BT Applications Manuf./Quality MBR (Chris Camilleri); BT Applications Manuf./Quality MBR (Martin Hopp (Inherited)); BT Applications Quality (Andy Chung (Inherited)); BT Applications Quality (MBR) (Martin Hopp); BT Applications Quality (Ted Schmidt); BT Applications R&D (MBR) (Christoph Kraus); BT Applications, HRIS (Kent Riddell); BT Apprentices (Michel M√ºller); BT Apprentices (Ueli Niederhauser); BT Commercial Compliance Apps (Chunlei Liao); BT DEV Applications (Janek Geil); BT Enterprise Applications EPPM (Eltis Wong WongKeung Fai); BT Enterprise Applications ¬ñ EPPM (Elizabeth Cataldo); BT Enterprise Applications ¬ñ EPPM (Eltis Wong ?WongKeung Fai?); BT Enterprise Site Mgmt and Quality (Don Konemann); BT Infrastructure (AUS) (Michael Fugaro); BT Infrastucture (China Ruide) (Michael Fugaro (Inherited)); BT Operational Excellence (Jeffrey Walker); BT Operational Excellence (Markus Wotruba); BT Operations Parkville (Nick Witnish); BT Portfolio Management (Julia Naake); BT Quality (Astrid Tr√ºmper); BT Quality (BRN) (Valrie Progin-Meyer); BT Quality (BRN) (Val√©rie Progin-Meyer); BT Quality (MBR) (Jutta Weiss); BT Quality (Michelle Lemasters); BT Quality 2 (Jill L Rieken); BT Quality Applications (QMS) (Jeff Senley); BT Quality KoP (Chantelle Marie Otto); BT Quality Manager (Astrid Tr√ºmper (Inherited)); BT Quality Manager (Irene Ragel); BT Quality Manager (Travis Newing); BT Site Delivery (Sven Brske); BT Site Delivery (Sven Br√ºske); BT Source-To-Pay Apps (Charles Warman); BT Source-To-Pay Apps (Jochen Preis); BT Source-To-Pay Apps (Satish Mohan Gudipalli); BT operational excellence (Markus Wotruba); BT-Serialisation (Ramanand Lanka); BTQ Biotech Quality (Barbara Wegscheid); BTQ Biotech Quality (Ulrich Eberhard); Bacteriology (Benny Hung); Bacteriology (Karthy Santhosh); Bacteriology (Niharika Pathak); Baltimore 166 (Mario A Salas); Baltimore 166 (Robert Centennial); Baltimore 166 ACM Area 1 (Dami Alimi); Baltimore 166 ACM Area 2 (Gary Rivers Jr); Baltimore 166 QA (Briesha Smith); Baltimore 166 QA (Monica Brown (On Leave)); Baltimore 166 QA (Monica Brown); Base Fractionation (Anthony Kaye); Base Fractionation (Brendan Hilliard); Base Fractionation (Ernest Shepard (Inherited)); Base Fractionation (George Makris); Base Fractionation (Parish McKenzie); Base Fractionation (Roy Taylor); Base Fractionation Operations (Shane Bourne); Batch Release (Anthony Day); Batch Release Management (Constanze Buchter); Batch Release Management (Nicole Ast); Batch Release Support (Christine Schneider); Batch Release Support QAI (Daniel Kalb); Baytown 086 (Brian T Edwards); Baytown 086 (Darriel Clark); Baytown 086 ACM Area 1 (Rachel I Ramirez); Baytown 086 ACM Area 1 (Tara West); Baytown 086 ACM Area 2 (Elizabeth Morales); Baytown 086 ACM Area 2 (Rebecca Rogers); Baytown 086 QA (Monica Banuelos); Beloit 005 (Jesus A Castillo (Inherited)); Beloit 005 (Kristin R Swain); Beloit 005 ACM Area 1 (Eric M Cook); Beloit 005 ACM Area 2 (John Miles); Beloit 005 QA (Patty Justus); Berufliche Erstausbildung (Carmen Walldorf); Berufliche Erstausbildung (Doris Nake); BioAnalytical Sciences Routine (Ashraf Raza); Bioanalytical Sciences (Aaron Hahn); Bioanalytical Sciences (Alice Andreu); Bioanalytical Sciences (Andreas Meister); Bioanalytical Sciences (Bo An); Bioanalytical Sciences (Christophe Pical); Bioanalytical Sciences (Clare Elizabeth Shepherd); Bioanalytical Sciences (Craig Kyngdon); Bioanalytical Sciences (Cristina Baker); Bioanalytical Sciences (Cristina Torres-Arce); Bioanalytical Sciences (David Boerema); Bioanalytical Sciences (Jennifer La); Bioanalytical Sciences (Laura Cortes Castrillon); Bioanalytical Sciences (Lee Xin Chong); Bioanalytical Sciences (Lucy Cao (Inherited)); Bioanalytical Sciences (Lucy Cao ); Bioanalytical Sciences (Lucy Cao ?????); Bioanalytical Sciences (Michael Johnston) (52388204); Bioanalytical Sciences (Ralf Ottofuelling); Bioanalytical Sciences (Rodney Holmes); Bioanalytical Sciences (Saw Yen Ow); Bioanalytical Sciences (Theresa Qiu); Bioanalytical Sciences (Vincent Strangis); Bioanalytical Sciences, LCM (Minyao Tang ?????); Bioanalytical Sciences, Lab Ops (Jinsong Zhao ?????); Bioanalytics & Fermentation (Partho Halder); Bioanalytics, Gene Therapy (Gene-Errol Ringpis); Bioassay Group (Souravi Ghosh); Biochemical Quality Control (Andreas Affolter); Biochemical Quality Control (BCAD) (Mirjam Kuehne Sebaste); Biochemical Quality Control (BCQC) (Kathrin Minnig Gsponer); Biochemical Quality Control (BCQC) (Sten Strunze); Biochemical Quality Control (Sandra Kampczyk); Biochemical Quality Control (Sten Strunze); Biochemistry (Bjrn Hegemann); Biochemistry (Bj√∂rn Hegemann); Biochemistry (Marius Loetscher); Biochemistry (Monika Edler); Biochemistry 4 (Thomas Gentinetta); Bioinformatics & AI (Arthur Hsu); Bioinformatics & AI (Monther Alhamdoosh); Biological Analytcs R&D (Roland Zehnder); Biological Analytical Development (Simon Urwyler); Biological Quality Control (BQC) (Michael Molitor); Biologielaboranten (Carmen Walldorf (Inherited)); Biologielaboranten (Doris Nake (Inherited)); Biology Animal Group (Preston Eilders); Biology Lab (Catherine A Moody); Biology Lab (Preston Eilders); Biology Lab I (Preston Eilders); Biology Quality Control (BQC) (Michael Molitor); Bioprocess Development & Innovation (Erik Hinze); Bioprocess Development (Vicky Pirzas); Bioreactor Development (Sara Ladd); Bioreactor Development (Tizita Horning); Bioreactor Development 1 (Tony Hunt); Bioreactor Development 2 (Eric Zhu); Biostatistics Transplantation (Aparna Raychaudhuri (Inherited)); Biostatistics & Medical Writing, R&D JAPAN (Takashi Fukai - ); Biostatistics & Medical Writing, R&D JAPAN (Takashi Fukai ??? ?? - ??? ????); Biostatistics (Fang Xie); Biostatistics (Michael Fries); Biostatistics - Aquired Bleeding, Coagulation, Respiratory (Michael Fries); Biostatistics - Cardiovascular (Mark Heise); Biostatistics - Immunology and Inflammation (John-Philip Lawo); Biostatistics and Medical Writing (LOTHAR TREMMEL); Biostatistics and Medical Writing (Lothar Tremmel); Biostatistics ¬ñ Cardiovascular and Metabolic (Mark Heise); Biostatistics ¬ñ Innovation (Sergei Leonov); Biostatistics ¬ñ Transplantation (Aparna Raychaudhuri); Biostatistics ¬ñ Transplantation (Fang Xie (Inherited)); Biostatistik (Marcel Mischnik); Biotech Manufactuirng Facility (Christoph Hau√ümann); Biotech Manufactuirng Facility (Philip Elliott); Biotech Manufacturing (Aleksandar Hristov); Biotech Manufacturing (Brett Bohr); Biotech Manufacturing (Fraser Goodwin); Biotech Manufacturing (Peter Cloyd Belandres); Biotech Manufacturing (Peter Florey); Biotech Manufacturing (Steven Jacovou); Biotech Manufacturing 1 (Steven Jacovou); Birmingham 259 (Sam Whitehead); Birmingham 259 ACM Area 1 (Meredith Y Lundie); Birmingham 259 ACM Area 2 (AJ Johnson); Birmingham 259 QA (Noelle Teague); Bloomington 127 (Ashley B Fearnall); Bloomington 127 (Xang Vang); Bloomington 127 ACM Area 1 (Loryna Williams); Bloomington 127 ACM Area 2 (Kayla L Stueber); Bloomington 127 ACM Area 2 (Kirsten M Heller); Bloomington 127 QA (Karen R Soderberg); Bloomington 241 (Kevin Smith); Bloomington 241 ACM Area 1 (Anna Whitman); Bloomington 241 ACM Area 2 (Kevin Smith (Inherited)); Bloomington 241 ACM Area 2 (Michele Morison); Bloomington 241 QA (Ben Samarripas (Inherited)); Bloomington 241 QA (Ryan Caudill-Laughlin); Boca Field Services (Javier Lopez); Boca Field Services (Julio Feliciano); Boise 227 (Ash Senters); Boise 227 (Ashley Senters); Boise 227 (Carl Seelert); Boise 227 (Timothy Freeland Jr (Inherited)); Boise 227 ACM Area 1 (Camille Snow); Boise 227 ACM Area 2 (Travis Richardson (On Leave)); Boise 227 ACM Area 2 (Travis Richardson); Boise 227 QA (Derek Erhart (Inherited)); Boise 227 QA (Miles Veater); Brand Manager (Judith Vico); Brand Manager 745 (Genevieve Nihill); Breakthrough Technologies (Axel Dietrich); Breakthrough Technologies (Hung Pham); Breakthrough Technologies (Nathan J Brinkman); Breakthrough Technologies I (Laura Keigher); Brewer 503 (MacGregor Roy); Brewer 503 ACM Area 1 (Stephen R Coltharp); Brewer 503 ACM Area 2 (Katherine Ragia); Brewer 503 QA (Marc Stephens); Brownsville 113 (Jose L Dela Garza (Inherited)); Brownsville 113 (Nick Caballero); Brownsville 113 ACM Area 1 (Alfonso Gutierrez); Brownsville 113 ACM Area 2 (Robert Miranda); Brownsville 113 ACM Area 3 (Brenda Z Garcia); Brownsville 113 ACM Area 4 (Hector F Amaya); Brownsville 113 QA (Francisca Z Lopez); Brownsville 113 QA (Laura L Escalante); Brownsville 113 QA (Rosa E Mercado (Inherited)); Brownsville 114 (Anthony A Almaguer); Brownsville 114 (Osiel E Selvera); Brownsville 114 ACM Area 1 (Roy Madero); Brownsville 114 ACM Area 2 (Amanda Millan); Brownsville 114 ACM Area 3 (Melissa Medrano); Brownsville 114 ACM Area 4 (Maria A Garcia); Brownsville 114 QA (Joanna M Franco); Brownsville 135 (Francisca Z Lopez); Brownsville 135 (Nick Caballero); Brownsville 135 ACM Area 1 (Severita Williams); Brownsville 135 ACM Area 2 (Oralia M Vasquez); Brownsville 135 ACM Area 3 (Claudia Uribe Resendiz); Brownsville 135 QA (Alma E De Los Santos De Gonzalez); Brownsville 135 QA (Britney Castillo); Buffalo 239 (Nicholas Liberati); Buffalo 239 (Renaye Hecker); Buffalo 239 ACM Area 1 (Kimberly Lubecki); Buffalo 239 ACM Area 2 (Carol Palaszewski); Buffalo 239 QA (Nicholas Liberati); Buffalo 239 QA (Olivia Bejaran); Buffer Preparation (Benjamin Gr√ºn); Buffer-Production (Bernd Grau); Building 21 South (Brock A Boudreau); Building 21 South (Parish McKenzie (Inherited)); Building 21S (Union) (Brock A Boudreau); Building 21S (Union) (Parish McKenzie (Inherited)); Building 30 (Brock A Boudreau); Building 30 (Parish McKenzie (Inherited)); Building 30 (Union) (Brock A Boudreau); Building 30 (Union) (Parish McKenzie (Inherited)); Buildings & Prop Coord 256 (Ray Belli); Bulk (Markus Weber); Bulk Manufacturing (Freddie Wayne West); Bulk Manufacturing (Gregory Taylor); Bulk Manufacturing (Patricia Stewart (Inherited)); Bulk Manufacturing (Ryan Cox); Bulk Mechanical Team (Matthew Johnson (Inherited)); Bulk Mechanical Team (Mohamed Tubar); Bulk Mfg (Joel Rainey (Inherited)); Bulk Mfg (Katerina Petreska); Bulk Mfg (Mahmoud Lasheen); Bulk Mfg (Matt Thompson); Bulk Mfg (Tom Vick); Bulk Mfg (Tri Nguyen); Bulk Process Technology (Andreas Grter); Bulk Process Technology (Andreas Gr√ºter); Bulk Process Technology (Rene Boros); Bulk Utilities (Michael D Proctor); Burlington 512 (Lynn M Stratton); Burlington 512 ACM Area 1 (Kay Harris); Burlington 512 ACM Area 2 (Danica Johnson); Burlington 512 QA (Sharleen Dunn); Bus. Svcs, Project Edge (John Dinatale); Business (Camilla Shen); Business Analytics (Joseph Smith); Business Analytics (Venkatesh Ramakrishnan (Inherited)); Business Applications (Anu Thampi); Business Applications (Charles Lowe); Business Development (Evgeniy Glukhovskiy); Business Development (Simone Parkes); Business Insights & Analytics (Nitin Bhatnagar (Inherited)); Business Insights & Analytics (Shital Patel); Business Insights & Operations (Lynda Kulp); Business Integrity (Christine Zettlemoyer); Business Integrity Intercontinental (Bella Hovhannisyan Melkonyan (Rafael)); Business Integrity and Privacy Program (Sarah McHenry); Business Integrity and Risks (Karen Neave); Business Integrity and Risks (Kelly Scott); Business Operations (Harald Mller); Business Operations (Harald M√ºller); Business Operations (Laura Attride); Business Operations (Paul Jens); Business Operations EU (Heidi Sun); Business Partner Support (Anika Wagner); Business Partnering Holly Springs (Carey Vassallo); Business Partners (Christine Toth); Business Partners (Jacqueline Hawkins (Inherited)); Business Planning Group (Junya Morinaga - ); Business Planning Group (Junya Morinaga ??? ?? - ???? ?????); Business Planning Group (Makoto Miura (Inherited)); Business Planning Group (Yuichiro Sakagami); Business Process & Technology (Joseph Elicone); Business Process & Technology (Maureen Martini); Business Process (Christian Sonderegger); Business Process Excellence (Manuel Schaub); Business Process Excellence Global Training (Reed Johnston); Business Process Management (BPM) (GAFOOR SARANG); Business Process OTC (Kian Hartono); Business Process S2P (Simon Haemmerli); Business Processes & Data Mgt (Barbora ?√°chov√°); Business Processes & Data Mgt (Barbora chov); Business Processes (Boris Kaiser); Business Processes (Hans Raess); Business Processes (Thomas Romanus); Business Productivity (Scott A Ramseyer); Business Services (David H Confessore); Business Services (Ken Lim); Business Services Enterprise Business Solutions (David Wolozyn); Business Services and Demand Planning, APAC (Uli Kiefer); Business Support (Christian Schnabel); Business Support (Lisa Bartol); Business Support (Walter Aebersold); Business Technology (Boca) (Rob Klostermeyer); Business Technology (Jesse R Crew (Inherited)); Business Technology (Sharon Wong ); Business Technology (Sharon Wong ?????) (Sharon Wong ?????); Business Unit Director (Susan Snowball); Business Unit Manager (Natasha Hutchison); CAB & Digital Marketing Group (Narihiko Suenobu); CAD (Erwin Vonlanthen); CD Clinical Quality Control & Compliance (Larry Fish); CDS - Computerized Data & Systems (Christoph Kircher); CEQ Management (Barry Lynch); CI & QC Compliance (Lisa Marie Malcharek (Inherited)); CI & QC Compliance (Lisa Marie Malcharek); CI & QC Compliance (Thomas Wescombe); CI & QC Compliance (Thomas Wescombe) (Thomas Wescombe); CMC (Jason Newman); CMC Lead (Andrew White); CMC Lead (Dirk Bruns-Nagel); CMC Lead (Mackenzie Firer Sherwood); CMC Lead (Max Stuart Corbett); CMC Lead (Paul Smrdelj); CMC RA Group (Hideaki Hoshi ?? ?? - ?? ?????); CMC RA Group (Koichiro Kase - ); CMC Site Lead (Richard Buchta); CMO (Metani Rooms); CMO Management & Technology (Sabine Zollner); CMO Office (Vicki Oosterbaan); CO Diverse (Eddie Owens (Inherited)); CO Diverse (Edward Owens (Inherited)); CORE Operations - Canada (Constantina Boikos); CPAT (Kelly Helebrant); CPAT CVM - H&T (Uli Frevert); CPAT CVM / H&T (Uli Frevert); CPAT Internship Program I (Alissa Verone-Boyle (On Leave)); CPG Business Services (Michael Engelmann); CPG Center Berlin (Frank Bernert); CPG Center Bielefeld (Frank Bernert); CPG Center Braunschweig (Frank Bernert); CPG Center Bremen (Frank Bernert); CPG Center Gttingen (Frank Bernert); CPG Center G√∂ttingen (Frank Bernert); CPG Center Kiel (Frank Bernert); CPG Center Nrnberg (Frank Bernert); CPG Center N√ºrnberg (Frank Bernert); CPG Finance & Planning (Gerhard Mbus); CPG Finance & Planning (Gerhard M√∂bus); CPG Human Resources (Christine Debellis); CPG LQK Labor (Bettina Flotho-Salzmann); CPG Manager Serologisches Labor (Astrid Mather); CPG Medical Director (Kirsten Seidel); CPG Operations Management (Frank Bernert); CPG Planning (Achim Wagner); CPG Plasma Einkauf (Michael Engelmann (Inherited)); CPG Plasma Logistics (Klaus Rolshausen); CPG QA Case Processing Management (Ute Cherfan (Inherited)); CPG QA Center Operations (Ute Wessels); CPG QA Center Systems (Kerstin Kaddatz); CPG QA PLC Operations (Oliver Gro); CPG QA PLC Operations (Oliver Gro√ü); CPG QA Plasma Quality EU (Sascha Platt); CPG QA Plasma Quality QMB (Oliver Gro√ü (Inherited)); CPG QA Plasma Supplier Qualification (Ute Cherfan); CPG QA Plasma Supply Chain (Francesc Pont (Inherited)); CPG QA Plasma Supply Chain (Justin K Zajc); CPG QA Regulatory Affairs (Mandy Htzel); CPG QA Regulatory Affairs (Mandy H√∂tzel); CPG QA Supplier Qualification Management (Ute Cherfan (Inherited)); CPG QMB Center Operations (Ingrid Wallenwein (Inherited)); CPG QMB Center Operations (Ute Wessels (Inherited)); CPG Qualified Person (Margit El Azhari); CR&D Clinical Coagulation (Andres Brainsky); CRD Business Operations (Brian Dudt); CRD Business Operations (Walter Young); CRD Business Operations I (Craig Coffman (Inherited)); CRM Operations (Vinita Raina); CSL (Paul R Perreault); CSL 112 (Union) (Derek Butler); CSL Behring AG Bern (Martin Schaeren); CSL Behring AG Bern (Pierre Caloz); CSL Behring Broadmeadows (Martin Schaeren); CSL Behring Broadmeadows (Patricia Stewart); CSL Behring Broadmeadows II (Martin Schaeren); CSL Behring LLC Kankakee (Jose Gonzalez); CSL Behring LLC Kankakee (Patricia Stewart); CSL Behring LLC Kankakeee II (Patricia Stewart (Inherited)); CSL Behring Lengnau (Boris Lanoir); CSL Behring Marburg (Craig Shelanskey); CSL Behring Marburg (Michael Schrder); CSL Behring Marburg (Michael Schr√∂der); CSL Behring Pay Services (Susan M Walker); CSL Behring RCF Lengnau (Susanne Jecklin); CSL Behring RCF Lengnau II (Susanne Jecklin (Inherited)); CSL Behring Trademarks (Frank Schne-de la Nuez); CSL Behring Trademarks (Frank Sch√∂ne-de la Nuez); CSL Plasma (Craig Shelanskey); CSL Plasma (Michael F Deem); CSL Plasma (Willy Pardinas, Craig Shelanskey); CSL Plasma - Finance (Chris Shane); CSL Plasma - Finance (Christopher Shane); CSL Plasma / Engineering (Jrg Walz); CSL Plasma / Engineering (J√∂rg Walz); CSL Plasma GmbH (Berthold Ssser); CSL Plasma GmbH (Berthold S√ºsser); CSL Plasma GmbH HR (Berthold Ssser (Inherited)); CSL Plasma GmbH HR (Berthold S√ºsser (Inherited)); CSL Plasma II (Michael F Deem (Inherited)); CSL Plasma Kft. Hungary (Pankotai Tams); CSL Plasma Kft. Hungary (Pankotai Tam√°s); CSL Plasma US PLC Whitestown (Kristofor M Stauch); CSL Plasma US ¬ñ PLC Whitestown (Kristofor M Stauch); CSL Ruide Wuhan Manangement (David Chen ); CSL Ruide Wuhan Manangement (David Chen ?????); CSL Wuhan Plasma Operations (Jason Xu ?????); CSL Wuhan Ruide Calibration (Shangqu Shi ?????); CSL Wuhan Ruide Engineering (Jack Situ ); CSL Wuhan Ruide Engineering (Jack Situ ??????); CSL Wuhan Ruide Facility Team (Roger Peng ????); CSL112 Commercial Manufacturing Dept. (Derek Butler); CSR for Corporate (Patrick Castauro); CTS Business Operations (Robert Bredohl); CTS Cardiovascular (Eveline Girod-Engelhardt); CTS Hematology & Early Development (Annette Angell); CTS IRT (Amy Rupp); CTS Immunology (Berthold Roters); CTS Packaging & Labeling (Claudia Wieber); CTS Packaging & Labeling (Markus Thelen); CTS Process Improvement & Innovation (Carolin Sann); CTS Product Lead Cardiovascular (Elizabeth Bean); CTS Product Lead Immunology (Karin Knieke); CTS Product Lead Transplant (Fabienne Aschenbrenner); CTS Specialty Products & Transplant (Martin Mildenberger); CVC Cell, Virus & Compliance (Bjrn Keiner); CVC ¬ñ Cell, Virus & Compliance (Bj√∂rn Keiner); Calimmune Cell Manufacturing (Andreas Gille (Inherited)); Calimmune Cell Manufacturing (Bryan Burke); Calimmune Cell and Process Development (Jeffrey Ahlers); Calimmune Clinical (Maureen Boyd); Calimmune Clinical Programs (Mollie Barrett); Calimmune Information Technology (John Dallaire); Calimmune Quality Assurance (Anuja Prabhutendolkar); Calimmune Quality Assurance (Suparna Mishra Sarkar); Calimmune Research (Steven Lee); Calimmune Research and Development (Jeffrey Bartlett); Calumet Park 293 (Malissa Lichtenwalter); Calumet Park 293 QA (Michael W Solomon (Inherited)); Canada Medical Affairs Field Team (Maye Machnouk); Canton 236 (Jennie Marcum); Canton 236 ACM Area 1 (Ashley Instone); Canton 236 ACM Area 1 (Mirela Sekulic); Canton 236 ACM Area 2 (Rhianna Minger); Canton 236 ACM Area 2 (Rhianna Petrone); Canton 236 QA (Brandon Bosley); Canton 236 QA (Esence Hambrick); CapEx Procurement Lengnau (Franz Zweibrot [C]); CapEx Procurement Lengnau (Oliver Hahn); Capital Business Support (Tobias Pohle); Capital Controlling (Dirk Achenbach); Capital Controlling (Jrn Kaletsch); Capital Controlling (J√∂rn Kaletsch); Capital Project Management (Martina Thalmann); Capital Vendor Manager (Mark Vamadevan); Capital Vendor Manager (Nicholas Moody (Inherited)); Capital and MRO Sourcing - Kankakee (Emiliano Colon Segarra); Card Services (Linda K Nordmeyer); Cardio Therapies & Clinical Dev 2 (Lawrence Deckelbaum (Inherited)); Cardio Therapies & Clinical Development (Lawrence Deckelbaum); Cardiovascular & Diabetes (Susan Welsh (Inherited)); Cardiovascular & Diabetes (Todd Rudo); Cardiovascular & Metabolic Marketing (Rupal Shah); Cardiovascular & Metabolic Medical Affairs (Jeff McFadden (Inherited)); Cardiovascular & Metabolic TA (Jeff McFadden); Cardiovascular & Metabolism Therapeutic Area (Pierluigi Tricoci); Cardiovascular & Respiratory (James Peterson); Cardiovascular (Gail Berman); Cardiovascular (Lawrence Deckelbaum (Inherited)); Cardiovascular (Regina Clementi); Cardiovascular Global Marketing (Simon Fox); Cardiovascular and Metabolism (Danielle Duffy); Cardiovascular/Respiratory Therapeutic Area (Scott Hambaugh (Inherited)); Case Management GCSP (Nell Sborlini); Case Management MBR (Gudrun Heep); Category Chemicals, Filter Aid, Lab Chemicals (Martin Grossmann (Inherited)); Category Construction (Jos√© Maldonado (Inherited)); Category Equipment (Mike Gong); Category Gels, Resins, Media (BRN) (Martin Grossmann (Inherited)); Category Management (Markus Herrmann); Category Manager Indirects (Karl Lavery); Category Manager Indirects (Sarah Orchard); Category Packaging (Adam Kooloos); Cell Biology and Physiology (Cristina Gamell Fulla); Cell Culture & Purification (Michael Schmitt); Cell Culture & Purification Development (Andrew Low); Cell Culture & Purification Development (Ben Hunt); Cell Culture & Purification Development (Innocent Bekard); Cell Culture & Purification Development (Irene Baker); Cell Culture & Purification Development (Lou Fabri); Cell Culture & Purification Development (Simon Gerber); Cell Culture & Purification Development (Simon Stephan Gerber); Cell Culture & Purification Development (Vanessa Sandford); Cell Culture & Purification Development (Yih Yean Lee (Inherited)); Cell Culture & Purification Development (Yih Yean Lee); Cell Culture & Purification Development 1 (Innocent Bekard (Inherited)); Cell Culture Analytics (Vanessa Trefzer); Cell Manufacturing (Angel Jaramillo); Cell Manufacturing (Samuel O''Callaghan (On Leave)); Cell Manufacturing (Stefanie Homann); Cell Manufacturing I (Michelle Millington); Cell Manufacturing III (Samuel O''Callaghan); Cell Manufacturing IV (Stefanie Homann); Cell and Process Development (Jeffrey Ahlers); Cells, Virus & Compliance (Trudi Wentzel); Cells, Virus and Compliance (Tanya Guzzardi); Center Mitarbeiter (Andreas Gehrich (Inherited)); Center Mitarbeiter (Andreas Gehrich); Center Mitarbeiter (Annette Pernitzsch (Inherited)); Center Mitarbeiter (Annette Pernitzsch); Center Mitarbeiter (Claudia Habenicht (Inherited)); Center Mitarbeiter (Claudia Habenicht); Center Mitarbeiter (Damaris Kieckhfer); Center Mitarbeiter (Damaris Kieckh√∂fer); Center Mitarbeiter (Heike Borchert); Center Mitarbeiter (Kirsten Scheibel (Inherited)); Center Mitarbeiter (Kirsten Scheibel); Center Mitarbeiter (Natascha Bock (Inherited)); Center Mitarbeiter (Natascha Tappendorf); Center Mitarbeiter (Stephani Keltsch); Center Mitarbeiter (Sven Schuhmann (Inherited)); Center Mitarbeiter (Sven Schuhmann); Champaign 270 (Harriet Williams); Champaign 270 ACM Area 1 (Jacques LaRue); Champaign 270 ACM Area 2 (Harriet Williams (Inherited)); Champaign 270 ACM Area 2 (Quawan Dhom); Champaign 270 QA (Meghan Constantine); Change & Systems (Angela Leepin); Change & Systems (Lucia Mathis); Change Control Final Product Care (Stephan Nau); Change Management (Elizabeth Walker (Inherited)); Change Management (Kris Weidling (On Leave)); Change Management (Wendy Smith); Change Management Quality (Marlise Kuert Kolb); Change Management and Launch Support (QCM) (Marlise Kuert Kolb); Change Management-Document Control (Michelle Wells); Change and Investigations Primary Manufacturing (Jason Gilmour); Change and Investigations Secondary Manufacturing (Hai Tran); Characterization (Lars Robbel); Characterization 2 (Katharina Kramer); Charleston 044 (Lorenzo L Bowser); Charleston 044 (Robin M Bencivenga); Charleston 044 ACM Area 1 (Gregory Swant); Charleston 044 ACM Area 1 (Lorenzo L Bowser (Inherited)); Charleston 044 ACM Area 2 (Shakerrie Mobley); Charleston 044 QA (Yolanda L Carlisle); Charlotte 203 (Sam Kastanowski); Charlotte 203 (Shannon D Dalton); Charlotte 203 ACM Area 1 (Kathy Reilly); Charlotte 203 ACM Area 2 (Micah Ford); Charlotte 203 ACM Area 2 (Shannon D Dalton (Inherited)); Charlotte 203 QA (Nicole D Etheridge); Charlotte 418 (Paul Schmaldinst); Charlotte 418 ACM Area 1 (Sharita Swann); Charlotte 418 ACM Area 2 (Mayada M Omer); Charlotte 418 ACM Area 3 (Trina Crayton); Charlotte 418 QA (Le Tran); Chattanooga 010 (Ramoncito B Bautista); Chattanooga 010 ACM Area 1 (Sheree L Leatherman); Chattanooga 010 ACM Area 2 (Beth Simpson); Chattanooga 010 ACM Area 2 (Brittany Goforth); Chattanooga 010 QA (Callan Pierson); Chattanooga 010 QA (Heather Palladino); Chattanooga 010 QA (Patti Bailey (Inherited)); Chattanooga 010 QA (Patti Bailey (Inherited), Prim J Cunningham (Inherited)); Chattanooga 407 (Brian West); Chattanooga 407 (Brianna E Ballew); Chattanooga 407 ACM Area 1 (Amy D Hodge); Chattanooga 407 ACM Area 2 (Joshua Turpin); Chattanooga 407 QA (Barron Williamson); Cheektowaga 235 (Scott Bowers); Cheektowaga 235 ACM Area 1 (Cheryl Sousa); Cheektowaga 235 ACM Area 2 (Iryna Omelyan); Cheektowaga 235 QA (Roxanne Tyczka); Chem. Quality Control 1 (Lukas Dinger); Chem. Quality Control 2 (Silvan Stucki); Chemical Quality Control (CQC) (Adrian Zobrist); Chemical Analytics R&D (Lars Schiefelbein); Chemical Analytics R&D (Sara Stinca); Chemical Quality Control (Andreas Affolter); Chemical Quality Control (CQC) (Adrian Zobrist); Chemical Quality Control (Lars L√ºersen); Chemical Quality Control (Sten Strunze); Chemistry (Sara Garland); Chemistry (William Squires); Chemistry - In-Process Group (Courtney Nuccio); Chemistry - Raw Materials Group (Arthur F Fox); Chemistry Lab (Rebecca L Boudreau); Chiba Kanagawa Area (Madoka Yamamoto); Chibi Accounting (Hongyan Hu ?????); Chibi Admin (Hongyan Hu ); Chibi Admin (Hongyan Hu ?????); Chibi Clinical Inspection (Shiyong Yu ); Chibi Clinical Inspection (Shiyong Yu ?????); Chibi Plasma Collect (Liyun Huang ); Chibi Plasma Collect (Liyun Huang ?????); Chibi Plasma Collection (Jie Yu ); Chibi Plasma Collection (Jie Yu ????); Chibi Plasma Collection Center (Jun Lai ????); Chibi Plasma Collection Management (Jingyu Dong ?????); Chibi Plasma Sourcing (Jiaxin Long ?????); Chibi Plasma Sourcing Management (Bin Zhang ); Chibi Plasma Sourcing Management (Bin Zhang ????); Chicago 247 (Guillian T Gonzalez); Chicago 247 ACM Area 1 (Sabrina Flowers); Chicago 247 ACM Area 2 (Gretchen Watkins); Chicago 247 ACM Area 2 (Guillian T Gonzalez (Inherited)); Chicago 247 QA (Gretchen Watkins); Chicago 247 QA (Linda Schulthess); Chief Medical Office (Charmaine Gittleson); Chief Operating Officer (Paul McKenzie); Chief Safety Officer (Susan Welsh); China Logistics (Vickie Xian ); China Logistics (Vickie Xian ?????); China Marketing (Anlu Cai ?????); China Strategic Quality (Jian Fei ????); Christian Spuckti; Chromatography (Holger Blessing); Chromatography (Sven Karschnia); Chubu Area (Hiroyoshi Iwamoto); Chugoku Shikoku Area (Masahiko Ishida); Cincinnati 177 (Harold Tucker Jr); Cincinnati 177 ACM Area 1 (Anh Le); Cincinnati 177 ACM Area 2 (Darryl W Revere Jr); Cincinnati 177 ACM Area 2 (Jessica Hoffman); Cincinnati 177 QA (Christopher Thompson); Cincinnati 189 (Lee A Miles); Cincinnati 189 ACM Area 1 (Kristal Emmitt); Cincinnati 189 ACM Area 2 (Ginger Wells); Cincinnati 189 ACM Area 2 (Kaitlyn Spencer); Cincinnati 189 QA (Tyianna N Trice (On Leave)); Cincinnati 189 QA (Tyianna N Trice); Cinncinnati 177 (Harold Tucker Jr); Citrix (Thomas M Kent); Cleveland 401 (Sarah E Moss); Cleveland 401 ACM Area 1 (Shelly L Deimling); Cleveland 401 ACM Area 2 (Chonita Johnson (On Leave)); Cleveland 401 ACM Area 2 (Chonita Johnson); Cleveland 401 QA (Enetisha T Dailey); Cleveland 401 QA (Jennifer Longo); Clifton 255 (Andrew Oliver); Clifton 255 ACM Area 1 (Anthony Camuso); Clifton 255 ACM Area 2 (Marshaya Johnson); Clifton 255 ACM Area 2 (Rolshall Burnett); Clifton 255 QA (Kengie Jenkins); Clinic Study Acquired Bleeding-IG (Danielle Dalton); Clinic Study Acquired Bleeding/IG (Danielle Dalton); Clinical Bulk (Gerd Eisenmann); Clinical Bulk (Noemi Scholl); Clinical Bulk (Rene Bruegger (Inherited)); Clinical Compliance (Mihaela Carla Nosca); Clinical Compliance and Training (CC&T) (Saskia Ruis); Clinical Data Standards and Programming (Dieter Boss); Clinical Development (Wilfried Seifert); Clinical Development - Study (Christa Lewiski); Clinical Development - Transplant (Paul Shore); Clinical Development Operations (Craig Coffman); Clinical Development Operations (Deirdre BeVard); Clinical Development Operations (Kazuaki Hashimoto - ); Clinical Development Operations (Kazuaki Hashimoto ??? ?? - ???? ?????); Clinical Development Operations I (Craig Coffman); Clinical Development Respiratory (Lars Groenke); Clinical Disclosures & Transparency (Vicki Oosterbaan); Clinical Epidemiology (Quazi Ataher); Clinical Epidemiology (Susan Colilla); Clinical Epidemiology (Susan Welsh (Inherited)); Clinical Operations (David J. Parker); Clinical Operations (Thomas Verish); Clinical Operations (Valerie Reynaert); Clinical Operations 1 (Jennifer Weaver); Clinical Operations II (Valerie Reynaert); Clinical Operations Japan (Hideshiro Benz); Clinical Operations Serology (David Bibby); Clinical Operations Systems (Simone Dierkes) (Simone Dierkes); Clinical Ops 2 (Michael Giordani); Clinical Oversight Manager (Katja Ganter); Clinical Oversight Manager (Miriam Hochthurn); Clinical Oversight Manager (Stefanie Auer); Clinical Pharmacology & Early Development (Amy Harman); Clinical Pharmacology & Early Development (Stephen Caltabiano); Clinical Pharmacology & Translational Dev (John Roberts); Clinical Pharmacology & Translational Dev ¬ñ CPAT (Diana Lanchoney); Clinical Pharmacology & Translational Development CPAT (Diana Lanchoney); Clinical Pharmacology &Early Development (Diana Lanchoney (Inherited)); Clinical Pharmacology &Early Development (Dipti Pawaskar); Clinical Pharmacology &Early Development (Jagdev Sidhu); Clinical Pharmacology &Early Development (Joanne Ma); Clinical Pharmacology &Early Development (John Roberts); Clinical Pharmacology &Early Development (Michael Tortorici); Clinical Pharmacology (Bo Zheng); Clinical Procedural Documents &Standards (Thomas Verish (Inherited)); Clinical Programming (Stefan Hofmann); Clinical Programs (Christine Joch); Clinical Quality (Claire Pope); Clinical Quality (Karen Gard''ner (Inherited)); Clinical Quality Assurance (Daisy Maldonado-Ortiz); Clinical Quality Assurance (Joy Quinal); Clinical Quality Assurance (Pontaah Arbtan); Clinical Quality Assurance (Sharon Reinhard); Clinical Quality Assurance (Terrence Purnell); Clinical Quality Assurance (Volker Nickel); Clinical R&D (Hideto Akama - ); Clinical R&D (Hideto Akama ??? ?? - ??? ????); Clinical Research & Development - Transplant (Scott Adler); Clinical Safety (Corrinne Clement); Clinical Safety (Maria Mller); Clinical Safety (Maria M√ºller); Clinical Safety (Velma Hurst); Clinical Science (Eve Versage); Clinical Science (Naohito Sato); Clinical Sciences Transplantation (Christine Voigt ); Clinical Sciences Transplantation (Christine Voigt); Clinical Scientist (Andres Brainsky (Inherited)); Clinical Scientist (Jenny Mears); Clinical Serology Operations Lead (Frank Iozzo); Clinical Strategy&Development (Hideto Akama ??? ?? - ??? ????); Clinical Study (Agnieszka Turowska); Clinical Supply Chain Planning (Ulrich Mengel); Clinical Supply Quality (Carl Forte); Clinical Supply Quality (Matthew Wokulich); Clinical Trial Process Improvement & Innovation (Steve Walker); Clinical Trial Process Improvement & Innovation (Thomas Kelly); Clinical Trial Supply (Patrick McLaughlin); Clinical and TA Strategy (Steven Pascoe); Coag, Devices & Special Products (Ignacio Rodriguez); Coag, Devices & Special Products (Juergen Zorn); Coagulation & CC Sales Force (Emmanuelle Massonie (Inherited)); Coagulation & CC Sales Force (Jean-Vincent Viale); Coagulation & CC Sales Force (Thierry BERTHOULE); Coagulation & Oncology (Kristin Ingrassia); Coagulation & Oncology (Sylvia Herget); Coagulation & Oncology 1 (Kerstin Jung); Coagulation (Janine Dolan); Coagulation - CPP (Kristy Bandza (Inherited)); Coagulation Manufacturing (Kristy Bandza); Coagulation Manufacturing (Union) (Kristy Bandza (Inherited)); Coagulation Sales South (Marlene Gregory (On Leave)); Coagulation Sales South (Marlene Gregory); College Station 152 (Kandra K Blodgett); College Station 152 (Lauren Parks); College Station 152 (May Walker); College Station 152 ACM Area 1 (Kailey Stockton); College Station 152 ACM Area 2 (Amanda Miller); College Station 152 ACM Area 2 (DANIELLE GARRETT); College Station 152 QA (Kacie Goad); College Station 152 QA (May Nowalk); College Station 152 QA (May Walker); Colorado Springs 277 (Amanda M Cvitkovich); Colorado Springs 277 ACM Area 1 (Ben Huff); Colorado Springs 277 ACM Area 2 (Leon Clemons Jr.); Colorado Springs 277 ACM Area 2 (Sang Nguyen); Colorado Springs 277 QA (Crystal L Reichard); Columbia 217 (Catherine Watson); Columbia 217 (Monique Simpson); Columbia 217 ACM Area 1 (Mirna Rodriguez); Columbia 217 ACM Area 2 (Gregory Hines); Columbia 217 QA (Alissa Elke); Columbia 217 QA (Brandon Hoffman); Columbia 217 QA (Victoria McIntyre (Inherited)); Columbia 271 (Beth Brooks-Mccoy); Columbia 271 QA (Eric Mathis); Columbia 612 (Catherine Watson); Columbia 612 (Jose Pineda); Columbia 612 ACM Area 1 (Joyce A Jackson); Columbia 612 ACM Area 2 (Garrett Palmer); Columbia 612 QA (Aniashalla McDuffie); Columbia 612 QA (Shannon V Brown); Columbia 612 QA (Victoria McIntyre (Inherited)); Columbus 150 (Mark A Leach); Columbus 150 (Matthew Z Osborne); Columbus 150 ACM Area 1 (Nasha Ausberry); Columbus 150 ACM Area 2 (Alison L Woody); Columbus 150 QA (Tina M Miller); Columbus 409 (Angela L Funk); Columbus 409 ACM Area 1 (Jacob A Wilcox); Columbus 409 ACM Area 2 (Stella Shella May Oliver); Columbus 409 QA (Thomas U Anderson); Com Dev Immunology (GABRIELA ESPINOZA); Com Dev Immunology (Gabriela Espinoza); Com Dev Immunology (Karen MacPhail); Com Dev Immunology (Lawrence Bruck); Com Dev Medical (Birgit Voelker); Com Ops Human Resources Asia Pac (Jenny Zeng); Com Ops Human Resources Asia Pac (Trina Hendri (Inherited)); Com Ops Human Resources EU (Marc Htting); Com Ops Human Resources EU (Marc H√∂tting); Com Ops Human Resources ICO (Jenny Alexandra Kjaer); Com Ops Human Resources ICO (Jenny Kjaer Rotzler); Com Ops Human Resources NA (Elizabeth Wixted); ComDev Coagulation (Jens Oltrogge); ComDev Speciality Products (Georg Henkel); ComDev Speciality Products 1 (Georg Henkel); ComOps Argentina Accounting (Guadalupe Porro Greco); ComOps Argentina Finance (Silvina Lazzari); ComOps Argentina Marketing (Lucia I Grossi); ComOps Argentina Sales (Fernando Grosso); ComOps Brazil Finance (Marcelo Di Napoli); ComOps Brazil Market Access (Gerdeon Aurelio A Paiva); ComOps Brazil Marketing (Cristina Daniel Paulino); ComOps Brazil Operations (Cristina Junko Nakai); ComOps Brazil Regulatory Affairs (Rosana Batista); ComOps Brazil Sales (Luis Gustavo Gomes); ComOps Business Operations GE/AT/EEU (Karl Fox); ComOps Canada Coag & CC (MICHAEL LEO); ComOps Canada Finance (Michael McAllister); ComOps Canada Medical Affairs (MSL) (David Barnes (Inherited)); ComOps Canada Medical Affairs (MSL) (Debbie Bensen-Kennedy (Inherited)); ComOps Canada Regulatory Affairs (Vaneeta Bhatia); ComOps Canada Sales (MARIE-EVE JACQUES); ComOps Colombia Accounting (Carlos Andres Loaiza Barragn); ComOps Colombia Accounting (Carlos Andres Loaiza Barrag√°n); ComOps Colombia Sales (Martha Romano Gomez); ComOps Controlling GE/AT/Emerg. EU (Oliver Rosenthal); ComOps Demand Planning EU (Heike Kayser); ComOps Finance FP&A EU (Tamara Lissitsyna); ComOps Finance/Supply Chain/Compass EU (Heinz Berghoff); ComOps Government Reporting/Compliance (Mike Andrews (Inherited)); ComOps Intercontinental MBR (Bjrn Schfer); ComOps Intercontinental MBR (Bj√∂rn Sch√§fer); ComOps Intercontinental MBR (Manfred Nolte); ComOps Market Access (Stefan Neudrfer); ComOps Market Access (Stefan Neud√∂rfer); ComOps Marketing Coagulation (Dave Lewis); ComOps Marketing Coagulation (JD Kohutka); ComOps Marketing GE/AT/Emerg. EU (Elisabeth Averwerser); ComOps Meetings & Conventions (Molly Hess Knodel); ComOps Mexico Finance & Administration (Carlos Salas); ComOps Mexico Finance & Administration (HECTOR ESCOBEDO); ComOps Mexico Market Access (Humberto Maciel); ComOps Mexico Product Management (THALIA FIERRO DE LEON); ComOps Mexico Regulatory Affairs (Sandra Velasco); ComOps Mexico Sales (Jorge L Gastlum); ComOps Mexico Sales (Jorge L Gast√©lum); ComOps NA Business Operations (Denise Von Dohren); ComOps NA Business Operations (Mike Andrews (Inherited)); ComOps NA Government Reporting (Pamela Makosky); ComOps NA Government Reporting (Ronald Ritter Jr); ComOps NA Government Reporting (Sarah Palmer); ComOps NA Learning & Support Services (Lynn DiBonaventura); ComOps NA Learning & Support Services (Mike Andrews (Inherited)); ComOps NA Market Insights (Venkatesh Ramakrishnan); ComOps NA Marketing Consumer (Janet A Reimund); ComOps NA PRC Operations (Diane Wright); ComOps NA, Master Data (David Fling); ComOps NA, Sales Operations & CRM (Michael Price); ComOps NA, Sales Operations & Incentive Compensation (Michael Price); ComOps NA, Sales Operations (Jerry Burgener); ComOps North America Medical Affairs (Debbie Bensen-Kennedy); ComOps North America Medical Affairs (Judith Vensak); ComOps North America Medical Affairs- Immunology TA (Arie Katz); ComOps Reimbursement and Access (Dina Inverso); ComOps Reimbursements and Access (Jeffrey Lucero); ComOps Reimbursements and Access (Kate O''Connor-Masse); ComOps SP - Payers (Pete Dickson); ComOps SP / Payers (Pete Dickson (Inherited)); ComOps SP / Payers (Pete Dickson); ComOps Sales Germany (Michael Bernd Rode); ComOps Switzerland (Isabelle Dahinden); ComOps Therapeutic Area EU (Antti Kourula); ComOps Therapeutic Area EU (Damian Gilkerson); ComOps US Atlantic Specialty (Jeffrey Todd Winn); ComOps US Coag Midwest (Mark A Wiener); ComOps US Coag Northeast (Dominic Lattanzi); ComOps US Coag Northeast (Ivan Holtz (Inherited)); ComOps US Coag Sales (Ivan Holtz); ComOps US Coag South (Mark Fitzgerald); ComOps US Coag West (Scott Vollet); ComOps US Corporate Accounts (Paul Kurt); ComOps US Delaware Valley Specialty (Kellee Fearon); ComOps US Delaware Valley Specialty (Marlene Gregory); ComOps US Medical Affairs Coagulation (Jerry Powell (Inherited)); ComOps US Medical Affairs Coagulation (Vidhi Desai); ComOps US Medical Affairs Coagulation I (Vidhi Desai); ComOps US Medical Affairs Immunoglobulin (Ann Bullinger); ComOps US Medical Affairs Immunoglobulin (Ayman Kafal); ComOps US Medical Affairs Specialty (Laurel Omert); ComOps US Medical Affairs Specialty (Paolo Bajcic); ComOps US Medical Information (Deirdre Smith); ComOps US Mid-Atlantic Immunology (Lori Giampaolo); ComOps US Mid-Atlantic Immunology (Michael Flaherty); ComOps US Mid-South Immunology (Cory Baldwin); ComOps US Mid-South Immunology (James Heintz (On Leave)); ComOps US Mid-South Immunology (James Heintz); ComOps US Mid-South Specialty (Bill Stokes); ComOps US Mid-South Specialty (Brett Weathersby); ComOps US Mid-West Immunology (Mark C Morgan); ComOps US North Central Specialty (Steve A Mick); ComOps US Northeast Immunology (Pamela Buschbacher); ComOps US Northeast Specialty (Craig Strok); ComOps US Northeast Specialty (Gina Blair (Inherited)); ComOps US Northeast Specialty (Rebecca Riebe (On Leave)); ComOps US Ohio Valley Specialty (Jason Flowers); ComOps US South Central Immunology (Joseph Guinan (Inherited)); ComOps US South Central Immunology (Roxanne Quirin); ComOps US South Central Specialty (David Van Buskirk); ComOps US Southeast Immunology (James Gleichowski); ComOps US Southeast Specialty (Michael Allen); ComOps US Specialty Marketing (Bernadine Koziara (Inherited)); ComOps US Specialty Marketing (Tom Groeling); ComOps US Specialty Sales (Gina Blair); ComOps US Supply Chain (Madonna Jarrett); ComOps US West Central Immunology (Greg Logsdon); ComOps US West Central Specialty (Ann Andari); ComOps US West Central Specialty (Gina Blair (Inherited)); ComOps US West Central Specialty (Kimberly Kustron); ComOps US West Immunology (Greg Hansen); Combination-Device QA (TATYANA ILYINA); Combination/Device QA (TATYANA ILYINA); Comm Dev and Scientific Affairs (Edith Rosenberg); Comm Dev and Scientific Affairs (Tara Satyanand); Comm. Oper. GE/AT/Emerg. EU Diverse (Dirk Hoheisel (Inherited)); Commercial Operations Junxin (Shaolin Huang ?HuangShaolin?); Commercial (Brent MacGregor); Commercial (Stephen Allan); Commercial Access & Policy US (Marie Mazur (Inherited)); Commercial Access & Policy US (Shanthy Krishnarajah); Commercial Account Management - Public Health (Jeffrey Benton); Commercial Argentina (Gonzalo Pereira); Commercial Business Operations Sales & Analytics (Kevin Harkins); Commercial Business Operations Training (Patrick Gostomski); Commercial Business Services (Lynda Kulp); Commercial China (Cheng-Yen Tsai ); Commercial China (Cheng-Yen Tsai ?????); Commercial China (Harold Chan ?????); Commercial Contracts US (Yvonne Blom Hilsky); Commercial Contracts US (Yvonne Hilsky); Commercial Contracts-US (Agnes Goins); Commercial Customer Operations US (John Spencer); Commercial Customer Operations US, Customer Service, Account Mgmt (Teka-Ann Forrester); Commercial Customer Service/Supply Chain (Narelle Kinson); Commercial Development (Jane True); Commercial Development Global (Debbie Drane); Commercial Development and Policy (Dirk Ulrich Hofmann); Commercial Development and Policy (Lorna Meldrum); Commercial Development and Policy (Marie Mazur); Commercial Excellence (Makoto Miura); Commercial Excellence (Roger Melliger (Inherited)); Commercial Excellence (Tomohiro Miura - ); Commercial Excellence (Tomohiro Miura ??? ?? - ??? ?????); Commercial Excellence Training & Development Office (Chiho Muto); Commercial Excellence and Training (Cheng-Yen Tsai ????? (Inherited)); Commercial Excellence and Training (Joanne Liu ); Commercial Excellence and Training (Joanne Liu ????); Commercial Governance and Transparency (Artimis Ghassemi); Commercial Governance and Transparency (Daniel Quayle); Commercial IT solutions (Thomas Wilcock); Commercial Italy (Massimo Leoni); Commercial MEA (Camilla Shen (Inherited)); Commercial Marketing US (David Ross (Inherited)); Commercial Marketing US (Glenn Omanio); Commercial Marketing US (Tara Charvat); Commercial National Accounts US (Stefan Merlo); Commercial Operations (Brent MacGregor); Commercial Operations (Deniz Bagdadi); Commercial Operations (James Gaw); Commercial Operations (Mark Ridge); Commercial Operations (Sam Dowdle); Commercial Operations Argentina (Juan Pablo Guereo); Commercial Operations Argentina (Juan Pablo Guere√±o); Commercial Operations Australia/NZ (Loretta Croker); Commercial Operations Austria (Beate Pettinger-Natmenig); Commercial Operations Austria (Beate Pettinger-Natmessnig); Commercial Operations Austria (Beate Pettinger-Natme√ünig); Commercial Operations Austria (Martin Tenlen); Commercial Operations Benelux (Patrick Reygaert); Commercial Operations Brazil (Gustavo Fernandes); Commercial Operations Canada (Philippe Hebert); Commercial Operations Chile (Juan Pablo Ambar); Commercial Operations Colombia (Eduardo Cabas); Commercial Operations Colombia (Juan Feliu (Inherited)); Commercial Operations Czech Republic (JI KAPEREK); Commercial Operations Czech Republic (JI?√ç KA¬äPEREK); Commercial Operations Czech Republic (Ondrej Halasz); Commercial Operations Denmark (Gitte Stausholm); Commercial Operations Europe (Lutz Bonacker); Commercial Operations Finance (Adrienne Ford); Commercial Operations Finance (Amanda White); Commercial Operations Finance (Marcelo Estrella); Commercial Operations Finance (Michael Kochanski); Commercial Operations France (Franck Puget); Commercial Operations GE/AT/Emerg. EU (Dirk Hoheisel); Commercial Operations Global HR (Trina Hendri); Commercial Operations Greater China (Ben Yang ?????); Commercial Operations Greater China (Harold Chan ); Commercial Operations Greater China (Harold Chan ?????); Commercial Operations Greater China Junxin (Paul Li (Inherited)); Commercial Operations Greece (Marianna Konstantinidi); Commercial Operations Hong Kong (Roger Cheng ); Commercial Operations Hong Kong (Roger Cheng ?????); Commercial Operations Hungary (Lukacs Attila); Commercial Operations Intercontinental (Markus Staempfli); Commercial Operations Italy & Greece (Oliver Schmitt); Commercial Operations Italy, Business Operations (Giuseppe Fioravante); Commercial Operations Italy, Central Italy Sales (Claudio Chiorri); Commercial Operations Italy, North Italy Sales (Maurizio Gonizzi Barsanti); Commercial Operations Italy, South Italy Sales (Paolo Lombardi); Commercial Operations Japan (Jean-Marc Morange); Commercial Operations Junxin (Qiuhui Shi); Commercial Operations Mexico (Nicolas Martinez Gould); Commercial Operations Nordic (Martin Tenlen); Commercial Operations Nordic (Ulf Hultquist); Commercial Operations North America (Robert Lojewski); Commercial Operations Poland (Grazyna Debowska); Commercial Operations Poland (Marek Skotnicki); Commercial Operations Portugal (David Ventura); Commercial Operations SG, ML & ID (Matthew Ho); Commercial Operations Slovakia (Andrea Solivajsova); Commercial Operations Slovakia (JI KAPEREK); Commercial Operations Slovakia (Ondrej Halasz); Commercial Operations Spain & Portugal (Mara Jose Sanchez Losada); Commercial Operations Spain & Portugal (Mar√≠a Jose Sanchez Losada); Commercial Operations Turkey (Aysun Acer); Commercial Operations Turkey (Aysun Yanbol); Commercial Operations Turkey (Ercin Kugu); Commercial Operations Turkey 2 (Mehmet Aydogan); Commercial Operations United Kingdom (Eddie Owens); Commercial Operations United Kingdom (Edward Owens); Commercial Operations United Kingdom II (Dan Betts); Commercial Operations, Influenza Vaccines (Linda DU); Commercial Operations, Americas (Haejin Chung); Commercial Operations, Americas (Jane True); Commercial Operations, Canada (Gillian Stafford); Commercial Operations, DE, Customer Service (Thomas Kasimirat); Commercial Operations, DE/CH/AU (Deborah Di Salvo); Commercial Operations, DE/CH/AU (Frank Eberlein); Commercial Operations, EMEA (Enric Canelles Torres); Commercial Operations, Fleet, Meetings & Travel Strategic Sourcing (Kristie Boyd); Commercial Operations, Influenza Vaccines (Linda DU); Commercial Operations, International and Pandemic (Lorna Meldrum); Commercial Operations, Italy (Maura Cambiaggi); Commercial Operations, LatAm (Carmen Pereyra); Commercial Operations, LatAm (Carmen Rosa Pereyra Davila); Commercial Operations, Marketing UK (Kaush Gandhi); Commercial Operations, North Americas (David Ross); Commercial Operations, Spain (Antonio Lloret Parellada); Commercial Operations, UK (Deborah Di Salvo); Commercial Operations, UK (Helen Concilia); Commercial Ops (John Lawrence); Commercial Ops North America (John Fudala); Commercial Ops North America (Robert Lojewski (Inherited)); Commercial Pandemic Contracts (Randall Deck); Commercial Taiwan (Cheng-Yen Tsai ?????, King Lian Wang ?????); Commercial Taiwan (King Lian Wang ?????); Commercial Taiwan (Louis Liu ); Commercial Taiwan (Louis Liu ?????); Commercial, Business Operations UK (Charlotte Wrigley); Commercial, Business Operations, Customer Support (Jeff Wettlaufer); Commercial, Customer Service UK - Liverpool (Charlotte Wrigley); Commercial, Customer Service UK ¬ñ Maidenhead (Amy Smith); Commercial, Global Fluad (Richard Bland); Commercial, Global Flucelvax & Afluria (JESSICA O''DONNELL); Commercial, Global Flucelvax & Afluria (Jessica O''Donnell); Commercial, National Accounts Field US (Aaron Hubner); Commercial, National Accounts Field US (Aaron Martin Hubner); Commercial, National Accounts Field US-Summit (Gregg Quatrini); Commercial, National Accounts UK (Raashid Mehmood); Commerical, Product Planning & Innovation (Loddie Foose); Commission & Qualification (Arelis Cabezas); Communications (Maureen Powell); Communications (Melanie Kerin); Communications (Polina Miklush); Communications (Sandra Ruckstuhl); Company Secretary (Gregory Boss (Inherited)); Company Secretary Office (Sonya Curciev); Compass / gCRM System (Giorgio Lippi (Inherited)); Compass / gCRM System Benelux (Patrick Reygaert (Inherited)); Compass / gCRM System France (Pascale Ogel Le Guen (Inherited)); Compass Excellence Center (John Eric Bunn); Compensation Programs (Anthony Dillon); Compensation Programs (Timothy O''Donnell); Competitive Intelligence (Magdalena Popesco); Compliance & Improvement Manager (Elaine Feely); Compliance (Andrea Studer); Compliance (Cindy Rocknowski); Compliance (Jeffrey Zoubek (Inherited)); Compliance (Robin A Mroz); Compliance Americas (John Neff (Inherited)); Compliance Americas (Thomas Spittal); Compliance I (Margrit Waterval); Compliance II (Dagmar Riffel (Inherited)); Compliance II (Jutta Regenfelder); Compliance Management Engineering (Rainer Kutsch); Compliance Support 1 (Volker Gawantka); Computerized Data & Instruments Systems (C√©line Pires); Computerized Data & Instruments Systems (Hiroshi Nakano); Congress (Jean-Marc Morange (Inherited)); Congress (Kyota Yamaoka ??? ?? - ???? ????? (Inherited)); Connected Healthcare (Mark Ridge (Inherited)); Construction, Engineering & Qualification (Adam Robb); Construction, Engineering & Qualification (Mike Spencer); Construction, Engineering & Qualification (Richard Hayne); Construction, Engineering & Qualification 1 (Michael Ah-Cann); Construction, Engineering & Qualification 2 (Adam Robb (Inherited)); Construction, Engineering & Qualification 2 (Jim Li); Construction, Engineering & Qualification CSL112 (Jacqueline Murphy); Content Management (Christian Mohr); Content Management (Elisabeth Averwerser (Inherited)); Contract Administration (Frances Richardson); Contract Manufacturing (Ian Goldup); Contracts / Claims Management (Kevin Rathmell [C]); Controlling & Financial Reporting (RYAN HANSEN); Controlling (Wolfgang Thiel); Corporate & Expert Services (Patrick Haeberli); Corporate Affairs (Sharon McHale); Corporate Affairs - US (Polina Miklush); Corporate Affairs and Communications (Anthony Farina); Corporate Communications (Jemimah Brennan); Corporate Communications (Jemimah Pentland); Corporate Communications - Japan (Hidemi Akazawa); Corporate Communications Business Partnering (Melanie Kerin); Corporate Communications Business Partnering 2 (Melanie Kerin); Corporate Development (Serge Scrofani); Corporate Finance (John Levy); Corporate Finance (Paul Coulter); Corporate Finance Edge Controller (Julia Wilkinson [C]); Corporate Services (Marvin Anthony Edwards II); Corporate Services (Michael Hays (Inherited)); Corpus Christi 603 (Sam Schultz (Inherited)); Corpus Christi 603 (Sam Schultz); Corpus Christi 603 (Tashana K Sanders); Corpus Christi 603 ACM Area 1 (Lorena Luna); Corpus Christi 603 ACM Area 2 (Nola V Baker); Corpus Christi 603 QA (Tara L Spitzer); Cost Center Accounting & Sales Reporting (Patrick Eley); Cost Center Controlling (Rainer Althaus); Counsel Americas (Shawn Gibbs); Counsel Americas (Shawn Michael Gibbs); Counsel EMEA (John Minardo (Inherited)); Counsel EMEA (Martin Quinn); Counsel EMEA (Virginie Didier); Country & Region Management (Geeseung Yoo); Credit & Collection (Anette Rummelsberger); Credit Collection (Paul Fellingham (Inherited)); Critical Systems - HVAC (Union) (Jeff J Parks (Inherited)); Critical Systems - HVAC (Union) (Michael D Proctor); Critical Systems - Water Systems (Union) (Jeff J Parks (Inherited)); Critical Systems - Water Systems (Union) (Jim Meils); Critical Utilities (Frank Miles III); Critical Utility Projects (Jim Meils); Culture and HR Strategy (Linda Hagerty-Dotterer); Customer Care Center I (Christian Siebert); Customer Care Center II (Oliver Weck); Customer Engagement Management (Brian Johnson (On Leave)); Customer Engagement Management (Gina Malloy); Customer Service & Logistics (Massimo Leoni); Customer Service (Bernhard Czapla (Inherited)); Customer Service (Consuelo D''Amore); Customer Service (Crystal Marie Wiles); Customer Service (Crystal Wiles); Customer Service (Holger Milkereit (Inherited)); Customer Service (Michael Bernd Rode (Inherited)); Customer Service (Rainer Adam (Inherited)); Customer Service (Robert Rohs); Customer Service (Sandra Lafoca (Inherited)); Customer Service (Sean Grinnell); Customer Service (Susanne M√∂ller (Inherited)); Customer Service ComOps Intercontinental Region (Anita Erber); Customer Service Deutschland (Roger Melliger); Customer Service France (Charlotte Rougi (Inherited)); Customer Service France (Charlotte Rougi√© (Inherited)); Customer Service France (Julien Roche); Customer Service Manager (Anna Arena); Customer Service Ops (Sean Grinnell); Customer Service and Launchmanagement (Christoph Krug); Customer Service and Launchmanagement (Jochen Wagner); Customer Service und Logistik (Susanne Pfeiffer); Customer Services & Logistics (Barbara Kemp); CyberSecurity Operations (Daniel Pekol); CyberSecurity Operations (Edward Ferrara (Inherited)); Cytogam (Thomas Beer); DGL 1 (Advait Jagirdar); DOCI eClinical Technology (Thomas Verish (Inherited)); DS Manufacturing (Barbara Beugger); DS Manufacturing (Matthias Kaeser); DSP & Analytics (Michael Schmitt); DSP Engineering (Dave Tomsik [C]); DSP Engineering (Rene Boros); DSP Laboratories (Arnaud Vonarburg); DSP Manufacturing 1 (Matthias Kaeser); DSP Manufacturing 2 (Baptiste Leclerc); DTP & Graphical Control (Metin Yilmaz (Inherited)); Dallas 078 (Brenda C Greenfield (Inherited)); Dallas 078 (Elizabeth Casillas); Dallas 078 (Elizabeth Trejo); Dallas 078 ACM Area 1 (Rhonda Shields); Dallas 078 ACM Area 2 (Brenda C Greenfield (Inherited)); Dallas 078 ACM Area 2 (Melissa J Chapman); Dallas 078 QA (Carlotta McCoy); Dallas 078 QA (Wajeehah Al-Uqdah); Dallas 510 (Elizabeth Casillas); Dalton 296 (Brittany Goforth); Dalton 296 ACM Area 1 (Dana Hibbs); Dalton 296 ACM Area 2 (Annette L Switzer); Dalton 296 QA (Wayne J Bixler); Dangyang Clinical Inspection (Xingzuan Zhang ?????); Dangyang Inspect (Liuqing Wan ); Dangyang Inspect (Liuqing Wan ?????); Dangyang Inspection (Pingfan Zhang ?????); Dangyang Office Management (Wanwan Zhu ); Dangyang Office Management (Wanwan Zhu ?????); Dangyang Office Management (Xiaoquan Zhu ?????); Dangyang Plasma Collection (Yingshuang Li ); Dangyang Plasma Collection (Yingshuang Li ?????); Dangyang Plasma Collection Center (Jack Zhou ); Dangyang Plasma Collection Center (Jack Zhou ?????); Dangyang Plasma Collection Center (Qingqing Wang ?????); Dangyang Plasma Collection Management (Yaling Zhu ); Dangyang Plasma Collection Management (Yaling Zhu ?????); Dangyang Plasma Sourcing (Meng Hu ); Dangyang Plasma Sourcing (Meng Hu ????); Dangyang Plasma Sourcing Management (Xuejun Wang ); Dangyang Plasma Sourcing Management (Xuejun Wang ?????); Data Analytics & Metrics (Bill Bigney); Data Analytics (Aaron Imig); Data Analytics (Constanze Buchter); Data Analytics (John Choy); Data Analytics (Michael Schrder (Inherited)); Data Governance (STEPHEN SMITH); Data Management (Steven Carr); Data Management (Thomas Hahlgans); Data Management KOP 1 (Mara Strelecki); Data Management Operations (Charles Johnson); Data Operations&Clinical Infrastructure (Thomas Verish (Inherited)); Data Operations&Clinical Infrastructure (Thomas Verish); Data Services (Don Konemann); Data Services (Sachin Ohal); Data and Analytics (Enterprise Applications) (Robert Hawker); Data and Analytics Center of Excellence (Thomas Gsponer); Database (Bhavesh Patel); Davenport 424 (Greg Boden); Davenport 424 ACM Area 1 (Jacinda L Head); Davenport 424 ACM Area 2 (Tabathia Ann Dells); Davenport 424 QA (Christopher R Doerscher); Dayton 408 (Daniel K Osborne); Dayton 408 (Megan L Waldeck); Dayton 408 ACM Area 1 (Ashley Instone); Dayton 408 ACM Area 1 (Shalia Sloan); Dayton 408 ACM Area 2 (Ashley K McConnell); Dayton 408 QA (Daniel K Osborne); Dayton 408 QA (Megan L Waldeck); Decatur 104 (Antonia Geiselmayr); Decatur 104 ACM Area 1 (Binh Tang); Decatur 104 ACM Area 1 (Shauntia Cobb); Decatur 104 ACM Area 2 (Antonia Geiselmayr (Inherited)); Decatur 104 ACM Area 2 (Binh Tang); Decatur 104 QA (Amaris A Wiggins); Decatur 104 QA (China Washington); Decatur 104 QA (Kyle M Lehrke (Inherited)); Decatur 446 (Amber McCullough); Decatur 446 (Jordon Lyon); Decatur 446 (Sentoria D Leonard-Brown); Decatur 446 ACM Area 1 (Amber McCullough (Inherited)); Decatur 446 ACM Area 1 (Amber McCullough); Decatur 446 ACM Area 2 (Aja Marbley); Decatur 446 ACM Area 2 (Amber McCullough (Inherited)); Decatur 446 QA (Tony D Giebelstein Jr); Delivery Support (Christopher A Betterton); Delivery Support (Robert Boland (Inherited)); Demand Planner (Rose Cimbora); Demand Planning (Ann Cipkins); Demand Planning (Tsutomu Nagoya ???? ? - ??? ????); Dept 1216 Antivenom Manufacture (Andrew Penhale); Dept 1216 Antivenom Manufacture (Cassandra Smoult); Dept 822, Cell Culture and Purification (Jamie Black); Development & chem. Quality Control (Daniel Frattini); Development Applications (Andy Chung); Development GMP Laboratory (DGL) (Andreas Meister (Inherited)); Development GMP Laboratory (DGL) (Heike Gocht); Development Projects (Heather Davis (Inherited)); Development and Support (Johannes Schiebel); Development and Support (Stefan Schmidbauer); Digital Communications (Mairian Gildea); Digital Communications (Piers Dickinson); Digital Delivery & Data (Robert Boland); Digital Health (Brian Johnson); Digital Strategy Implementation (David Christie); Digital Workplace (Dana Leeson); Dir Com Op - Vaccines (Helen Concilia); Dir, Health Economics 724 (Stuart Harsley); Direct Procurement (Angelia Crumbley); Director Clinical Science (Janine Oberije); Director Comm Ops - Pharma (Danielle Dowell); Director HR 924 (Yvette Saunders); Director QA Continuous Improvement & Issues Management (Adrian Meade); Director Quality Control (Leonora Pancho); Director Supply Chain (Lachlan Cruise); Director of Engineering (Brian Price); Director of Manufacturing, Products of National Significance (Cassandra Smoult); Director of Manufacturing, Products of National Significance (Lisa Lamb); Director, Commercial Operations NZ (Catherine Murphy); Director, Marketing (Rachel Jensen); Director, Marketing (Theo Horafiaris); Director, Program Execution (Gail Dawson); Dispatch (Bernd Schfer); Dispatch (Bernd Sch√§fer); Dispatch (Igor Kaucher (Inherited)); Dispensing, Medium & Buffer Preparation (Vreni Frtsch); Dispensing, Medium & Buffer Preparation (Vreni F√∂rtsch); Distribution (Jasmine Ma ?????) (Jasmine Ma ?????); Distribution (John Conway); Distribution (Maggie Wan ); Distribution (Maggie Wan ?????); Distribution (Nan Wang ); Distribution - Central Region (Lu Jin ); Distribution - Central Region (Lu Jin ????); Distribution - DTP, China (Cissy Xi ????); Distribution - East Region (Zhan-jun Liu ); Distribution - East Region (Zhan-jun Liu ?????); Distribution - North Region (Feng Rui ); Distribution - North Region (Feng Rui ????); Distribution - North Region (Kaijian Zhao ?????); Distribution - South Region (Nan Wang ????); Distribution - South Region (Sunny Sun ); Distribution - South Region (Sunny Sun ?????); Distribution - Tendering (Yanfang Zhou ?????); Distribution - West Region (Xuemei Zeng ); Distribution - West Region (Xuemei Zeng ?????); Distribution CH U8 (Rafael Gasser); Distribution CH U8 (Thomas Ryser (Inherited)); Distribution Junxin (Yanfang Zhou ); Distribution Junxin (Yanfang Zhou ?????); District Heights 210 (Cecelia Cutchin); District Heights 210 QA (ALISON CONLEY); District Heights 210 (Cecelia Cutchin); District Heights 210 (Michael W Solomon); District Heights 210 ACM Area 1 (Mickey Wilson); District Heights 210 ACM Area 1 (Tamika Hogan); District Heights 210 ACM Area 2 (Tamika Hogan); District Heights 210 QA (ALISON CONLEY); District Heights 210 QA (Abigail Brown-Delostrinos); Documentation (Arno Karnholz (Inherited)); Documentation (Dominik Erhart); Documentation Management and GMP Training (Jin Tao ); Documentation Management and GMP Training (Vicky Fang ????); Documentation Starting Materials (Angelika Jung); Documentation Starting Materials (Simone Lang); Dothan 504 (Demetia Scott); Dothan 504 ACM Area 1 (Olivia McVey); Dothan 504 ACM Area 2 (Kaitlyn M Delamore); Dothan 504 QA (Roxanne K Schaeffer); Douglas 190 (Alejandra Gonzalez); Douglas 190 (Jose Pineda); Douglas 190 ACM Area 1 (Irma Ornelas); Douglas 190 ACM Area 2 (Alejandra Gonzalez); Douglas 190 ACM Area 2 (Marisela Nunez); Douglas 190 QA (Andrew Larson); Downstream Manufacturing (Alan Cartwright); Downstream Manufacturing Days (Alan Hudson); Downstream Manufacturing Engineering (Anthony Flynn); Downstream Manufacturing Shift 1 (Neil Myerscough); Downstream Manufacturing Shift 2 (Edward Bucknall); Downstream Manufacturing Shift 3 (Alan Hudson); Downstream Manufacturing Shift 3 (Neil Myerscough); Downstream Manufacturing Shift 4 (Craig Ridyard); Downstream Manufacturing Shift 4 (Edward Bucknall); Drawing Office (Andrew Brown); Drug Product (Nicola Di Maiuta); Duluth 613 (Dennis J Lofald); Duluth 613 (Veronica J Kaspszak); Duluth 613 ACM Area 1 (Jenn Jackson); Duluth 613 ACM Area 2 (Angela J O''Hara); Duluth 613 QA (Heidi E Woolhouse); Durham 074 (Thomas Kisicki Jr); Durham 074 ACM Area 1 (Keonna Austin); Durham 074 ACM Area 2 (Damonta A Burch); Durham 074 QA (Meia Moore); E&I with MES/Systems (Josh Mills); E-Support (Marco Grossniklaus); E2E Operations Finance (Marcelo Estrella); ECI Finance/Controlling (Salim Ketata); EEMEA Finance/Controlling (Amanda White); EHS & Business Resilience (Lynette Hodgden); EHS (Liam Ryan); EHS Bern (Lone Carlsen); EHS Bern (Rolf Ingold); EHS Bern (Ulrich Schuerch); EHS Design Construction & Process Safety (Aaron Duff); EHS Kankakee (Dale C Rosene); EHS Kankakee (Lynette Hodgden (Inherited)); EHS Kankakee 2 (Allan Wise); EHS Kankakee 2 (Andrew Uftring); EHS Marburg (J√ºrgen Kanand (Inherited)); EHS Marburg (zcan Campinar); EHS Marburg (√ñzcan Campinar); EHS Plasma (BARBARA WUNDER); EHS Security (Adam Kennell); EHSS Lengnau (Harry Hohl); ELS A (Alain Ducaud); ELS A (Susanne Heins); ELS Z (Peter Reusser); ELS Z (Simon Haenni); EM (Tina Liu); EMEA (Anja Brunlich); EMEA (Anja Br√§unlich); EMEA HR Ops Marburg Team (Inga Menzinger-Koradin); EMEA Service Delivery (Cornelia Huber); EMEA Service Desk (Filipe Cabete); EMEA Service Operations (Bernd Boucsein); EMEA Site Services (Raluca Hodgson); EMR GH Gruppe; EMR HW Gruppe (Patrick Laukel); EMR NBF Gruppe (Thomas Peil); ERP Applications (Mourad Boulanouar); ERP Applications (Nagesh Ramesh); ERP Solution (Rajan Thomas); ERP Solution (Rajan Thomas) (Rajan Thomas); ERP Solution Center (KoP) (Peter Eliasson); ERP Solution Center (MBR) (Jochen Preis); ERP Solution Center (Neelesh Kulkarni); ERP Solution Centre (AU) (Shama Ravindran); ERP and Enterprise Applications (Steven Harvey); ES Qualification (Michael Kocher); ETA (Colin Steele); ETA (Ian Mackay); ETA (Tim Bullock (Inherited)); ETA + Utilities & Services (Michael Elmer); EU Qualified Person for PhV (Andrew Bond); EU Qualified Person for PhV (Frank Mauler); EU TA Coagulation (Bianca Petzold); EU TA Coagulation (Damian Gilkerson); EU TA Coagulation (Sinem Kaba Pasqualon); EU Therapeutic Area Immunology & Neurology Europe (Peter K Tadros); EU-QPPV Office Lead (Gudrun Dechert); EU/ROW RA Franchise Cell and aQIV (Susan Cameron-Laxton (Inherited)); Early DSP Development (Michael Schmitt (Inherited)); Early Stage DSP Development (EDD) (Lars Robbel); Early Stage DSP Development (EDD) (Michael Schmitt); Early Stage DSP Development (Olga M√ºller); Early USP Development (Jrg Gnther); Early USP Development (J√∂rg G√ºnther); Early USP Development (Stefan Debus); Early Upstream Development (Emmanuel Bizier); Early Upstream Development 1 (Ian Walker); Early Upstream Development 2 (Ellen Otte); East Point 193 (Kimberly Bragg); East Point 193 (William A Voltz); East Point 193 ACM Area 1 (Marshaya Johnson); East Point 193 ACM Area 1 (ROSALIND MCCOY); East Point 193 ACM Area 2 (Latasha A Wech); East Point 193 QA (Amaris A Wiggins); East Point 193 QA (Danelle Jones); East Point 193 QA (Melodee C Ebel (Inherited)); East Providence 202 (Christopher Travalik (Inherited)); East Providence 202 (PAUL BURKE); East Providence 202 (Sean Delong); East Providence 202 ACM Area 1 (Jacqueline Levasseur); East Providence 202 ACM Area 2 (Christine Riebe); East Providence 202 QA (Desiree Guerrero); East Providence 202 QA (Tatyani Guest); Eau Claire 514 (Kayla L Stueber); Eau Claire 514 QA (Melissa K Latourelle); Edge Commercial (Darren Hawker); Edge Finance (Matthew Rees (Inherited)); Edge Financial Accounting (Barry Johnston); Edge Manufacturing (Andrew Penhale); Edge Parkville (Matthew Rees); Edge Parkville - GLIMS (Helen Mihaljevic); Edge Planning (Brent Gorham); Edge Procurement (Mark Van Der Poel); Edge Programme (Ian Dick); Edge Programme (Philip Troughton); Edge Quality (Kate Waterman); Educational Meetings (Marco Kuhn); El Paso 197 (ALEX MARIN); El Paso 197 (Heather Jex); El Paso 197 ACM Area 1 (Michael Garcia); El Paso 197 ACM Area 2 (Cynthia Marquez); El Paso 197 QA (Amanda Robles); El Paso 197 QA (Brenda C Greenfield (Inherited)); El Paso 197 QA (CATIA LOPEZ); El Paso 248 (Edgar Rodriguez); El Paso 248 ACM Area 1 (Manuel Jaramillo); El Paso 248 ACM Area 2 (Albert Lozano); El Paso 248 QA (NOHEMI GARCIA); El Paso 279 (Alejandro Perales); El Paso 279 ACM Area 1 (Crystal Ramirez); El Paso 279 ACM Area 2 (Vanessa Pena); El Paso 279 QA (Kenya Villarreal); Electrical / I&C / BMS (Jan Klee); Electrical Engineer (Tien Nguyen); Electrical Maintenance (Marcel Ziegler); Electrical Maintenance (Vittorio D''Argento (Inherited)); Electro Maintenance (Simon Urfer); Electronic Laboratory Systems (ELS) (Susanne Heins); Electrophoresis and Immunoassays (Michael Albers); Elektroniker fr Automatisierungstechnik (Carmen Walldorf (Inherited)); Elektroniker f√ºr Automatisierungstechnik (Doris Nake (Inherited)); Elternzeit Bielefeld (Kirsten Scheibel (Inherited)); Elternzeit Bielefeld (Kirsten Scheibel); Elternzeit Diverse (Andreas Gehrich (Inherited)); Elternzeit Diverse (Andreas Gehrich); Elternzeit Diverse (Annette Pernitzsch (Inherited)); Elternzeit Diverse (Annette Pernitzsch); Elternzeit Diverse (Claudia Habenicht (Inherited)); Elternzeit Diverse (Claudia Habenicht); Elternzeit Diverse (Damaris Kieckh√∂fer); Elternzeit Diverse (Stephani Keltsch); Elternzeit Diverse (Sven Schuhmann (Inherited)); Elternzeit Diverse (Sven Schuhmann); Elyria 165 (Karin M Rothig); Elyria 165 ACM Area 1 (Nathan G Dailey); Elyria 165 ACM Area 2 (Gabrielle N Scalese); Elyria 165 QA (Calvin Juguilon); Elyria 165 QA (Jason A Skonecki); Emerging Europe (Christian Wieszner); Emerging Europe (Dirk Hoheisel (Inherited)); Employee Relations (Bonnie Shor (Inherited)); Employee Relations (Bonnie Slone (Inherited)); Employee Relations 1 (Tricia N Jackson); Employee Relations 2 (Jan Cameron); Employee Relations 3 (Emmanuella Hedge); End User Services (Christian Reinhardt); End User Services (Rolf Tr√ºmper); Endwell 220 (Barbara Ferrese); Endwell 220 ACM Area 1 (Richard Barber); Endwell 220 ACM Area 2 (Barbara Ferrese (Inherited)); Endwell 220 ACM Area 2 (Tara Streeter); Endwell 220 QA (Aarsalaan Semna); Endwell 220 QA (Richard Purdy II); Energy Management (Anna Fellenberg); Energy/WAD (Sandro Jenzer); Energy/WAD 1 (Michael Hirschi); Eng Business & Systems Mgr (Nicholas Moody); Eng Services - Ops (Mark Mansour); Eng Services -Ops (Rohit Dhorje); Eng Services-Ops (Damien Barri (Inherited)); Eng Services-Ops (Michael Spiteri); Eng Services-Ops (Victor Karafilis); Engineering (Bulk) (Jeff Rehborg); Engineering (Bozana Dujak); Engineering (Bulk) (Jeff Rehborg); Engineering (Controls) (Dennis Prom); Engineering (Controls) (Scott Bilkey); Engineering (Controls) I (Dennis Prom); Engineering (Howard Wilton); Engineering (Johannes Krmer); Engineering (Johannes Kr√§mer); Engineering (My Linh Ly); Engineering (Qualification) (Jeff Mihaichuk (Inherited)); Engineering (Qualification) (Matthew Galley); Engineering (Rainer Kraus); Engineering (Richard Friar); Engineering Compliance (Connie Costanzo); Engineering Data Management (Susan Clough); Engineering Lengnau (Olaf Thiel); Engineering Liverpool (Kevin Ridley); Engineering MAB/Gene Therapy (David Glover); Engineering Maintenance Mgr (Andrzej Wawrzykowski); Engineering Production Manager (Karen Spencer); Engineering Production Manager (Mark Davide); Engineering Production Manager (Stuart Barnes); Engineering Projects Dept (Stuart Freeland-Small); Engineering Projects Manager (Anthony Wrzesinski (Inherited)); Engineering Projects Manager (Anthony Wrzesinski); Engineering Projects Manager (David Ryan); Engineering Serv (Narein Mather); Engineering Serv (Sudhir Kamath); Engineering Serv - Drawing Office (Peter Dubuisson-Perrine); Engineering Serv - Maintenance (Michael Bell); Engineering Serv - Maintenance (Shiran Fernando); Engineering Serv - Management (Crystal Penaluna); Engineering Serv - Management (Geoff Armstrong); Engineering Serv Plan &Support (Benjamin Terbeeke); Engineering Serv Plan &Support (Deepak Cherian); Engineering Serv Plan &Support (Satya Dara (Inherited)); Engineering Serv Plan &Support (Satya Dara); Engineering Services Ops (Jarvis Walker); Engineering Services (Arnold Nigsch); Engineering Services (Atul Malhotra); Engineering Services (Bradley J Eberhart); Engineering Services (Daniel Reveron); Engineering Services (Franz Arnold Nigsch); Engineering Services (James E Viane Jr); Engineering Services (Jose Gonzalez (Inherited)); Engineering Services (Peter Szitas); Engineering Services (Victor Marinelli); Engineering Services - Maintenance (Matthew Johnson); Engineering Services - Maintenance E/I (Jason Fletcher (Inherited)); Engineering Services - Maintenance E/I (Jason Fletcher); Engineering Services - Maintenance E/I (Matt Downey); Engineering Services - Maintenance E/I 1 (Samuel Kanyongo); Engineering Services - Maintenance E/I 2 (Ronnie Mercieca); Engineering Services - Maintenance E/I 3 (Ben Hillman); Engineering Services - Maintenance E/I 4 (ANDREW Rawlinson); Engineering Services - Maintenance E/I 5 (Haisley Okpako); Engineering Services - Maintenance E/I 6 (Jim Haines); Engineering Services - Ops (Amanda Sim); Engineering Services - Ops (Jason Chan); Engineering Services - Ops (Lee Dengler); Engineering Services 3 (Gene Bohn); Engineering Services I (Daniel Reveron); Engineering Services Process Leader (Tim Bullock); Engineering Services ¬ñ Ops (Jarvis Walker); Engineering Standards (Adam Dragolic); Engineering Support (Crystal Penaluna (Inherited)); Engineering Support (Crystal Penaluna); Engineering Support (Geoff Armstrong (Inherited)); Engineering Support (Jayne Crowley); Engineering Technology Transfer (Shannon Boudreau); Engineering, PKV (Anthony Wrzesinski); Engineering, PKV (Brian Price); Engineering/PJM (Roger Stoffel); Engineering/PJM (Sven Schwerdtfeger); Enshi Inspection (Binming Tian ?????); Enshi Plasma (Xiaoxing Jiang ); Enshi Plasma (Xiaoxing Jiang ?????); Enshi Plasma Collection Center (Genxiong Zhou ?????); Enshi Plasma Collection and Office Administration (Min Zhang ); Enshi Plasma Collection and Office Administration (Min Zhang ????); Enshi Plasma Operations (Jing Wang ); Enshi Plasma Operations (Jing Wang ????); Enshi Plasma Sourcing (Liu Yang ????); Enshi Quality Control (Xiaohong Tan ?????); Enshi Quality Control Management (Stevin Cheng ); Enshi Quality Control Management (Stevin Cheng ?????); Enshi Quality Control Management (Xiaoping Tang ?????); Enshi Quality Inspection (Yinglong Liu ); Enshi Quality Inspection (Yinglong Liu ?????); Enshi Supply Management (Hong Yuan ); Enshi Supply Management (Hong Yuan ????); Enterprise Analytics (John Thompson); Enterprise Applications (Charles Lowe); Enterprise Applications (David Christie); Enterprise Applications (Martin Jones (Inherited)); Enterprise Architecture (Ian Wilson); Enterprise Architecture (Keith Walbert); Enterprise Architecture (Movi Banerjee); Enterprise Business Solutions (David Wolozyn); Enterprise Data Management (Matt Barnett); Enterprise Excellence (Andrew Croft); Enterprise Execution Systems (Frank Mastellone); Enterprise Infrastructure & Operations (Bernard Shepard); Enterprise Infrastructure & Operations (Don Konemann (Inherited)); Enterprise Infrastructure & Operations (Greg Misyak); Enterprise Investment Portfolio Management (Aymeric Ange); Enterprise Learning Management (Justin Huss); Enterprise Portfolio & Governance (Matthew Cam (Inherited)); Enterprise Portfolio & Governance (Matthew Cam); Enterprise Portfolio & Governance (Matthew Kokkonen); Enterprise Process Management (Desire Djomani); Enterprise Process Management (Linda Carducci (Inherited)); Enterprise Process Management (Matthias Kienast); Enterprise Security & Risk (Edward Ferrara); Enterprise Security & Risk EMEA (Jrg Koch); Enterprise Security & Risk EMEA (J√∂rg Koch); Enterprise Security - Identity and Access Management (Rebecca Daniels); Enterprise Security - Identity and Access Mgmt (Rebecca Daniels); Enterprise Security Architecture (Wilfried Ziegler); Enterprise Site Management (AU/Asia) (Don Konemann (Inherited)); Enterprise Site Management (AU/Asia) (Viv Louzado); Enterprise Site Management (Andr√© Strahm); Enterprise Site Management (Michael Furchert); Enterprise Site Management MBR (Michael Furchert); Environment (Barbara Dix); Environment Health Safety Sustainability (Dale C Rosene); Environment Health Safety Sustainability (Lynette Hodgden (Inherited)); Environmental Health & Safety (Andrew Hanley); Environmental Health & Safety (David Nelson); Environmental Health & Safety (David Stewart); Environmental Health & Safety (Filanthy Nalpantidis); Environmental Health & Safety (Prue McKeown); Europe Global Reg Affairs (Hazel-Anne Griffiths); Europe HR Ops Tier 1 (Katrin Schpbach); Europe HR Ops Tier 1 (Katrin Sch√ºpbach); Europe HR Ops Tier 1 (Stephan Sch√§ufele (Inherited)); Europe HR Ops Tier 1 (Sylvia Potocnik); European Sourcing Packaging (Matthias Engler); Evansville 614 (Coltin L Springate); Evansville 614 (Michelle S DeCambre); Evansville 614 (Scott Ward); Evansville 614 ACM Area 1 (Tani Baugher); Evansville 614 ACM Area 2 (Ian C Fox); Evansville 614 QA (Karla K Cooper); Execution Systems (Matt Casey); Executive Admin (Rupal Pandit); Executive Assistant & Travel Office (Eliane Bossart); Executive Assistant (Joanne Du); Executive Assistant (Sarah Gleeson); Executive Compensation & Equity (Micaela Costello); Experimental Unit (Felix Hiltwein); Export Admin LatAm (Cindy Jacobs); External Communications (Natalie de Vane); External Materials (Dominik Corbet); External Supply Integration (Cameron Barrett); External Supply Quality Assurance (Eva Streit); External processes (Andr Strahm); External processes (Andr√© Strahm); External processes (Simon Haemmerli); F IX, F II, Inhibitors (Carlotta Debnar-Daumler); F VIII & F IX (Barbara Kalina (Inherited)); F VIII & F IX (Horst Boeder); FBP Project and Portfolio Support (Ritesh Kumar); FP - Non Process Engineering and Construction Management (Jennifer Mastio); FP - Non Process Engineering and Construction Management (Rolf Mnig); Fachlageristen (Carmen Walldorf (Inherited)); Fachlageristen (Doris Nake (Inherited)); Facilities & Utilities (Adila Zaidi); Facilities & Utilities (Bradley J Eberhart (Inherited)); Facilities (Claire Behforooz); Facilities Develop & Services (Barbara Anderton); Facilities Develop & Services (Cameron Cook); Facilities, Capex and Drawing (Mark Hughes); Facilities, Utilities & Services (Michael Elmer); Facility & Waste Management (Michael Andrey); Facility & Workspace Management (Samuel Maurer); Facility Compliance Specialist (Andrew Stalder); Facility Management (Christian Daum); Facility Management (Hansjoerg Bettschen); Facility Management (Hanspeter Bruni); Facility Management (Michael Andrey); Facility Project Management - Non Process Engineering and Construction (Rolf M√∂nig); Facility Project Management - Process Engineering (Darren Vegara (Inherited)); Facility Project Quality Management (Brian Grimson [C]); Facility Project Quality Management (Graham William Telford); Facility Services (Alex Elandt [C]); Facility Services (Alex St√§hli); Facility Services (Sabine Beck); Facility Services (Samuel Maurer); Facility, Data & Laboratory Management (Robert Schwanzer); Faktor X Produkte (Carlotta Debnar-Daumler); Fayetteville 266 (Grant Strayer); Fayetteville 266 ACM Area 1 (Kady-Ann Foster); Fayetteville 266 ACM Area 2 (Joshua Simpson); Fayetteville QA 266 (Daniel Huereca); Federal Way 110 (Tamara Ann Owens); Federal Way 110 ACM Area 1 (Nancy Martinez); Federal Way 110 ACM Area 1 (Tamara Ann Owens); Federal Way 110 ACM Area 2 (Tamara Ann Owens (Inherited)); Federal Way 110 ACM Area 2 (Tiffani Brazzell); Federal Way 110 QA (Jenny Bardwell); Federal Way 110 QA (Simon P Dickinson); Fermentation Research (Thomas Rein); Fertigungselektronik (Michael Kraft); Fertigungselektronik 1.0 (Thomas Grn-Fischer); Fertigungselektronik 1.0 (Thomas Gr√ºn-Fischer); Fertigungselektronik 2.0 (Ralf Gerlach); Fertigungselektronik 2.1 (Ralf Gerlach); Field Sales (Angus Gordon); Field Sales (Catherine Murphy (Inherited)); Field Sales (Kim Fry); Field Services BRN & LGN (Urs Derungs); Field Support (Brett A Wintheiser); Field Support (Brett Wintheiser); Field Support (Robin G Palmer (On Leave)); Field Support (Robin G Palmer); Fill - Finish (Pasquale Carestia); Fill / Finish (Pasquale Carestia); Fill / Finish Manufacturing (David Hartley); Fill /Finish Process Improv Manager (Joseph Connor); Fill Area (Barbara Beugger); Fill Area (Nicola Di Maiuta); Fill Finish (Beat Allemann); Fill Finish (Nicola Di Maiuta); Fill Finish (Shannon Thorp); Fill Finish Marburg (Frank Emmerich); Fill Finish Marburg (Helmut Robert Euler); Fill Finish Operations (Lasher Rao ); Fill Finish Operations (Lasher Rao ?????); Fill Finish Operations (Philip Troughton); Fill Process Technology (Herman Schinkelshoek); Fill and Finish Support (Melissa Addamo); Fill finish (Ian Middleton); Fill finish (John Riley); Fill finish (Marion Taligault Owens); Fill/Finish Formulation (Caterina Colantoni); Fill/Finish Formulation (Norm Mancuso (Inherited)); Filling & Packaging Mechanical Team (Adam Steegstra); Filling & Packaging Mechanical Team (Tharanga Abeysinghe); Filling & Visual Inspection (Eveline Kindler); Filling (Adrian Carni); Filling (Andreas Gavriel (Inherited)); Filling (Andreas Gavriel); Filling (Andrew Marshall (Inherited)); Filling (Andrew Marshall); Filling (Cuong Nguyen); Filling (Daniel Locandro); Filling (Igor Belevski); Filling (Joselito Bautista); Filling (Marion Taligault Owens); Filling H69 & Refludan (Matthias Klein); Filling I (Eveline Kindler); Filling II (Celio Ferreira); Filling II (Simone Wettstein); Filling Line & Lyophilisation (Michael Gisler); Filling Line I & II (Adrian Aeschlimann); Filling Line I & II Equipment Preparatio (Werner Steiner); Filling Line I & II Group 1 (Urs Cotting); Filling Line I & II Group 2 (Markus Rindisbacher); Filling Line I & II Group 3 (Bruno Zuercher); Filling Line I & II Group 3 (Roland Gerber); Filling Line I&II Pasteurisat./Incubat. (Eduard Wittwer); Filling Line II Pasteurisat./Incubat (Roland Lerch); Filling Line III & IV (Mathias Beck); Filling Line III (Mathias Beck); Filling Line III Group 1 (Sasa Lazarevic); Filling Line III Group 2 (Christian Schmid); Filling Line III Group 2 (Daniel Kraehenbuehl); Filling Line III Group 3 (Ulrich Beat Wildi); Filling Line III Support (Daniel Kraehenbuehl); Filling Line IV (Jean-Claude Cauderay); Filling Line IV, Lyo & Support (Alexander Kissler); Filling Line IV, Lyo & Support (Jean-Claude Cauderay); Filling Line V (Andrea Jantsch); Filling Line V (Anna Meier); Filling M 305 (Esther Seidel); Filling Non Privigen (Mayur Bannore); Filling Operations (Chenyi Guo ?????); Filling Operations (Wei Xiao ); Filling Privigen (Narelle Urli); Filling Privigen (Tyson Parker); Filling Support (Andrew Marshall); Filling Toll Plasma (Dale Peel); Filling Toll Plasma (Laxman Trichinapalli); Filling Toll Plasma (Narelle Urli); Filling Toll Plasma (Peter Koelmeyer); Filling Toll Plasma (Rebecca Hayes); Filling Toll Plasma (Robert La Ferla); Filling V Group 1 (Eike Gredler); Filling V Group 1 (Roger Wamister); Filling V Group 1 (Thomas Daehler); Filling V Group 2 (Michael Roos); Filling V Group 2 (Thomas Daehler); Filling and Packaging (Narelle Urli); Filling and Packaging (Tyson Parker); Filling/Lyo/Visual Inspection (Michael Gisler); Final Product Manufacturing / Production Services (Othmar Geisser); Final Product Planning (Ingo Kling); Final Product Planning (Jan-Christopher Gerlach); Final Product Planning (Julian Knabeschuh); Finance & Accounting (Eisuke Kofugata); Finance & Accounting (Haruka Utsugi (Inherited)); Finance & Accounting Japan (Izumi Yoshida - ); Finance & Accounting Japan (Izumi Yoshida ??? ??? - ??? ????); Finance & Controlling (Devun Dusoruth); Finance & Controlling (Ebru Kuntay); Finance & Controlling (Jonathan Ho); Finance & Controlling (Justin Mericle); Finance & Controlling HU (Togya Gergely); Finance & Controlling Italy (Annalisa Saracchi); Finance (Amy Kishikawa); Finance (Avi Singh); Finance (Becky Heatherman); Finance (Carolyn Xiong ????); Finance (Damian Gaylor); Finance (Daniel Janides); Finance (Fergus Patrick McLellan); Finance (Frank Liesner); Finance (Harold Chan (Inherited)); Finance (Helen Gearing); Finance (Jacqui Lomas (Inherited)); Finance (Jacqui Lomas); Finance (John Levy); Finance (Karol Bian ?????); Finance (Kate Tse); Finance (Ken Lim); Finance (Konstantin Petropoulos); Finance (Lena Shi ????); Finance (Luke McMahon (Inherited)); Finance (Luke McMahon); Finance (Melody Orbiso); Finance (Nicole Pryde); Finance (Nishant Popat); Finance (Shalini Goundar); Finance (Siu-Yin Yu ?????); Finance (Vicci Quagliarella); Finance (Wolfgang Thiel); Finance (Xiaowei Yin ); Finance (Xiaowei Yin ?????); Finance / Tax Marburg (Fatma Kremser); Finance Belgium (Jrgen Bond); Finance Belgium (J√∂rgen Bond); Finance Business Partner Commercial EMEA (Simon Briscoe); Finance Business Partnering Operations (Daniel Janides); Finance Business Partnership (Jason Sowles); Finance Business Partnership (Michael T McAvoy); Finance Business Partnership I (Michael T McAvoy); Finance China (Karol Bian ?????); Finance Commercial LATAM (KRISTINE SOLOMON (Inherited)); Finance Commercial LATAM (Martin Descotte); Finance Czech Republic / Slovakia (Libor Ballek); Finance Director: Lead Business Partner (Sharon Tindley); Finance EU Commercial (KRISTINE SOLOMON (Inherited)); Finance EU Commercial (Kristine Solomon (Inherited)); Finance France (Charlotte Rougi√©); Finance Global Commercial (KRISTINE SOLOMON); Finance Global Commercial (Kristine Solomon); Finance Greece (Christos Papadatos (??????? ?????????)); Finance Greece (Efstathios Lymperopoulos); Finance Netherlands (Jrgen Bond); Finance Netherlands (J√∂rgen Bond); Finance Nordic (Carl Werner); Finance Portugal (David Roig Martinez); Finance Spain (David Roig Martinez); Finance UK (Paul Fellingham); Finance US Commercial Operations (Robert Smith Jr); Finance and Admin (Elena Kondrashova); Finance and Controlling (Devun Dusoruth (Inherited)); Finance and Controlling (Doris Kamtner); Finance and Controlling (Franz Gr√ºn, Doris Kamtner); Finance and Controlling (Ulrike Bridi); Finance, Accounting & Reporting (Beeharrylall Jeetun); Finance, Business Modelling (Helen Gearing (Inherited)); Financial Planning & Analysis (Duncan Webber); Financial Planning and Analysis (Haruka Utsugi); Financial Planning and Analysis 1 (Christopher Pulupa); Financial Planning and Analysis 2 (Christopher Pulupa (Inherited)); Financial Planning and Analysis 2 (Daniel Ganaishlal); Financial and Reporting Accountant (Hayley Jackson); Financial and Reporting Accountant (Ryan Brown [C]); Financial and Reporting Accounting (Callum Bircham); Finishing & Infrastructure (Laurent Wagner); Finishing & Infrastructure (Roger Stoffel); Flint 161 (Anjela Johnson); Flint 161 (Carlotta McCoy); Flint 161 ACM Area 1 (Janie Cary); Flint 161 ACM Area 1 (Trina L Bryant); Flint 161 ACM Area 2 (Carlotta McCoy (Inherited)); Flint 161 ACM Area 2 (Khatija Moiz); Flint 161 QA (Allante S Williams); Flint 161 QA (Andrea K Coleman); Forecasting & Strategic Analytics (Joshua Prince); Forecasting & Strategic Analytics (MANISH SRIVASTAVA); Forecasting, Compliance & CTA Operations (Jutta Neufang-Hueber); Formulation (Remon Hemaya); Formulation Development (Ahmad Abdul Fattah); Formulation Development (Di Goodall); Formulation Development (Heidi Elmer Bodnar); Formulation Development (Hywel Williams); Formulation Development (Michelle Zhuravlyova); Formulation Development (Nathan Edwards); Formulation Development (Richard Shalders); Formulation Development (Scott Thompson); Formulation Project Process (John Riley); Formulation Project Process (Marion Taligault Owens); Formulation Shift A (David Rimmer); Formulation Shift B (David Rimmer); Formulation Shift B (Matthew Storey); Formulation, Lyo & Stability Development (FLS) (Uwe Liebing); Fort Collins 705 (Erin J Zwalina); Fort Collins 705 ACM Area 1 (Michael A McNear); Fort Collins 705 ACM Area 2 (Jeremy M Kuehn); Fort Collins 705 QA (Christi Bringle); Fort Smith 278 (David Ensminger (Inherited)); Fort Smith 278 (Rachael Kirby); Fort Smith 278 ACM Area 1 (Rachael Kirby); Fort Smith 278 ACM Area 1 (Tammy Semiche); Fort Smith 278 ACM Area 2 (Russell Perez); Fort Smith 278 QA (David Ensminger (Inherited)); Fort Smith 278 QA (Whitney Jacobs); Fort Wayne 089 (Malori A Shields); Fort Wayne 089 (Rob Garcia); Fort Wayne 089 ACM Area 1 (Timothy R Albright); Fort Wayne 089 ACM Area 2 (Chad Rudolph); Fort Wayne 089 QA (Chris Cusack); Fort Wayne 089 QA (Erik Plate (Inherited)); Fort Wayne 089 QA (Gretchen Watkins); Fort Wayne 089 QA (Mitch A Quinn); Fort Worth 419 (Angelica M Henry); Fort Worth 419 (Sarah E Silva); Fort Worth 419 ACM Area 1 (Eddie S Rosas); Fort Worth 419 ACM Area 2 (Jennyfer Delacruz); Fort Worth 419 ACM Area 2 (Martel Carter); Fort Worth 419 ACM Area 3 (Angelica M Henry (Inherited)); Fort Worth 419 ACM Area 3 (MacGregor Roy); Fort Worth 419 QA (Rochelle L Shannon); Fractionation & Bulk (Michael Beyeler); Fractionation & Bulk (Roger Stoffel); Fractionation & Bulk (Sven Schwerdtfeger); Fractionation & Bulk (Zabdiel Dominguez); Fractionation (Simon Scheidegger); Fractionation Group 1 (Fritz Liechti); Fractionation Group 2 (Adrian Locher); Fractionation Group 3 (Christian Stucki); Fractionation Group 4 (Walter Strahm); Fractionation Group 5 (Urs Durtschi); Fraktionierung & Filtration (Kai Erkel); Fraktionierung (Rainer Frank (Inherited)); Franchise Medical Affairs Team 1 (Emna Bourkhis); Franchise Medical Affairs Team 1 (Nabil Moumane); Franchise Medical Affairs Team 2 (Hasan Catovic); Fredericksburg 703 (Sara M Schuppe); Fredericksburg 703 (Sheri Mixon (Inherited)); Fredericksburg 703 ACM Area 1 (Juan Manuel Castillo); Fredericksburg 703 ACM Area 2 (Mykel A Gonzales); Fredericksburg 703 QA (Gracie P Melendez); Front Line QA (Amanda Cooper); Ft Collins 705 QA (Christi Bringle); Ft. Gratiot 170 (Adrienne Smith); Ft. Gratiot 170 (Desiree Wright); Ft. Gratiot 170 ACM Area 1 (Cortney Young); Ft. Gratiot 170 ACM Area 2 (Karri Mitchell); Ft. Gratiot 170 QA (Allante S Williams); Ft. Gratiot 170 QA (Breanna Mini); Ft. Gratiot 170 QA (Melissa Johnson); GCSP & PhV Regions (Richard Wolf); GCSP Global Regions (Lana Gloukhova); GCSP Regions & Pv Operations (Kevin Burke (Inherited)); GCSP Regions & Pv Operations (Kevin Burke); GCSP Regions & Pv Operations 1 (Mark McGinnis); GCSP Regions (Angela Long); GCSP Regions (Rawad Antoun); GCSP Regions Lead Asia Pacific (Sophie Fontez); GCSP Regions Lead ECI (Simone Lorenz-Asmus); GCSP Regions Lead EU (Nicole Avalos); GFSS Asia Pac (Noopur Pattni (Inherited)); GFSS ¬ñ Asia Pac (Madison Crawford); GM & Staff (Sarah Yeung); GMP Compliance & Packaging Excellence (Thorsten Keller); GMP Training (Ann Moody); GPSS BRN & LNG (Denis Klochkov); GPSS BRN & LNG (Stephanie Schoch); GPSS USA (Michael Burdick); GRA CMC (Eva Walter); GRA CMC BMW (Siew Cheng Ney); GRA CMC BMW Team 1 (Nicole Apostolidis); GRA CMC BMW Team 2 (Libby Brodie); GRA CMC BRN (Luca Reggiani); GRA CMC BRN Team 1 (Petra Truetsch); GRA CMC BRN Team 2 (Dominique Schaller); GRA CMC BRN Team 2 (Sabine Rohner); GRA CMC BRN Team 3 (Grzegorz Podrygajlo); GRA CMC BRN Team 3 (Karin Stein-Liesen); GRA CMC KAN (Rick Khuu); GRA CMC KAN (Ricky Khuu); GRA CMC KAN Team 1 (William K Mendell); GRA CMC KAN Team 2 (Olga Neumller); GRA CMC KAN Team 2 (Olga Neum√ºller); GRA CMC KAN Team 3 (William K Mendell); GRA CMC MBR (Lene Nielsen); GRA CMC MBR (Martin Opper); GRA CMC MBR Team 1 (D√∂rthe Vingerhoet); GRA CMC MBR Team 1 (Markus Kuhl); GRA CMC MBR Team 2 (Thomas Nassauer); GRA CMC MBR Team 3 (Antje Mehrer); GRA CTA Operations Group (Florin Muraru); GRA GPS CV, Metabolism & Adv Therapies (Scott Hambaugh (Inherited)); GRA GPS Cardiovascular & Metabolism (Scott Hambaugh (Inherited)); GRA GPS Hematology & Thrombosis (Scott Hambaugh); GRA GPS Hematology & Thrombosis (Sibylle Kaiser); GRA GPS Immunology & Neurology (Lauren Tornetta); GRA GPS Immunology & Neurology (Scott Hambaugh (Inherited)); GRA GPS Immunology (Scott Hambaugh (Inherited)); GRA GPS Inflammation & Transplant (Hartmut Landgrebe); GRA GPS Respiratory (Melissa Tokosh); GRA GPS Transplant (Hartmut Landgrebe); GRA LATAM (Gordana Joksimovic); GRA Planning Group (Martin Steinmann); GRA Region EU & Switzerland (Anke Arnold); GRA Region EU & Switzerland (Bettina Doepner); GRA Region EU & Switzerland (Birgit Sommer (Inherited)); GRA Region EU & Switzerland (Birgit Sommer); GRA Region EU & Switzerland (Martina Schneider); GRA Region EU & Switzerland (Paolo Voltolina); GRA Region EU & Switzerland (Pedro Manuel Regateiro de Moura Campino); GRA Region EU & Switzerland (Stefanie Zaugg); GRA Region EU & Switzerland (Wencke Maeder-Wotruba (Inherited)); GRA Region EU & Switzerland (Wencke Maeder-Wotruba); GRA Region EU & Switzerland (Wolfgang Friedrich); GRA Region NA - AdPromo (John Hill); GRA Region NA - Hematology (Tara Chapman); GRA Region NA - Immnunology (Angela D Azzara); GRA Region NA ¬ñ CMC (Todd Olson); GRA Region NA ¬ñ CV/Transplant (Uros Djekic); GRA Region North America (Tara Chapman); GRA Resourcing (Silke Britschock); GSP Operations (Liwei Sun ); GSP Operations (Liwei Sun ?????); Gainesville 182 (Laila Matthews-El); Gainesville 182 ACM Area 1 (Toya Green); Gainesville 182 ACM Area 2 (Laila Matthews-El (Inherited)); Gainesville 182 ACM Area 2 (Leslie L Heidelberg); Gainesville 182 QA (Elvern M Gregg); Gainsville 182 (Deidra Snow-Johnson); Gainsville 182 (Laila Matthews-El); Gainsville 182 QA (Elvern M Gregg); Gastonia 267 (Mai Yang); Gastonia 267 ACM Area 1 (Terri L Salsman); Gastonia 267 ACM Area 2 (Scotty Burch); Gastonia QA 267 (Blake Painter); Gbl Commercial Operations-Cardiovascular (Debbie Drane (Inherited)); Gbl Commercial Operations-Cardiovascular (MaryAnn Capritti); Gene Therapy (Karsten Peppel); Gene Therapy (Orit Wolstein); Gene Therapy Research (Cdric Vonarburg); Gene Therapy Research (C√©dric Vonarburg); Gene Therapy Research I (Florian Aeschimann); Gene Therapy(Orit Wolstein); General Ledger (Tanja Bieri); General Ledger (Tanja Gurtner); General Ledger Accounting (Thierry Bonjour); General Product Characterisation (Robert Dickinson); General Product Characterisation (Tom Murray-Rust); Geringfgig Beschftigte (Andreas Gehrich); Geringfgig Beschftigte (Annette Pernitzsch); Geringfgig Beschftigte (Claudia Habenicht); Geringfgig Beschftigte (Kirsten Scheibel); Geringfgig Beschftigte (Natascha Tappendorf); Geringfgig Beschftigte (Stephani Keltsch); Geringfgig Beschftigte (Sven Schuhmann); Geringf√ºgig Besch√§ftigte (Andreas Gehrich); Geringf√ºgig Besch√§ftigte (Annette Pernitzsch (Inherited)); Geringf√ºgig Besch√§ftigte (Annette Pernitzsch); Geringf√ºgig Besch√§ftigte (Claudia Habenicht (Inherited)); Geringf√ºgig Besch√§ftigte (Claudia Habenicht); Geringf√ºgig Besch√§ftigte (Kirsten Scheibel (Inherited)); Geringf√ºgig Besch√§ftigte (Kirsten Scheibel); Geringf√ºgig Besch√§ftigte (Natascha Bock (Inherited)); Geringf√ºgig Besch√§ftigte (Stephani Keltsch); Geringf√ºgig Besch√§ftigte (Sven Schuhmann (Inherited)); Geringf√ºgig Besch√§ftigte (Sven Schuhmann); Gertevorbereitung (Roman Truttmann); Ger√§tevorbereitung (Roman Truttmann); Gesundheitsschutz (J√ºrgen Kanand (Inherited)); Gesundheitsschutz (zcan Campinar (Inherited)); Gesundheitsschutz (√ñzcan Campinar (Inherited)); Gilbert 131 (Daniel I Villegas (Inherited)); Gilbert 131 (Erica L Ewing); Gilbert 131 ACM Area 1 (Christopher Hersch); Gilbert 131 ACM Area 1 (Michael Brownell); Gilbert 131 ACM Area 2 (Karen Branch); Gilbert 131 QA (Karen Branch); Gilbert 131 QA (Will Porter); Glassware Washing (Jasmina Trumic); Glen Burnie 136 (Bryon Wiley); Glen Burnie 136 (Guillian T Gonzalez); Glen Burnie 136 ACM Area 1 (Janet Rhys-Jones); Glen Burnie 136 ACM Area 2 (Solyana Gebrekidan); Glen Burnie 136 QA (Monique L Fitz); Global Analytical & Technical Services (Stephen Case); Global Analytical Science & Technology (Jeffrey Pederson); Global Analytical Science & Technology (Stephen Case); Global Applications Testing (Kinga Zambo); Global Artwork (Karelle Phelan); Global Automation (Christoph Zahnd (Inherited)); Global Automation (Hans Wieser); Global Batch Release (Karen Marks); Global Benefits (Judith Kleemeier); Global Bioanalytical (Jeffrey Michael Hey); Global Bioanalytical (Matthias Zimmermann); Global Bioanalytical (Vanessa Sandford); Global Bioanalytical Sciences (Jeff Hey); Global Business Technology (John A Newsom); Global CQA Systems (Carmen Szeto); Global Case Management (Kevin Burke); Global Case Management (Monika Klug); Global Case Managment (Joanne Grego); Global Category Gels, Resins, Media & Process Aid (Stephan Heer); Global Category Lead R&D (Benjamin Prior); Global Change Management and Communications (Linda Hagerty-Dotterer); Global Clinical Dev Hematology & Thrombosis (Marcus Carr); Global Clinical Development (Frank Albano); Global Clinical Development (William Mezzanotte (Inherited)); Global Clinical Development (William Mezzanotte); Global Clinical Development Hematology & Thrombosis (Marcus Carr); Global Clinical Ops - Transplant (Michele Jenkins); Global Clinical Programs (Blanca Salazar); Global Clinical Programs 2 (Ingo Pragst); Global Clinical Quality Assurance (Claudia Fellmer); Global Clinical Quality Assurance (Volker Nickel (Inherited)); Global Commercial Insights & Analytics (Nitin Bhatnagar); Global Commercial Operations (William Campbell); Global Content (Greg Healy); Global Data Management (Jennifer Toomey); Global Doc Systems (Dominik Zuercher); Global EH&S, Resilience and Risk Management (Lynette Hodgden); Global Engineering (Christoph Zahnd); Global Engineering Projects (Darrah Wilkerson); Global Export Order Fulfillment (Sabine Hmel); Global Export Order Fulfillment (Sabine H√§mel); Global Finance (David Lamont); Global Financial Operations (David Lamont (Inherited)); Global Financial Operations (Karen Neave); Global HR Business Partner Organization (Doug German); Global HR Operations (Sara Proctor [C]); Global HR Operations (Sara Proctor); Global HR Services and Payroll (Mark Hickenbottom); Global HR Systems (Garnett Hudson); Global HR Systems (Jeff Allen); Global HR Systems (Melissa Zyla); Global HR Systems I (Jeff Allen); Global HR Systems II (Melissa Zyla); Global HR Systems and Reporting (Andrea Dauphinee); Global HR Systems and Reporting (Patricia Berdugo); Global HR Systems and Reporting (Rob Nyhan); Global Health Economics and Reimb (Girishanthy Krishnarajah); Global Health Economics and Reimb (Ryan Saadi); Global Healthcare Policy & Ext. Affairs (Dennis Jackman); Global Healthcare Policy & Ext. Affairs (Michael Ruggiero); Global Human Resources (Andrea Resch); Global Human Resources (Elizabeth Walker); Global Human Resources Business Partner Organization (Doug German); Global Human Resources Business Partner Organization (Gyuri Endes); Global IP (Beate Binsack); Global IT PMO (Lance Runyard); Global IT Service Delivery (Sam Vickers); Global IT Service Delivery (Sunil Shah); Global Indirect Procurement (James Thomas); Global Indirect Sourcing (John Ehgartner); Global Intellectual Property (GIP) (Hans-Peter Hauser); Global Intellectual Property (Peter Gomme); Global Internal Communications (Michele Darnell); Global LIMS (David Johnstone); Global Labeling (MARK COLLINS); Global Labeling (Maryann Cuomo); Global Labeling Operations (Maricarmen Dilone-Raposo); Global Labelling Operations (Lynda Rizzetti); Global Legal (Gregory Boss); Global Legal (John Michael Minardo); Global Legal (John Minardo); Global Legal Operations & Services (Lauren Neal); Global Legal Services (Adarsh Nair); Global Legal Services (Kieran O''Shea); Global Library Service Operations (Ulrike Friebertshuser-Jilke); Global Library Service Operations (Ulrike Friebertsh√§user-Jilke); Global Licensing (Andrea Huggins); Global Licensing (Michael Jorgensen); Global Logistics (Marianne McDonald); Global Logistics (Paul Wolstencroft); Global Logistics (Uli Kiefer); Global Logistics Operations (Suzanne Johnson (Inherited)); Global Logistics Team (BP Pedersen); Global MS&T (Irina Staxen); Global MSPR (Christoph Hck); Global MSPR (Christoph H√∂ck); Global Manufacturing (Chris Larkins); Global Manufacturing Ops & Plasma (Tony HARTMAN); Global Marketing Team Cardiovascular (KAMRAN MAMMADOV); Global Medical Affairs (Gregg Sylvester); Global Medical Affairs (Marcus Stockschlder); Global Medical Affairs (Marcus Stockschl√§der); Global Medical Devices & Primary Packaging Materials (Declan Reilly); Global Medical Evaluation (Federico Melo Ferrer); Global Medical Evaluation (Federico Melo-Ferrer); Global Medical Evaluation (Sabine Hrtel); Global Medical Evaluation (Sabine H√§rtel); Global Mobility COE (Ulrike Krenz-Fisher); Global Operation & RCF Administration (Margrit Hug Bucher); Global Operational Excellence and BPM (Fabrice Gribon); Global Operational Excellence and BPM (Stephen Marlow (Inherited)); Global Operations & Quality Finance (Helen Gearing (Inherited)); Global Operations & Quality Finance (Jacinta Glennon); Global Operations (Chris Larkins); Global Operations (Stephen Marlow); Global Operations (Val Romberg); Global Packaging (Warren Comerford); Global Packaging (jerome serrurier); Global Packaging Design and Artwork (Andrew John Robinson); Global Packaging Design and Artwork (Eva Streit); Global Packaging Services (Rick A Majszak); Global Packaging Services (Samantha Czako (On Leave)); Global Packaging Services (Samantha Czako); Global Pathogen Safety (Birgit Popp); Global Pathogen Safety (Eleonora Widmer); Global Pathogen Safety (John Liu); Global Pathogen Safety (Nathan Roth); Global Pathogen Safety - Marburg (Birgit Popp); Global Pathogen Safety - Marburg (Bj√∂rn Keiner); Global Pathogen Safety Support (Eleonora Widmer); Global Pathogen Safety Support Asia (Connie Broumis); Global Payroll (Christina Avraamides); Global Payroll (Marjorie Platt); Global Payroll (Mark Hickenbottom (Inherited)); Global Pharmacovigilance Quality Assurance (Claudia Nolte); Global Planning (Harald Berg); Global Planning (Jamie Pritzl); Global Planning, Supply Chain (Chad Salisbury); Global Planning, Supply Chain (Steve Moloney); Global Plasma Team (Dieter Brazel); Global Plasma Technology Ownership (Benno Bitterli); Global Portfolio Influenza (Jane Lanteri) (Jane Leong); Global Pricing (John Hakanson); Global Product Characterization (Stefan Schmidbauer); Global Product Specifications and Reference Standards (Richard Steere); Global Product Strategy (GPS) (Scott Hambaugh); Global Publishing (Marquis Bryant); Global Publishing (Timothy Huke); Global QA, IT (Patricia Hernan Miller); Global Quality (Vasilis Mavrogenis); Global Quality - Americas (Carlos Torres); Global Quality Affiliates (Collins Onyejese); Global Quality Affiliates (Laura O''Brien (Inherited)); Global Quality Assurance (Karen Netherton); Global Quality Control (Brian Nunnally); Global Quality Management (Jeffrey A Alcorn); Global Quality Management (Laura O''Brien); Global Quality Management (Rushen Mendis); Global Quality Management (Sanjana Sanjappa); Global Quality Management 2 (Brian Walker); Global Quality Management Sys. (Allen F Coleman); Global Quality Management Sys. (Eva M. Urban (Inherited)); Global Quality Management Sys. (Eva M. Urban); Global Quality Management Sys. (Jeffrey A Alcorn (Inherited)); Global Quality Management Sys. (Stephen A Wilson); Global Quality Management Sys.(Steve Wilson); Global Quality Management Systems (Carol Kidwell (On Leave)); Global Quality Management Systems (Carol Kidwell); Global Quality Operations (Matthias Pohl); Global Quality Systems & Compliance (Milka Smoljko); Global Quality Systems (Chad M Salisbury); Global Quality and R&D IT Systems (Tim Jones); Global Quality, Bus Svcs & Finance HR (Doug German (Inherited)); Global Quality, Bus Svcs & Finance HR (Stephanie McKinney); Global R&D - Financial Rptng & Analysis (Krista Doron); Global R&D Early Development CMO (Diana Lanchoney); Global R&D Finance (Christopher James Thorpe); Global R&D Finance (Karen Neave (Inherited)); Global R&D Finance (Pamela Cerovich); Global R&D Project Management (David S Fifer); Global R&D Project Management (Diana Lanchoney); Global R&D Project Management (Heiko Riedel); Global R&D Project Management (Heiko V√∂lpel); Global R&D Project Management (Jennifer Dutton); Global R&D Project Management (Lena Ohannesian); Global R&D Project Management (Martin Broder); Global R&D Project Management (Peter Douglas); Global R&D Project Management (Rose Fida); Global R&D Project Management (Steven Brooks); Global R&D Project Management 1 (Peter Douglas); Global R&D Project Management 2 (Christian Spyr); Global R&D Project Management 2 (Gino Vairo); Global R&D Project Management 2 (Nancy Fetrow (Inherited)); Global R&D Project Management 2 (Regula Heini Hodel) (Christian Spyr); Global R&D Project Management 3 (Christiane Enzinger); Global R&D Project Management 3 (David Leacy (Inherited)); Global R&D Project Management 3 (Katy Dimitropoulos); Global R&D Project Management 3 (Rose Fida); Global R&D Project Management 4 (Laura J Schweigert); Global R&D Project Management I (David S Fifer); Global R&D Project Management II (Heiko Riedel); Global R&D Project Management III (Jennifer Dutton); Global R&D Project Management Immunology (Linda Faux (Inherited)); Global R&D Project Management Immunology (Steven Brooks (Inherited)); Global R&D QA CMC & Research (April Sena); Global R&D Quality Assurance (Karen Gard''ner); Global Recombinant Portfolio Group (Lene Nielsen); Global Records (John Neff (Inherited)); Global Reg Affairs ANZ (Kellie Hooley); Global Regulatory Affairs (Ashley Burt); Global Regulatory Affairs (Catarina Edfjaell); Global Regulatory Affairs (Emmanuelle LECOMTE BRISSET); Global Regulatory Affairs (Franck Nicolas); Global Regulatory Affairs (Mary Ryan); Global Regulatory Affairs Quality Compliance (Jana Reitmajer); Global Regulatory Affairs Quality Compliance (Monika Dietrich-Sander); Global Regulatory Affairs Quality Compliance (Rafael Sierra); Global Regulatory Lead - QIVc (Karen Jourdan-Brown); Global Regulatory Operations and Labelling (Emma Williams); Global Regulatory Systems&Informat.Mgmt. (Christine Berger); Global Regulatory Systems&Informat.Mgmt. (Franck Nicolas (Inherited)); Global Reporting (Harsha Kadiyala); Global Research & Development (Andrew Cuthbertson); Global Research & Development (William Mezzanotte); Global Risk & Insurance Management (John Marren); Global Risk Management (Mark Luksic); Global SAP Security & BTGC (Steven Yannelli); Global SC Operations (Tina Law); Global Sales & Operations Planning (Ben Wilson); Global Scientific Exellence (Maria M√ºller); Global Security (Tony Strickland); Global Serialization (Michel B√©raud); Global Shared Services (Liam Connelly); Global Site Management (Lauren Ruth Vickery); Global Site Management (Lauren Vickery); Global Sourcing (Paul Addis); Global Sourcing - Automation, Instrumentation, Packaging and Aseptic Filling (Iouri Sverdlov); Global Sourcing Logistics (Gela Bakuridze); Global Sourcing Logistics (John Ehgartner (Inherited)); Global Strategic Sourcing, Chemicals (Jun Gao); Global Supplier Quality EMEA (Hans-Jrgen Schning); Global Supplier Quality EMEA (Hans-J√ºrgen Sch√∂ning); Global Supply Chain (Ian Dick); Global Systems Maintenance (Regina Mhlich); Global Systems Maintenance (Regina M√ºhlich); Global Talent Acquisition (Brian Fehrer); Global Talent Acquisition (Melissa Bradford); Global Tax (Aoife Deane); Global Total Rewards (Elizabeth Walker (Inherited)); Global Total Rewards (Maynard Branscome); Global Trademarks (Nicole Smith); Global Transport Validation (Matthew Wokulich); Global Validation (Russell Ciliento); Global Validation (Russell James Ciliento); Government Rebate Operations (Joseph DeLuca); Government Vaccines Manager 745 (Helen Dela Cruz); Graduates (David Azzopardi); Grand Blanc 244 (Kelly M Weng); Grand Blanc 244 ACM Area 1 (BRANDON SMITH); Grand Blanc 244 ACM Area 1 (LC Davis); Grand Blanc 244 ACM Area 2 (ROBERT MANGOLD); Grand Blanc 244 QA (Martina Young); Grand Junction 159 (Daniel Venn); Grand Junction 159 (Markah Williams Mower); Grand Junction 159 (Markah Williams); Grand Junction 159 ACM Area 1 (Steven Potter); Grand Junction 159 ACM Area 2 (Richard S Simpson); Grand Junction 159 ACM Area 2 (Rob Ferguson); Grand Junction 159 QA (Carrie E Pell); Grand Junction 159 QA (Kelly M Weng); Grand Prairie 049 (Angelica M Henry); Grand Prairie 049 (Jamie Bullock); Grand Prairie 049 ACM Area 1 (Kelly Gomez); Grand Prairie 049 ACM Area 2 (Deonka Whitley); Grand Prairie 049 QA (LaDonnica L Eddings); Grants Management (Abraham Smith); Grants Manager (Abraham Smith); Greece 194 (Jontus Walker); Greece 194 (Tangerine Tingle); Greece 194 ACM Area 1 (Mike Massaro (On Leave)); Greece 194 ACM Area 1 (Mike Massaro); Greece 194 ACM Area 2 (Arooj Hussain); Greece 194 QA (Ariel S Forrest); Greece 194 QA (John L Thixton (Inherited)); Greece 194 QA (Todd Wolfe); Greeley 615 (Natasha D Casillas); Greeley 615 (Skyler T Campbell); Greeley 615 ACM Area 1 (Natasha D Casillas (Inherited)); Greeley 615 ACM Area 1 (Natasha D Casillas); Greeley 615 ACM Area 2 (Rita E Williams); Greeley 615 QA (Meghan Fryer); Greensboro 117 (Susan Watkins (On Leave)); Greensboro 117 (Susan Watkins); Greensboro 117 ACM Area 1 (Dorleans Alce); Greensboro 117 ACM Area 1 (Kristen Jones); Greensboro 117 ACM Area 2 (Kristie Cunningham); Greensboro 117 QA (Stephanie Bernard); Greenville 088 (Andrea S Zeller); Greenville 088 ACM Area 1 (Andrea S Zeller); Greenville 088 ACM Area 1 (Jeremy Honea (On Leave)); Greenville 088 ACM Area 2 (Natasha Pinson); Greenville 088 QA (LeeAnn M Estes); Gresham 055 (Brandy J Vaughan); Gresham 055 ACM Area 1 (Noah S Johnson); Gresham 055 ACM Area 2 (Becca Daugherty); Gresham 055 QA (Dijana Colic); Group 3 (Adrian Alder); Group Analysis (Dean Barrett); Group Analysis (Dean Wilde); Group Analysis (Maureen Harrington); Group Controller (Helen Gearing (Inherited)); Group Controller (Jacob Weaver); Group Controller (Noopur Pattni); Group Finance (Daya Salter); Group Finance (Jason Mugridge); Group Finance (Kevin Personius); Group Finance 1 (Jeffrey Marchetti); Group Finance II (Troy Kukorlo); Group Finance- Financial Systems (Mary Conlin); Group Grzhausen (Uwe Sthr); Group G√∂rzhausen (Michael Engel); Group G√∂rzhausen (Uwe St√∂hr); Group Hauptwerk (Michael Engel); Group Income Protection (Emma McCarthy); Group Lead Validation-Site Expansion (Robert Musgrave); Group Main Site (Michael Engel); Group Reporting (Michael J. Clark); Group Tax (Peter Larsen); Group Taxation (Michael Manusov); Gulfport 122 (Elishia Humphrey); Gulfport 122 (John E Hunt (Inherited)); Gulfport 122 (Joshua D Harper); Gulfport 122 (Robert Spicer); Gulfport 122 ACM Area 1 (Joshua D Harper); Gulfport 122 ACM Area 2 (Bernetta L Huff); Gulfport 122 ACM Area 2 (Joshua D Harper); Gulfport 122 QA (Regina Williams); GxP Training (Carla Oliver); GxP Training (Vicky Lioutas (Inherited)); HAE & Respiratory (Sylvia Herget); HAE & Respiratory 1 (Susann Hofmockel); HAE Marketing (Amy Bifolco-Morrell); HAE Marketing (Tom Groeling (Inherited)); HAE Marketing (Tom Groeling); HR Business Partner (Sandro Krug); HR Business Partner (Tanja Templer); HR Business Partner 1; HR Business Partner 1 (Tina Camenzind); HR Business Partner 2 (Johanna Wege); HR Business Partner 4 (Darja Skaza-Brock); HR Commercial & Legal (Susan Devlin); HR Development & Programs (Sabine Wagner); HR Finance, R&D, IT & Business Services (Carolyne Malizia); HR Finance, R&D, IT, Bus Services (Carolyne Malizia); HR Holly Springs (Shandalyn Hope Matson); HR Holly Springs (Shandalyn Matson); HR Liverpool (Emma McCarthy); HR Liverpool (Sheila Redmond [C]) (Sheila Redmond [C]); HR Liverpool Business Partners (Kerry Rimmer); HR Marketing (Martin Stump); HR Marketing (Nadine Reh); HR Operations & Quality (Judi Badenoch); HR Operations (Adam Williams); HR Operations (Claudia Petrocchi); HR Operations (Elizabeth Walker (Inherited)); HR Operations (Lindsay Heaton); HR Operations (Mike Drew); HR Operations (Sara Proctor [C]); HR Ops Enabler Tools (Jennifer Sullivan (On Leave)); HR Ops Enabler Tools (Jennifer Sullivan); HR Ops Process Excellence (Anna Tassone); HR Ops Support (Kai Hofmann); HR Parkville (Yvette Saunders); HR Payroll (Christina Avraamides); HR Process and Portfolio Management (Beth Swiezkowski); HR Projects (KT Leong); HR Talent Acquisition & Talent Management (Beth Thomas); HR Talent Acquisition Holly Springs (Blake Derrick); HR Talent Acquisition Maidenhead (Louise Hawkes); HR Talent Acquisition Parkville (Angela Bellenger); HR Total Rewards (Gyuri Endes (Inherited)); HR Total Rewards (Karen Vyse [C]); HRBP Corporate Functions (Mafalda Lou); HS Manufacturing Fill & Finish Ops (Brian Kennedy); HU CSL Plasma Kft. Center Debrecen (Halsz Roland); HU CSL Plasma Kft. Center Debrecen (Hal√°sz Roland); HU CSL Plasma Kft. Center Miskolc (Ruzsinszki Ibolya); HU CSL Plasma Kft. Center Ny√≠regyh√°za (Rask√°n√© Petruska Gy√∂ngyi); HU Center Employee (Raskn Petruska Gyngyi (Inherited)); HU Center Employee (Rask√°n√© Petruska Gy√∂ngyi (Inherited)); HU Center Employee (Ruzsinszki Ibolya (Inherited)); HU Center Manager (Ruzsinszki Ibolya (Inherited)); HU Center Physician (Raskn Petruska Gyngyi (Inherited)); HU Center Physician (Rask√°n√© Petruska Gy√∂ngyi (Inherited)); HU Center Physician (Ruzsinszki Ibolya (Inherited)); HU Quality (Duds kos); HU Quality (Dud√°s √Åkos); HVAC & Coldrooms (Nozar Basseri); HVAC (Anna Fellenberg); HVAC (Juerg Schwarz); HVAC (Simon Hediger); HVAC / Reinr√§ume (Nozar Basseri); HVAC 1 (Urs Turtschi); Haemostaseology (Bernhard Czapla (Inherited)); Haemostaseology (Claudia Bachmann (Inherited)); Haemostaseology (Heinrich Feischen (Inherited)); Haemostaseology (Holger Milkereit (Inherited)); Haemostaseology (Michael Bernd Rode (Inherited)); Haemostaseology (Rainer Adam (Inherited)); Haemostaseology (Ralf Kosmol); Haemostaseology (Susanne Mller (Inherited)); Haemostaseology (Susanne M√∂ller (Inherited)); Haemostasis (Anthony Downes); Haemostasis (Elias Francis (Inherited)); Haemostasis (Elias Francis); Haemostasis (George Tsirbas); Haemostasis (Gerry Orval); Haemostasis (John Bashour); Haemostasis (Roy Taylor); Haemostasis (Shane Bartils); Haemostasis (Steven Barello); Hagerstown 174 (Bukola Raji); Hagerstown 174 (Kashaun Muhammad); Hagerstown 174 ACM Area 1 (Antonio DuBois); Hagerstown 174 ACM Area 2 (Daniel Pappariella); Hagerstown 174 QA (Bukola Raji); Hagerstown 174 QA (Jade Jessop); Hagerstown 174 QA (Joanne Charles-Clarke); Hagerstown 174 QA (John E Hunt (Inherited)); Halethorpe 167 (Rebecca R Pettiford); Halethorpe 167 ACM Area 1 (Christine Bethea); Halethorpe 167 ACM Area 2 (Lanasia West); Halethorpe 167 QA (ALISON CONLEY); Halethorpe 167 QA (Robin K Doering); Haltom City 188 (Dante Williams); Haltom City 188 (Melissa J Chapman); Haltom City 188 ACM Area 1 (Marvin Tablante); Haltom City 188 ACM Area 2 (Dante Williams); Haltom City 188 ACM Area 2 (Robert G Wilson); Haltom City 188 QA (Daniel Vu); Haltom City 188 QA (Julie E Reynolds); Hamilton 199 (Jenna Evans); Hamilton 199 (Katanya Hall); Hamilton 199 (Kera Cathel); Hamilton 199 ACM Area 1 (Steve Hosang); Hamilton 199 ACM Area 2 (TaChita Robb); Hamilton 199 QA (Emily Norton); Hamilton 199 QA (Kera Cathel); Hamilton 494 (Derek Morner); Hamilton 494 ACM Area 1 (William Robinson); Hamilton 494 ACM Area 2 (Jessica Hoffman); Hamilton 494 ACM Area 2 (VERONICA HESTER); Hamilton 494 QA (Elizabeth E Galloway); Harlingen 185 (Victor Guevara); Harlingen 185 ACM Area 1 (Eduardo De La Rosa); Harlingen 185 ACM Area 2 (Melinda Garcia); Harlingen 185 QA (Audrey Rodriguez); Harlingen 185 QA (Jennifer Martinez); Harlingen 185 QA (Luis Rodr√≠guez); Harlingen 185 QA (Rosa E Mercado (Inherited)); Hattersheim Field Services (Frank Dauber); Hattersheim Field Services (Robert Rohs); Hazel Crest 116 (Andrea C Rice); Hazel Crest 116 (Morgan R Grose); Hazel Crest 116 ACM Area 1 (Ian Olson); Hazel Crest 116 ACM Area 2 (Bob Gentille); Hazel Crest 116 QA (Amanda E Swider); Hazel Crest 116 QA (Joshua D Williamson (Inherited)); Head , In-Licensed Products AsiaPac, Global Regulatory Affairs (Angela Wong); Head Global Business Developme (Eve Williamson); Head Greater China Logistics (Edwin Chia); Head of Asia Pac, Medical Affairs (Jane Leong); Head of Asia Pac, Medical Affairs (Jonathan Anderson); Head of Batch Release (Darren Moulton); Head of Batch Release (Sherrin Gribble); Head of Commercial and Marketing (Jamila Filipecki); Head of Medical Affairs UK (Mansoor Ashraf); Head of Medical Affairs UK (Sankarasubramanian Rajaram (Inherited)); Head of Medical Writing & Disclosures (Catherine Tyrrell); Head of Medical Writing & Disclosures (Cathy Tyrrell); Head of Operational Excellence (Dirk Crouse); Head of R&D Finance (Christopher James Thorpe); Head of Region, APAC & In licensed Prod, Glob RA (Lisa MacDonald); Head of Region, Asia Pac & In licensed Prod, Glob Reg Affairs (Lisa MacDonald); Head, Clinical Development Operations (Daniel Kirby); Head, Clinical Development Operations (Veronica Suarez); Head, Technical Development (PKV) & Global Process Innovation (Steven Rockman); Head, Technical Development (PKV) & Global Process Innovation (Steven Rockman); Health & Wellbeing (Susanne Marx); Health (Donna G O''Keefe); Health (Sara Regnier); Health and Safety (Gregory Dowler); Healthcare Policy & Ext. Affairs APAC (Shouqing Zhang); Healthcare Policy & Ext. Affairs Europe (Rdiger Gatermann); Healthcare Policy & Ext. Affairs Europe (R√ºdiger Gatermann); Healthcare Policy & Ext. Affairs Japan (Shouqing Zhang (Inherited)); Healthcare Policy & Ext. Affairs N.A. (Patrick Collins); Hem Higashi Nihon Area (Atsuhiko Arikata); Hem Kansai Chubu Area (Shinichi Kano); Hem Nishi Nihon Area (Taisuke Miyakoshi); Hem Nishi Nihon Area (Takeyuki Akiyoshi); Hem Shutoken Area (Takayuki Takigawa); Hematology & Thrombosis Marketing (John Nicastro); Hematology & Thrombosis Medical Affairs (Debbie Drane (Inherited)); Hematology & Thrombosis TA (Sharad Agrawal); Hematology & Thrombosis Therapeutic Area (Brahm Goldstein); Hematology Marketing (John Nicastro); Hematology TA (Antti Kourula); Hematology TA Marketing (Sharad Agrawal); Hematology TA Medical Affairs (Krupa Sivamurthy); Hemophilia A Marketing (Beth Ann Hirst); Hemophilia A Marketing (Brian Johnson); Hemophilia B Marketing (Nicole McInerney); Hemophilia Group (Hideyuki Seto); Hemophilia Group (Makoto Kubo); Hemophilia TA (Takayuki Ishii); Henderson 134 (Eddie H Gaillard); Henderson 134 ACM Area 1 (Maria Coulter); Henderson 134 ACM Area 2 (Eshell Cudjo-Williams); Henderson 134 QA (Bri Johnson); Henrico 264 (Tracia Lopez); Henrico 264 ACM Area 1 (Nancy L Ott); Henrico 264 ACM Area 1 (Tannika Green); Henrico 264 ACM Area 2 (Tannika Green); Henrico 264 ACM Area 2 (Tracia Lopez (Inherited)); Henrico QA 264 (Brandan Lurz); Herstellungsleiter (Andreas Gehrich (Inherited)); Herstellungsleiter (Annette Pernitzsch (Inherited)); Herstellungsleiter (Claudia Habenicht (Inherited)); Herstellungsleiter (Heike Borchert); Herstellungsleiter (Kirsten Scheibel (Inherited)); Herstellungsleiter (Natascha Bock (Inherited)); Herstellungsleiter (Stephani Keltsch); Herstellungsleiter (Sven Schuhmann (Inherited)); Herstellungsleiter Berlin (Dorothee Knop); Herstellungsleiter Braunschweig (Dorothee Knop); Herstellungsleiter Bremen (Dorothee Knop); Herstellungsleiter Frankfurt (Dorothee Knop); Herstellungsleiter Gttingen (Dorothee Knop); Herstellungsleiter G√∂ttingen (Dorothee Knop); Herstellungsleiter Kiel (Dorothee Knop); Herstellungsleiter Nrnberg (Dorothee Knop); Herstellungsleiter N√ºrnberg (Dorothee Knop); Hidalgo 151 (Howard Augusto Castillo); Hidalgo 151 ACM Area 1 (Javier De La Fuente (On Leave)); Hidalgo 151 ACM Area 1 (Javier De La Fuente); Hidalgo 151 ACM Area 2 (Lucio Jaramillo); Hidalgo 151 QA (Becky S Diaz); High Speed Packaging (Jrg Sthli); High Speed Packaging (J√ºrg St√§hli); High Speed Packaging Line (Peter Zysset); Highland Park 138 (Miriah Grady); Highland Park 138 (Mondel Hightower); Highland Park 138 ACM Area 1 (Miriah Grady); Highland Park 138 ACM Area 1 (T''Pring John); Highland Park 138 ACM Area 2 (Dee Freeman); Highland Park 138 QA (Jenae Jacobs); Highland Park 138 QA (Shawna Taylor); Highland Park 138 QA (Slater P Murphy); Hillsboro 126 (Elizabeth Manning); Hillsboro 126 ACM Area 1 (Alex Steinke); Hillsboro 126 ACM Area 2 (Dan Jordan (On Leave)); Hillsboro 126 ACM Area 2 (Paige N Zafran); Hillsboro 126 QA (Grant Haun); Hizentra Marketing (Michael Ward); Hobart 218 (Kevin Robinson); Hobart 218 (Sherri L Clark); Hobart 218 ACM Area 1 (Michele Tosseng); Hobart 218 ACM Area 2 (Ashley Myvett); Hobart 218 ACM Area 2 (Kevin Robinson); Hobart 218 QA (Drewleigha B Sarver (Inherited)); Hobart 218 QA (KayLeigh Northcutt); Hokkaido Tohoku Area (Masahiro Takai); Homestead 207 (Mary A Paul (Inherited)); Homestead 207 (Roger Jiron); Homestead 207 (Stacey Ewing); Homestead 207 ACM Area 1 (Monica Alvelay); Homestead 207 ACM Area 2 (Julio Delgado); Homestead 207 ACM Area 2 (Roger Jiron (Inherited)); Homestead 207 QA (Natasha Roopnarine); Homestead 250 (Diane Day); Homestead 250 ACM Area 1 (Ryan Olsavsky); Homestead 250 ACM Area 2 (Jamille Ford); Homestead 250 QA (DENNIS GINTHER); Houston 208 (Sara Bouras); Houston 143 (Josh Concepcion); Houston 143 ACM Area 1 (Sharon K Easiley); Houston 143 ACM Area 2 (Oscar Beasley); Houston 143 QA (Shawntrala Stephens); Houston 168 (Lisa Rojas); Houston 168 ACM Area 1 (Lisa Wilson); Houston 168 ACM Area 2 (Elizabeth Morales); Houston 168 ACM Area 2 (Tascha Montgomery); Houston 168 QA (Sam Schultz (Inherited)); Houston 168 QA (Tara West); Houston 208 (Sara Bouras); Houston 208 ACM Area 1 (Sara Bouras (Inherited)); Houston 208 ACM Area 1 (Sarah L Terry); Houston 208 ACM Area 2 (Marc Garcia); Houston 208 ACM Area 2 (Sarah L Terry); Houston 208 QA (Darriel Clark (On Leave)); Houston 208 QA (Darriel Clark); Houston 208 QA (Elaine R Wilson); Houston 209 (Erin Ostean); Houston 209 (Sheneka E Wilson); Houston 209 ACM Area 1 (Charles Minter (On Leave)); Houston 209 ACM Area 1 (Charles Minter); Houston 209 ACM Area 2 (Adrean N Brown); Houston 209 ACM Area 2 (MARY MEADOWS); Houston 209 QA (Barbara May); Houston 209 QA (Keva M Williams); Houston 274 (Brian T Edwards); Houston 274 ACM Area 1 (Reiko F Hernandez); Houston 274 ACM Area 2 (Tyriana T Shaw); Houston 274 QA (Lawrence Jones); Human Resources Labor and Employee Relations (Christine Adams (On Leave)); Human Resources & Communications (Sandro Krug); Human Resources & General Affairs, Japan (Akira Nakajima); Human Resources & General Affairs, Japan (Mayumi Gonome - ); Human Resources & General Affairs, Japan (Mayumi Gonome ??? ??? - ???? ????); Human Resources (Bonnie Shor); Human Resources (Bonnie Slone); Human Resources (Gyuri Endes); Human Resources (Jacqueline Hawkins); Human Resources (Nicole Bookert); Human Resources (Tanja Templer); Human Resources (Tanya Kennedy); Human Resources Kankakee (Jacqueline Hawkins); Human Resources Management (Adam Williams); Human Resources Management (Pia Daish); Human Resources Organisation Transformation (Paula Foord); Human Resources SC (Nicole Bookert); Human Resources Talent Development (Michael O''Connor); Human Resources ¬ñ Labor and Employee Relations (Christine Adams); Human Resources ¬ñ Labor and Employee Relations (Jacqueline Hawkins (Inherited)); Human Resources, China (Grace Deng ?????); Human Resources, China (Tracy Lyu ?????); Hygiene (Arno Karnholz (Inherited)); ICSR Compliance and Reconciliation (Samantha Gan); IG / Albumin Bulk (Anthony Manovella); IG / Albumin Bulk (Jill Allen); IG Lab (Ritaben Suhagiya (Inherited)); IG Lab (Ritaben Suhagiya); IG Lab (Tom McCallum); IG Lab (William Fenech); IM (Guido Kagemann); IM Modul (Arnd Vollmerhausen (Inherited)); IM Modul (Torsten Jeide); IMED / Clinical (Julia Daum); IP Management (Helen Mutimer); IP Management (Peter Gomme); IP Management (Philip Keep); IR Higashi Nihon Area (Takahiro Tsuruta); IR Kansai Chubu Area (Yutaka Fujita); IR Nishi Nihon Area (Takayuki Sakai); IR Shutoken Area (Hiroki Nagayasu); IS Applications (Markus Fhrer); IS Applications (Markus F√ºhrer); IS Operations (BAHA ATICI); IS Operations (Bernd Boucsein (Inherited)); IS Operations (Robert Rohs); IT (Martin Jones); IT Americas (Stephen Norman Bender); IT Americas (Steve Bender); IT Americas, Laboratories (Dave Kirk); IT Americas, Site IT (DEBORAH BUREC); IT Americas, Site IT (Deborah Burec); IT Applications (Pavan Dronamraju); IT Applications - SharePoint (Emma Tibble); IT Asia Pacific (Gavin Gusling); IT Automation BMW (Daud Warraich (Inherited)); IT Automation BMW (John Croxton (Inherited)); IT Automation BMW (John Croxton); IT Automation BMW (Reto Von Gunten); IT Automation BMW (Stephen Pickering); IT Business Applications (Paul Ashton); IT Communications Services (Nealesh Mistry); IT Compliance - EMEA (David Boyd); IT Compliance - Infrastructure (Neil Broster); IT EMEA (Martin Gallington); IT EMEA Infrastructure (Chris Gatley [C]); IT Infrastructure - Hosting (Sadha Venkatachellam); IT Infrastructure - Hosting (Sathasivan Venkatachellam); IT Infrastructure Service - Identity & Desktop (Rob Deacon); IT Infrastructure Services (Quentin Zhao); IT Security & Compliance (Bob DeMarco); IT Security & Compliance (Robert DeMarco); IT Security and Compliance (Alan Butterfield); IT Security and Compliance (Alan Matthew Butterfield); IT Service Management Office (Richard Williams); IT Services (Daniel Robinson); IT Vendor Contracts (Craig Skelton); IVV Bacteriology (Matthew Stellato); IVV Chemistry (Thuy Dang); IVV Environmental Monitoring (Andrea Chalker); IVV Potency + Biochemistry US (Corina Zahra); IVV Potency, Biochem Rest of World (Anna Gruszka); IVV Seed Development (Brad Dickson); Identity and Access Management Operations (Bill Keohane); Ig Marketing (Sara Cowan); Ig&API Franchise Marketing (Amlie De Rosnay) (52388632); Ig&API Franchise Marketing (Am√©lie De Rosnay) (52388632); Ig&API Franchise Marketing (Emmanuelle Massonie) (52388632); Ig&API Sales Force Florent Privat (Emeline Bedu) (52388634); Ig&API Sales Force Florent Privat (Florent Privat) (52388634); IgG & Albumin/Supply Chain PMR Main Site (Barbara Kalina (Inherited)); IgG & Albumin/Supply Chain PMR Main Site (Wilfried Freudenberg); IgLAB (Franz Petter); IgLAB Bulk formulation (Susanne Gilgen); IgLAB Bulk purification (Thomas Eckert); IgLAB MV&VI Bulk Formulation (Sandra Kaempfer); IgLAB MV&VI Bulk Purification (Mathias Schinegger); IgLAB MV&VI Subfractionation (Markus Hauert); IgLAB Subfractionation (Mark Deutschland); IgLAB Subfractionation (Markus Hauert); IgLAB Subfractionation (Susanne Gilgen); IgLAB Subfractionation (Thomas Daehler); IgLABMV&VI (Marius Liesch); IgPRO (Markus Weber); Immunoglobulin Asset (Fritz Rentsch); Immunohematology Lab (Maria R Fernandez (Inherited)); Immunohematology Lab (Peter A Fitzgerald (Inherited)); Immunology & Neurology CommDev Marketing (Michael Ward); Immunology & Neurology Medical Affairs (Andrew Koenig); Immunology & Neurology New Products (Regula Styger Baumann); Immunology & Neurology RDPM (Karen Lindquist); Immunology & Neurology RDPM I (Sabine Alexandra Stoffel Domig); Immunology & Neurology TA (Jay Bowsher); Immunology & Rare Disease Group (Hirokazu Imura); Immunology & Rare Disease Group (Shinichiro Magome); Immunology & Rare Disease TA (Takuya Ohshima); Immunology (Bernhard Czapla (Inherited)); Immunology (Claudia Bachmann (Inherited)); Immunology (Heinrich Feischen (Inherited)); Immunology (Helen Hartman); Immunology (Holger Milkereit (Inherited)); Immunology (IMM) (Stefan Spycher); Immunology (Michael Bernd Rode (Inherited)); Immunology (Rachpal Malhotra); Immunology (Rainer Adam (Inherited)); Immunology (Ralf Kosmol); Immunology (Susanne Mller (Inherited)); Immunology (Susanne M√∂ller (Inherited)); Immunology - Transplant (Mircea Ciuca); Immunology / Transplant (Mircea Ciuca); Immunology Lab (Maria R Fernandez (Inherited)); Immunology Lab (Peter A Fitzgerald (Inherited)); Immunology Marketing (Bernadine Koziara (Inherited)); Immunology Marketing (Biju Chorinchath); Immunology Marketing (JD Kohutka); Immunology New Products (Regula Styger Baumann); Immunology TA (Jay Bowsher); Immunology TA Marketing (Michael Ward); Immunology TA Medical Affairs I (Andrew Koenig); Immunology and Neurology TA (Sharon Popik); Immunology and Neurology TA (Susanne Wang); Immunology-North America (Ian Gourley); Import / Export Lead AU (Robert Bronikowski); Import Export Compliance (MICHAEL MORRILL); Import Export Compliance (Markus Buri); Import Export Compliance (Michael Morrill); Import-Export Compliance (Nona Clarke); Import/Export Compliance (Neda Nikolic); Import/Export Compliance (Nona Clarke); Impurity & Data Management (Patricia Lieby); Impurity Data Mngt I (Madlene von K√§nel); Impurity Data Mngt I (Simona Pfister); Impurity and Data Mngt II (Tanja Angela Nyffenegger); In Market Logistics EMEA (Avi Yuhay); In-Market Logistics Turkey/EEU (Avi Yuhay); Incoming Quality Assurance (Jamie Nichols); Incoming Quality Assurance (Lynette Mirrielees); Incoming Quality Assurance GL (Cindy Rocknowski (Inherited)); Incoming Quality Assurance GL (Jeffrey Zoubek (Inherited)); Indianapolis 146 (Brian W Stewart); Indianapolis 146 (Randy Miller); Indianapolis 146 QA (Erik Tharp); Indianapolis 146 QA (Randy Miller); Indianapolis 181 (Jami Colson); Indianapolis 181 ACM Area 1 (Dayneisha G Pinkston); Indianapolis 181 ACM Area 1 (Jordan Swoape); Indianapolis 181 ACM Area 2 (Ronnisha Banks); Indianapolis 181 QA (Aja Blue); Indianapolis 181 QA (Drewleigha B Sarver); Indianapolis 181 QA (Robin L Oldaker); Indianapolis 412 (LaToya M Hinkle); Indianapolis 412 ACM Area 1 (Brian Stewart (On Leave)); Indianapolis 412 ACM Area 1 (Brian Stewart); Indianapolis 412 ACM Area 2 (Latoria J Moore); Indianapolis 412 QA (Ashley Kemper); Indirect Procurement (Daniela Ebert); Industriekaufleute (Carmen Walldorf (Inherited)); Industriekaufleute (Doris Nake (Inherited)); Indy 146 ACM Area 1 (Sara K Campbell); Indy 146 ACM Area 1 (Sara K Sheets); Indy 146 ACM Area 2 (Joe Hicks Jr); Influenza (Chris Clarke); Influenza Operations (Bill Cracknell); Influenza Vaccines (Carole Verhoeven); Influenza Vaccines Seasonal (Athanasia Papadimitriou); Influenza Vaccines Seasonal (Jonathan Edelman (Inherited)); Influenza and National Products, Global RA (Lisa Steinberg); Information Security (Federico Iaschi); Infrastructure Design (Jeremy Finlay); Infrastructure Excellence & Process Management (Stephan Krummel); Infrastructure Program Manager (Jessica Bartels); Infrastructure Program Mgr (Jessica Bartels); Infusion Science - ISS (Lisa Barrett); Infusion Science - ISS (Lisa Barrett); Inhibitors, FI, FXIII & Support/Supply C (Barbara Kalina (Inherited)); Inhibitors, FI, FXIII & Support/Supply C (Wilfried Happel); Innovation (Becky Heatherman); Inoculation (Jubail Dimabuyu); Inspection & Packaging (Jonathan Kanczes); Inspection & Packing (Ben Hagger); Inspection & Packing (David Nguyen); Inspection & Packing (Joanna Madafferi (Inherited)); Inspection & Packing (Joanna Madafferi); Inspection (Pasquale Carestia (Inherited)); Inspection (Thomas Royal); Inspection (Union) (Pasquale Carestia (Inherited)); Inspection (Union) (Thomas Royal (Inherited)); Inspection semi final prod. 4 (Samira Spahn-Belbaita); Instrum & Elect Engineer (Justin Lim); Instrumentation (Colin Steele); Integrated Business Planning (Avi Gor√©); Integrated Business Planning (Avinash Gor); Integrated Business Planning (Avinash Gor√©); Integrated Business Planning (Jamie Pritzl); Intercontinental Supply Chain (Oliver Wehner); Internal Communications (Claudine Heinz); Internal Communications (Jasmin Joller); Internal Communications (Laura Kumpe); Internal Services (Reto Moser); Internal processes (Ernst Scheurer); International Logistics - Intermediates, Special Shipments (Julia Daum); International Logistics - Team Americas - APAC (Anna-Karina Muth); International Logistics - Team Americas / APAC (Anna-Karina Muth); International Logistics - Team EMEA (Christoph Mueller); International Payroll (Clair Burke); International Plasma Operations (Jeffrey A Schulz); Interns (Jacqueline Hawkins (Inherited)); Investigation & Process Owners (Ryan Cox); Investor Relations (Mark Dehring); Invoice Control & Invoicing of Services (Harald Bieker (On Leave), Beatrix Gnau); Invoice Control & Invoicing of Services (Harald Bieker); Irondequoit 246 (Sheilah Mykins); Irondequoit 246 ACM Area 1 (Nicole Chipembere); Irondequoit 246 ACM Area 2 (Teresa Moreira-Weil); Irondequoit 246 QA (Meghan Beckedorf); Italian Commercial Finance (Laura Lucaroni); JPN TA (Coagulation) (Yuki Hidaka); JPN TA (Critical Care) (Osamu Tsukamoto); JPN TA (Immunology) (Satoshi Koike ??? ? - ??? ???? (Inherited)); JPN TA (Immunology) (Tomomi Shibata); JPN TA (Osamu Tsukamoto); Jackson 156 (Chris Weary); Jackson 156 (Jose L Dela Garza (Inherited)); Jackson 156 ACM Area 1 (Chris Weary); Jackson 156 ACM Area 1 (Joseph Dupree); Jackson 156 ACM Area 2 (Adrian Johnson); Jackson 156 QA (Bonnie M Talbott (Inherited)); Jackson 156 QA (Cynthia Hill); Jackson 156 QA (Jose L Dela Garza (Inherited)); Jackson 156 QA (Savannah Vann); Jackson 205 (Mark Bundy); Jackson 205 ACM Area 1 (Erica R Smith); Jackson 205 ACM Area 2 (Kenny Berry); Jackson 205 QA (Marc D Fisher); Jackson 205 QA (Nicole Pichla (On Leave)); Jackson 205 QA (Nicole Pichla); Jackson 225 (Bonnie M Talbott (Inherited)); Jackson 225 (Cherita Saulmarshall); Jackson 225 (Jai Baylis); Jackson 225 (Kronnetra Hester); Jackson 225 ACM Area 1 (Mariyo Archie); Jackson 225 ACM Area 2 (Jose L Dela Garza); Jackson 225 ACM Area 2 (Stanley Taylor); Jackson 225 QA (Deborah L Baker); Jackson 225 QA (Keyauna Lewis); Jackson 257 (Sarah E Silva); Jackson 257 ACM Area 1 (Caitie Golubski); Jackson 257 ACM Area 2 (Jarrett Heathcock); Jackson 257 ACM Area 2 (Sarah E Silva (Inherited)); Jackson 257 QA (Brooke McKinney); Jacksonville 251 (Sherri L Clark); Jacksonville 251 ACM Area 1 (Gina Castellano); Jacksonville 251 ACM Area 2 (AlexZandria Taylor); Jacksonville 251 QA (Brett A Wintheiser (Inherited)); Jacksonville 251 QA (Cindy Vieira); Japan Clinical Safety & Pharmacovigilance (Mariko Hase); Japan Field Services (Satoru Shimizu); Japan Project Management (Midori Kobayashi); Johnston 242 (Catherine Colucci); Johnston 242 (John L Thixton); Johnston 242 (Renee Keyser); Johnston 242 ACM Area 1 (Son Nguyen); Johnston 242 ACM Area 2 (Cessa Piedra); Johnston 242 QA (Allante S Williams); Johnston 242 QA (Erin Thompson); Joliet 219 (Andrew Franzen); Joliet 219 (Christopher J Rivers Jr); Joliet 219 ACM Area 1 (Sharon Kunz); Joliet 219 ACM Area 2 (Duanita Scott); Joliet 219 QA (Beth Majewski); Joliet 219 QA (Lori Carlson (Inherited)); Joliet 219 QA (Ryan Welter); Jonesboro 120 (Maurice E Clements); Jonesboro 120 ACM Area 1 (Jumela S Bell); Jonesboro 120 ACM Area 1 (Sade Hodges); Jonesboro 120 ACM Area 2 (Denise Bloodsaw); Jonesboro 120 ACM Area 2 (Jumela S Bell); Jonesboro 120 QA (Laila Matthews-El); Jonesboro 120 QA (Rose-Marie O Bland); K-C Fractionation (Union) (Jason Vaughn); K-C Fractionation (Union) (Samuel Jackson); KAN Security (Adam Kennell); KC Module 3 Operational Readiness (Cornelis Rijneveld); KOP Corporate Services (Michael Hays (Inherited)); KOP Corporate Services (Wendy Kilp) (Wendy Kilp); KOP Facilities (Michael Hays); KOP Outsourcing (Melissa Hurst); KOP Security (Shanna Aldridge); KOP Sourcing (Ed Rosario); KOP Sourcing (Paul Addis (Inherited)); Kankakee Field Services (Rebecca Liehr); Kankakee Manufacturing (Ernest Shepard); Kankakee R&D Tech Transfer (Shannon Boudreau); Kansai Area (Shingo Fujiwara); Kansai Area (Tatsuto Aihara); Kansas City 011 (Cristina E Ceniceros); Kansas City 011 (Tina Wagenknecht); Kansas City 011 ACM Area 1 (Dustin Irish); Kansas City 011 ACM Area 2 (Cristina E Ceniceros); Kansas City 011 ACM Area 2 (Samuel Jordan); Kansas City 011 QA (Cole D Kimple (Inherited)); Kansas City 011 QA (Samuel Anderson); Kansas City 011 QA (Whitney A Dean); Kansas City 410 (Cristina E Ceniceros); Kansas City 410 (Tina Wagenknecht); Kansas City 410 (Trethan R Copeland); Kansas City 410 ACM Area 1 (Jackie Florez); Kansas City 410 ACM Area 2 (Trethan R Copeland); Kansas City 410 QA (Kimberly S Mangold); Kansas City 410 QA (Whitney A Dean); Kaufmann f√ºr B√ºrokommunikation (Doris Nake (Inherited)); Kcentra Marketing (John Nicastro); Kcentra Marketing (Tom Groeling (Inherited)); Kcentra Marketing Group (Shunsuke Kuwata ??? ?? - ??? ??????); Kcentra Marketing I (Sara Cowan); Kenner 149 (Chris Weary); Kenner 149 (Durrell Arceneaux); Kenner 149 (Michael Markey); Kenner 149 ACM Area 1 (Brittany Miles); Kenner 149 ACM Area 2 (Teresa Currence); Kenner 149 QA (Centrell J Jackson); Kent 112 (David M Wilson); Kent 112 (Diana H Ek); Kent 112 ACM Area 1 (Diana H Ek (Inherited)); Kent 112 ACM Area 1 (Trevor Case); Kent 112 ACM Area 2 (Wesley Noble); Kent 112 QA (Brian Patterson); Kent 112 QA (Robert D Coulter); Kent 112 QA (Sasima Teadwatanasuk); Kent 160 (Michael J Ryan); Kent 160 ACM Area 1 (Brandy M Cermak); Kent 160 ACM Area 2 (Bambi C Gonwa); Kent 160 QA (Jamie L Dunderman); Kent 160 QA (Jamie L Matheney); Key Account Management (Alexander Kahlau); Key Account Management (Shun Huang ); Key Account Management (Shun Huang ????); King of Prussia Field Services (Cheryl Fennell); King of Prussia Field Services (Joy Holland); King of Prussia Field Services (Mary Jane McPherson (Inherited)); King of Prussia Quality (Brian Puglisi); Kitakanto Shinetsu Area (Hideo Yonesaka); Knowledge Management (Jacqui Altman); Knowledge Management (Kim Vandenberg); Knowledge Management (Leanne Cummings); Knoxville 405 (Brianna E Ballew); Knoxville 405 (John W Kelly); Knoxville 405 (Keith Clemons (Inherited)); Knoxville 405 ACM Area 1 (Michael R Thomas); Knoxville 405 ACM Area 2 (Leighann N Miller); Knoxville 405 QA (Tina G Ledbetter); Knoxville 405 QA (Tina Grubb Ledbetter); Kommunikation (Stephanie Fuchs); Konzessionen/Brandschutzbeauftragter (Michael Knoll (On Leave)); Konzessionen/Brandschutzbeauftragter (Michael Knoll); Koordination und Dokumentation (Rainer Frank (Inherited)); Kyushu Okinawa Area (Akihiro Enomoto); L&D, Apprentices KV (Ruth Schmid); LATAM RA (Andrea Violante); LATAM Sales Ops CAM & CAR (Mariano Miri); LVP Sterility Assurance (Sara Kimmins); La Crosse 516 (Ranee Bloor); La Crosse 516 QA (Sara Martin); Lab Automation (Ann L Wickenheiser); Lab Facilities (Joel Jones); Lab Inventory (Joel Jones (Inherited)); Lab Operations (Diep Chau); Lab Operations, Bio21 (Kirsten Edwards); Labor Relations (Steven Stewart); Laboratory Management Pasadena (Anthony Navarro); Laboratory Management - Pasadena (Anthony Navarro); Laboratory Operations (Constance W Farrar); Laboratory Operations (Marleen Enthoven); Laboratory Operations (Ricky R Alexander); Laboratory Systems (Amit Krishna); Lackawanna 238 (Martin Szczublewski); Lackawanna 238 ACM Area 1 (Brent Hollingsworth); Lackawanna 238 ACM Area 2 (Allie Tuttle); Lackawanna 238 QA (Anita Brenon); Lackland 706 (Ovetta A Mickles); Lackland 706 ACM Area 1 (Gabriel J Martinez); Lackland 706 ACM Area 2 (Ariel Schiller); Lackland 706 ACM Area 3 (Nate Neal II); Lackland 706 QA (Amber Sanders); Lackland 706 QA (Brenton Ferguson); Lager/Ersatzteilmanagement (Leon Krupa); Lakeland 154 (Elizabeth Adkins); Lakeland 154 ACM Area 1 (Jeffrey Simmons); Lakeland 154 ACM Area 2 (Bralyn T McCullough); Lakeland 154 QA (Crystal L Reichard); Lakeland 154 QA (Matthew Smith (Inherited)); Lakeland 154 QA (William Forquignon); Lansing 042 (Debbie L Duhe); Lansing 042 ACM Area 1 (Elizabeth Lawhon); Lansing 042 ACM Area 2 (Ruth A Griffin); Lansing 042 QA (Christine M Leija); Lansing 118 (Angie K Fedewa); Lansing 118 ACM Area 1 (Douglas Fiedler); Lansing 118 ACM Area 2 (Toussaint Hodari); Lansing 118 QA (Jessica Babcock); Las Cruces 506 (Samuel V Grijalva); Las Cruces 506 ACM Area 1 (Jacquelyn Jaques); Las Cruces 506 ACM Area 2 (Ira Bacani); Las Cruces 506 QA (Linda Dutchover); Las Vegas 081 (Jolena Lee); Las Vegas 081 (Michele Purvines-Honzo); Las Vegas 081 ACM Area 1 (Austin Vinson); Las Vegas 081 ACM Area 2 (Kevin Wallace); Las Vegas 081 ACM Area 3 (Christian Marcus); Las Vegas 081 QA (Erica Wiley); Las Vegas 081 QA (Paul Warden (Inherited)); Las Vegas 081 QA (Yaritza Monarrez); Las Vegas 172 (TIM AVILA); Las Vegas 172 (Xang Vang); Las Vegas 172 ACM Area 1 (Lashay Anter); Las Vegas 172 ACM Area 1 (Sarah C Sweat); Las Vegas 172 ACM Area 2 (Jessica L Jabbora); Las Vegas 172 QA (ANGELICA WILSON); Las Vegas 172 QA (Aaron D Learn); Las Vegas 216 (Erica Wiley); Las Vegas 216 (Nicole M Loncon); Las Vegas 216 ACM Area 1 (Erica Wiley); Las Vegas 216 ACM Area 1 (Michael Dako); Las Vegas 216 ACM Area 2 (Erica Wiley); Las Vegas 216 ACM Area 2 (Jose D Garcia); Las Vegas 216 QA (Orlando R Edwards Jr); Las Vegas 501 (Cari N Howard); Las Vegas 501 ACM Area 1 (Lissa Elswick); Las Vegas 501 ACM Area 2 (Steven G Simpson); Las Vegas 501 QA (Miranda Banks); LatAm Supply Chain (Martin Rossini); Late DSP Development (Erik Hinze); Late DSP Development (Tobias Brandt); Late Stage DSP Development (Erik Hinze); Late Stage DSP Development (LDD) (Uwe Liebing); Late Stage DSP Development (Tobias Brandt); Late Stage DSP Development (Uwe Liebing (Inherited)); Late USP Development (Jasmine Roth); Latin America (Juan Feliu); Latin American Distributors (Jean-Claude Andr); Latin American Distributors (Jean-Claude Andr√©); Lawrence 012 (Amy L Jackson); Lawrence 012 (Cole D Kimple (Inherited)); Lawrence 012 (Jessey Johnson); Lawrence 012 ACM Area 1 (Jessey Johnson (Inherited)); Lawrence 012 ACM Area 1 (Laura Hassen); Lawrence 012 ACM Area 2 (Taniesha D Kopriva); Lawrence 012 QA (Adam Loop); Lawrence 012 QA (Jessey Johnson (On Leave)); Lawrence 012 QA (Jessey Johnson); Lawrenceville 186 (Domonique T Walker); Lawrenceville 186 ACM Area 1 (Jeffrey Toussaint); Lawrenceville 186 ACM Area 2 (Ahesha M Francis); Lawrenceville 186 QA (Brandon Bailey); Lawton 452 (Natalie Compher); Lawton 452 (Vicky Sablan (On Leave)); Lawton 452 (Vicky Sablan); Lawton 452 ACM Area 1 (Jace A Guthrie); Lawton 452 ACM Area 2 (Samuel Jones); Lawton 452 QA (Adam Loop); Lawton 452 QA (Tiffany N Oxley); Layout & Packaging Planning (Martina Schweyer); Lead BP Finance - Asia Pac (Brendan Safe); Lead Clinical Oversight Manager (Anja Br√§unlich (Inherited)); Learning and Development (Amy Jackson); Learning and Development (Ann Lescher); Learning and Development (Henry Docx); Learning and Development (Karen A Emord); Learning and Development I (Henry Docx); Legal (Christine Dragann); Legal (Melissa Merriweather); Legal - Americas (John Neff); Legal - Australia (Amy Demediuk); Legal - Australia (Fiona Mead); Legal - Australia (Patrick Brady); Legal - Australia (Phyllis Perkins); Legal - Australia (Raewynn McIntyre); Legal - Australia (Tom Reid); Legal 1 (Khalil Rogers); Legal Clinical (Brian Sharma); Legal Counsel, Commercial, North America (Michael O''Connor); Legal Department APAC (Mae Chen ); Legal Department APAC (Mae Chen ?????); Legal Department Bern (Niklaus Kraehenbuehl); Legal Department Bern (Philippe Mueller); Legal Department Marburg (Dennis Kraft); Legal Operations Europe, Asia, Intercon. (Gereon Backmann); Legal Partners (Antje Michel); Legal Services (Sam Benyamin); Legal Services Europe & ICO (Charlotte Tvede Andersen); Legal Support One Commercial Operations Europe (Gereon Backmann (Inherited)); Legal Support One Commercial Operations Europe (Gereon Franz-Josef Backmann (Inherited)); Legal Support One Commercial Operations Europe (Gereon Franz-Josef Backmann); Legal ZLB Plasma (located in KOP) (Eric Silberstein); Lengnau Administration & Office Management (Boris Lanoir (Inherited)); Lengnau Administration & Office Management (Natasha Jackson); Lengnau Amenities Support (Franz Renfer); Lengnau Business Operations Services (Guenther Baumgartner); Lengnau Execution Systems (Frank Mastellone); Lengnau Facility Project (Darren Vegara); Lengnau Facility Project (Paul Loxley); Lengnau Human Resources (Ece Ergin [C]); Lengnau Human Resources (Sandro Krug (Inherited)); Lengnau Program (Nina Walser); Lengnau Program (Urs Meyer); Lengnau Project Documentation (Anamaria Negura); Lengnau Project Documentation (Mairead Henry [C]); Lengnau Project Documentation (Thorsten Buergel [C]); Lengnau SC and IBP (Marco Restelli); Lernende Logistik (Silvio Beck); Lexington 053 (Bobby R Fields Jr); Lexington 053 (Morgan R Grose); Lexington 053 ACM Area 1 (Jamale R Gentry); Lexington 053 ACM Area 2 (A.J. Stevenson); Lexington 053 QA (Michele R Estepp); Lexington 404 (Chris Otto); Lexington 404 ACM Area 1 (Ben Jones); Lexington 404 ACM Area 2 (Nathan J Fox); Lexington 404 QA (April Tyler); Lexington 404 QA (Bailee E White); Lichuan Plasma Collection Center (Jun Lai ); Lichuan Plasma Collection Center (Jun Lai ????); Lifecycle Management (Kathrin Eichstdt); Lifecycle Management (Kathrin Eichst√§dt); Lincoln Park 101 (Miriah Grady); Lincoln Park 101 (Toni M Walden); Lincoln Park 101 ACM Area 1 (Jeanette M Love-Ellison); Lincoln Park 101 ACM Area 2 (Dion J Holland); Lincoln Park 101 QA (Jenae Beacham); Lincoln Park 101 QA (Latosha Y Floyd (Inherited)); Lincoln Park 101 QA (Remie T Ray); Linden 212 (Jennifer Luque); Linden 212 ACM Area 1 (Jennifer Luque (Inherited)); Linden 212 ACM Area 1 (Matthew Clayborn); Linden 212 ACM Area 2 (Paul Eatman); Linden 212 QA (Jaleeka Johnson); Linden 212 QA (Stephanie D Shah (Inherited)); Linden 212 QA (Wendy MacConnell); Little Rock 234 (Seth Stuerke); Little Rock 234 ACM Area 1 (Charlie Hollinquest-Ford); Little Rock 234 ACM Area 2 (Ben Kulpa); Little Rock 234 QA (Akira Crenshaw); Logisitcs Manager VIC 266 (John Turone (Inherited)); Logistics (Angela Schembri); Logistics (Brendan Xerri); Logistics (Carl Werner (Inherited)); Logistics (Christopher Pela Fuaiva''a); Logistics (Dalal Mikhaeel); Logistics (Ibs Kaygisiz); Logistics (Ljubinka Duzel); Logistics (Peter Trimcevski); Logistics (Sam Mekhael (Inherited)); Logistics (Sam Mekhael); Logistics (Sebastian Sarmiento); Logistics (Tracy McIntosh); Logistics - Purchasing (Benjamin Fruin); Logistics - Purchasing (CHERYL GOODWIN); Logistics - Purchasing (Sue Savage); Logistics APAC (Edwin Chia); Logistics Customer Group (Namiko Hirakawa); Logistics I (Harald Mller (Inherited)); Logistics I (Harald M√ºller (Inherited)); Logistics Operations (Kai Menz); Logistics Operations (Koji Sugihara - ); Logistics Operations (Koji Sugihara ??? ?? - ???? ????); Logistics Operations - LATAM (Bruno Arakaki); Logistics Operations Australia (Suzanne Johnson); Logistics Operations Customer Service (Kaye McConnell); Logistics Operations Customer Service (Tanja Wells); Logistics Operations Europe (Matthias Loth); Logistics Operations Manager (John Turone); Logistics Operations- Americas (Daniel Sweed); Logistics Operations- Americas (Marianne McDonald (Inherited)); Logistics Planning Group (Takayuki Kato); Longwood 195 (Annette Nelson); Longwood 195 (Brian D Kelly); Longwood 195 ACM Area 1 (Jenna Smith); Longwood 195 ACM Area 1 (Vincent Spencer); Longwood 195 ACM Area 2 (Jessica Greene (On Leave)); Longwood 195 ACM Area 2 (Lori B Warfield); Longwood 195 QA (Brian Murzycki); Longwood 195 QA (Christopher Davis); Longwood 195 QA (John Garrett); Look Back / PDI (Julia Schimansky); Louisville 054 (Susan D Bensing); Louisville 054 ACM Area 1 (Tish Farris); Louisville 054 ACM Area 2 (Heather Romines); Louisville 054 QA (Gary Loy II); Louisville 054 QA (Keith Clemons (Inherited)); Louisville 054 QA (Melissa Casaus); Louisville 054 QA (Melissa J Roberts); Luotian Clinical Inspection (Yongmin Lv ?????); Luotian Inspection Management (Zengyi Chen ?????); Luotian Inspection Professional (Jiwei Liu ); Luotian Inspection Professional (Jiwei Liu ?????); Luotian Inspection Professional (Zheng Liang ????); Luotian Office Administration (Xiaoping Tang ?????); Luotian Office Administration (Zhen Zhang ); Luotian Office Administration (Zhen Zhang ????); Luotian Plasma Center Quality (Lixia He ?????); Luotian Plasma Collect (Jiali Fan ); Luotian Plasma Collect (Jiali Fan ?????); Luotian Plasma Collection (Meng Zhou ????); Luotian Plasma Collection (Menghua Ye (Inherited)); Luotian Plasma Collection (Shuiqiao Xiao ?????); Luotian Plasma Collection Center (Cunwei Hou ?????); Luotian Plasma Collection Center (Xiaoping Tang ); Luotian Plasma Collection Center (Xiaoping Tang ?????); Luotian Plasma Sourcing (Xiaoling Wang ); Luotian Plasma Sourcing (Xiaoling Wang ?????); Luotian Plasma Sourcing Management (Caihong Cheng ?????); Luotian Quality Management (Menghua Ye ); Luotian Quality Management (Menghua Ye ?????); Luotian Quality Management (Zheng Liang ????); Luotian plasma source management (Yongmin Lv ); Luotian plasma source management (Yongmin Lv ?????); Lyophilization (Jean-Claude Cauderay); M99 (Guido Mller); M99 (Guido M√∂ller); M99 (Marius Liesch); M99 NVI (Michael Theilkaes); M99 VVI (Marcel Mosimann); MDM Operations (Chandra Karpuram); MES & Systems (Reto Von Gunten); MES Automation (Gary Steele); MES Koordination (Horst Boeder (Inherited)); MES Koordination (Ralf Dersch); MF-59 (Gerhard Seemann (Inherited)); MFG Berinert & Beriplex Production (Jonathan Signore); MFG Berinert & Beriplex Production (Union) (Jonathan Signore); MS&T Lead (Kevin Murphy); MS&T Liverpool (Lisa-Marie Foulkes); MSAT (Matthias Kaeser); MSL Manager (Claire Morgan); MST Labor 1 (Anne N√∂ll); Macon 233 (Keyonna L Gray); Macon 233 (Lori B Warfield (On Leave)); Macon 233 (Melodee C Ebel (Inherited)); Macon 233 (Sherri L Clark); Macon 233 ACM Area 1 (Jennie Miles); Macon 233 ACM Area 1 (Lori B Warfield (On Leave) (Inherited)); Macon 233 ACM Area 2 (Gina Castellano); Macon 233 ACM Area 2 (Tomecia Tillman); Macon 233 QA (Teddye Gandy (On Leave)); Macon 233 QA (Teddye Gandy); Madison 076 (Tiffany K Singh); Madison 076 ACM Area 1 (Shelby N Grimsley); Madison 076 ACM Area 2 (Jada Phillips); Madison 076 QA (Alissa Elke); Madison 076 QA (Iricka Williams); Madison 076 QA (Prim J Cunningham (Inherited)); Main (Elizabeth Boyd); Maintenance & Reliability (Michael Elmer); Maintenance & Utilities (Franz Arnold Nigsch); Maintenance (Jeffrey Rhodes); Maintenance (Michael J Stephenson); Maintenance (Michael Memenga); Maintenance (Union) (Jose Franceschini Mirabal (Inherited)); Maintenance (Union) (Michael Memenga (Inherited)); Maintenance Engineering (Vittorio D''Argento); Maintenance K3 (Jose Franceschini Mirabal (Inherited)); Maintenance K3 I (Michael J Stephenson); Maintenance Officer (Jesse Chen); Maintenance Officer (Ray Belli); Maintenance Operations (Vittorio D''Argento); Maintenance SC I (Jeffrey Rhodes); Maintenance Support Engineer (James Stevens); Maintenance U8 (Simon Urfer); Maintenance U8 (Stefan Bgli); Maintenance U8 (Stefan B√∂gli); Maintenance and Utilities (Jose Franceschini Mirabal); Major Capital Projects (Brian Price); Management Accounting (Daya Salter); Management Accounting (RYAN HANSEN); Manager , IVV Seed Development (Karen Laurie); Manager - QA Batch Release (Linda Curran); Manager - QA Batch Release (Sherrin Gribble); Manager DGL (Heike Gocht); Manager ICA (Tim Karla); Manager IT Applications BPCS (Rod Randall); Manager Performance Qualification (Aaron Haag); Manager QA Batch Release (Anthony Day); Manager QA Batch Release (Carol Youssef); Manager QA Batch Release (Olivia Fisher); Manager QA Capability (Mark Machowicz); Manager QA Capability (Nicole Schaefer); Manager QA Capability (Vicky Gakias); Manager, DS Processing (Jesse Bodle); Manager, Field Services Australia (Bec Heitbaum); Manager, Field Services Australia (Travis Slessar); Manager, QA Cont Imp & Iss Mgt (Christopher Burke); Manager, QA Cont Imp & Iss Mgt (Janet Drew); Manager, QA Cont Imp & Iss Mgt (Jeremiah Holden); Manager, Virol & Immuno Res (Erin Verity); Manf Dir - Influenza Vaccines (Paul Morrison); Manf Dir - Influenza Vaccines (Vincent Chung); Manhattan 019 (Stacy J Teske); Manhattan 019 ACM Area 1 (Shane A Groover); Manhattan 019 ACM Area 2 (Dave Lynn); Manhattan 019 ACM Area 2 (Stacy J Teske (Inherited)); Manhattan 019 QA (Karen L Phillips); Manufacturing (Barbara Beugger); Manufacturing (Boris Lanoir); Manufacturing (Bradley J Eberhart); Manufacturing (James Janachowski); Manufacturing (Jose Gonzalez (Inherited)); Manufacturing (Katie Wood); Manufacturing (Martin Schaeren (Inherited)); Manufacturing (Matthew Seay); Manufacturing (Patricia Stewart (Inherited)); Manufacturing (Rene Bruegger); Manufacturing - Fill/Finish (Vincent Chung); Manufacturing A1 (Danica Bates); Manufacturing B1 (Trinette Farr); Manufacturing B2 (Michael Haney); Manufacturing Continuous Improvement (Trinette Farr); Manufacturing EU & APAC (Pierre Caloz); Manufacturing Engineering (Aaron Imig); Manufacturing Execution Systems (Frank Behnisch); Manufacturing Finance (Jacob Weaver); Manufacturing Finance (Jason Mugridge); Manufacturing First Shift (Tish Smith); Manufacturing HS (Chad M Salisbury); Manufacturing HS (Dave Sehgal); Manufacturing HS (Karen Netherton); Manufacturing Kankakee I (Jose Gonzalez (Inherited)); Manufacturing LVP (Jonah Smith); Manufacturing LVP (Nige Hilton); Manufacturing Liverpool (Jonah Smith); Manufacturing Operations (Steven Aldrich); Manufacturing PKV (Chris Larkins (Inherited)); Manufacturing PKV (Jonah Smith); Manufacturing Quality Management (Ramzan Tabasum); Manufacturing SC I (Matthew Seay); Manufacturing Science & Technology (Klaus Schmitt); Manufacturing Science & Technology (Klaus-Jrgen Schlitt); Manufacturing Science & Technology (Klaus-J√ºrgen Schlitt); Manufacturing Sciences and Technologies (Heidi Bergman); Manufacturing Second Shift (Michael Haney); Manufacturing Supply Chain & Integrated Business Planning (Pat Golla); Manufacturing Support (Clare Hughes); Manufacturing Support (Dee Hamer); Manufacturing Support (Marco Restelli); Manufacturing Support (Vreni F√∂rtsch); Manufacturing Technical Operations Team (Yuan Su ); Manufacturing Technical Operations Team (Yuan Su ????); Manufacturing Technology & Science (Christoph Hau√ümann); Manufacturing Third Shift (Michael Haney); Maple Shade 215 (Brett Goldman); Maple Shade 215 (Darryl King); Maple Shade 215 ACM Area 1 (Tracey Pinkney); Maple Shade 215 ACM Area 2 (Erica Hoesly); Maple Shade 215 QA (Deb Stith); Maple Shade 215 QA (Kimberly Perry); Marburg Data Management (Babette Katharina von Hagen); Marburg Field Services (Alexander Berendes); Margate 142 (Christina M Kokoszka); Margate 142 (Michelle S DeCambre); Margate 142 (Takisha F Jackson); Margate 142 ACM Area 1 (Amanda Bybee); Margate 142 ACM Area 1 (Kurt S Tuckett); Margate 142 ACM Area 2 (Kencia Cadet-Pa?ko); Margate 142 ACM Area 2 (Kencia Cadet-Pako); Margate 142 QA (Estela M Euceda); Margate 142 QA (Karen Blanchard-Sims); Market Access & Public Health Netherlands (Els Devriese); Market Access (Debbie Drane (Inherited)); Market Access France (Alice MATHERON); Market Access France (Franck Puget (Inherited)); Market Access GE/AT/Emerg. Europe (Dirk Hoheisel (Inherited)); Market Access GE/AT/Emerg. Europe (Ilona Krug); Market Access Italy (Lara Pippo); Market Access Russia & CIS (Batyrkhan Kuatov); Market Access Strategy (Robert Rouse); Market Access and Commercial Strategy (Ling Yang ????); Market Access and Public Affairs (Jonathan Galduf Cabanas); Market Access and Public Affairs (Jose Luis Moreno Sanchez); Market Access and Public Affairs (Sandra Santos); Market Research (Nathan Barrall); Market Research (Venkatesh Ramakrishnan (Inherited)); Marketing & Medical Affairs Interconti. (Thomas Hauck); Marketing (Brian McMaster); Marketing (Dariusz Holdys); Marketing (Elena Glukhova); Marketing (Michael Chen ?????); Marketing (Philippe Hebert (Inherited)); Marketing (Robert Mitchell); Marketing (Scott Newkirk); Marketing (Thomas Offergeld); Marketing Belgium (Marijke Maes); Marketing Benelux (Erwin Franken); Marketing Benelux (George Peters); Marketing Benelux (Patrick Reygaert); Marketing Benelux (Stefaan Schatteman [C]); Marketing Coagulation (Marino Bertapelle); Marketing Coagulation (Sharad Agrawal); Marketing Communication (Anastasia Walsh); Marketing Department (Marianna Konstantinidi (Inherited)); Marketing Division (Jean-Marc Morange (Inherited)); Marketing Division (Kyota Yamaoka - ); Marketing Division (Kyota Yamaoka ??? ?? - ???? ?????); Marketing Division Congress Group (Kyota Yamaoka - (Inherited)); Marketing Division Congress Group (Kyota Yamaoka ??? ?? - ???? ????? (Inherited)); Marketing Division Critical Care and Acquired Bleeding (Shunsuke Kuwata - ); Marketing Division Critical Care and Acquired Bleeding (Shunsuke Kuwata ??? ?? - ??? ??????); Marketing Division Hemophilia Group (Makoto Kubo); Marketing Division Hemophilia Group (Sho Sakuma ???? ? - ??? ????); Marketing Division Immunology & Rare Disease Group (Shinichiro Magome); Marketing Division SID Group (Jun Ishiwa - ); Marketing Division SID Group (Jun Ishiwa ??? ? - ??? ????); Marketing France (Benjamin BISMUTH); Marketing France (Pascale Ogel Le Guen); Marketing Franchise (marco odelli); Marketing Greece (Marianna Konstantinidi (Inherited)); Marketing In-Licensing Director (James Kretsis); Marketing Intercontinental (Timothy Akroyd); Marketing Italy (Alessandro Vasco); Marketing Italy (Giorgio Lippi); Marketing Manager (Andrew Barker); Marketing Manager (Natasha Rees); Marketing Manager (New Influenza Products) (Belinda Anderson); Marketing Manager 745 (Belinda Anderson); Marketing Manager 745 (Gina Kladis); Marketing Manager 745 (Helen Concilia (Inherited)); Marketing Nordic (Petter Olbe); Marketing Portugal (David Ventura); Marketing Product (Rebecca Turner); Marketing Product Administration (Edward Potter); Marketing Program (Michael Chen ); Marketing Program (Michael Chen ?????); Marketing Schweiz (Beatrice Guldimann); Marketing Schweiz (Christoph Schneider); Marketing Spain (Aurea Xumetra); Marketing Specialty Products (Jan Hoesche); Marketing UK (Amandine Faguer); Marketing UK (Eddie Owens (Inherited)); Marketing, China Com Ops (Claire Tang ); Marketing, China Com Ops (Claire Tang ?????); Marketing, Medical Affairs & Market Access Interconti. (Thomas Hauck); Mass Spec Research (Victor Nesati); Master Data & Country Specific (Joachim Leiss); Master Data & Country Specific (Julian Knabeschuh); Master Data (Bruce C Beatty); Master Data (Gilbert Kilchoer); Master Data Management ES (Roland Burkhard); Master Data Management Finished Product (Luana Gauer); Master Data Technology (James G Kirby); Master Data, Maintenance & Development (Julia Schimansky); Master Production Planner - Privigen (Kylie Cramer); Materials Life Cycle Management (Jennifer Chung); Materials Management (Steven E Putlack); McAllen 258 (Ben Samarripas (Inherited)); McAllen 258 (Carlos Floyd); McAllen 258 ACM Area 1 (Felipe Gonzalez); McAllen 258 ACM Area 1 (Marc Garcia); McAllen 258 ACM Area 2 (Monica Contreras); McAllen 258 QA (Esperanza Pina); McKeesport 192 (Miekal Brown); McKeesport 192 (Steven Warheit); McKeesport 192 ACM Area 1 (Aaron Bova); McKeesport 192 ACM Area 2 (Caroline Hoyer); McKeesport 192 QA (Daniel Sullenberger); McKeesport 192 QA (Katherine Parker); Mckinney 276 (Sheneka E Wilson); Mckinney 276 ACM Area 1 (Charles E Baxter IV); Mckinney 276 ACM Area 2 (Andrew Fluharty); Mckinney 276 QA (Roxann L Sandoval); Mech Main Engineer 253 (Desmond Lobo); Mechanic (Thomas Baumann); Mechanical Maintenance (Daniel Hofmann); Mechanical Maintenance (Stefan Schmid); Mechanicsburg 171 (Bernard Thompson); Mechanicsburg 171 (John L Thixton (Inherited)); Mechanicsburg 171 (Michele Purvines-Honzo (Inherited)); Mechanicsburg 171 (Olivia Chung); Mechanicsburg 171 ACM Area 1 (Theodore Rooks); Mechanicsburg 171 ACM Area 2 (Michael Crosby); Mechanicsburg 171 QA (Cyle Starner-Moore); Mechanicsburg 171 QA (Kellie N Buecker); Mechanicsburg 171 QA (Kimm Klisiewicz); Mechatroniker (Doris Nake (Inherited)); Medford 037 (Jane Herrera); Medford 037 ACM Area 1 (Hattie E Johnston); Medford 037 ACM Area 2 (Denise Scarborough); Medford 037 ACM Area 2 (Katrina D Walls); Medford 037 QA (Richard W Smith); Medical (Christina Berchtold); Medical Affair (Claire Morgan); Medical Affairs (David Crump); Medical Affairs (Giulio Barrese); Medical Affairs (Gunnar Philipp); Medical Affairs (Manzhou Hou ); Medical Affairs (Manzhou Hou ?????); Medical Affairs (Michael Haslauer); Medical Affairs (Navin Singh); Medical Affairs (Robert Chan); Medical Affairs (Sebastian Dinatale); Medical Affairs Belgium (Anne Verheyen (Inherited)); Medical Affairs Benelux (Anne Verheyen); Medical Affairs Division (Robert Chan); Medical Affairs Division Hematology and Thrombosis Group (Yasuhiro Terano ??? ?? - ??? ?????); Medical Affairs Division Hemophilia Group (Motohiro Okayasu - ); Medical Affairs Division Immunology & Rare Disease Group (Er Win Hew); Medical Affairs Division Medical Excellence and Operations (Kenji Suwa - ); Medical Affairs Division Medical Excellence and Operations (Kenji Suwa ??? ?? - ?? ????); Medical Affairs Division SID Group (Hiromi Igari ??? ?? - ??? ????); Medical Affairs EU (Damian Gilkerson (Inherited)); Medical Affairs EU (Patrick Sommerer); Medical Affairs France (Jamila Filipecki); Medical Affairs France (Nabil Moumane); Medical Affairs Germany (Paolo Bajcic); Medical Affairs Germany (Patrick Sommerer); Medical Affairs Greece (Evi Baimpou); Medical Affairs Italy (Learco Mottola); Medical Affairs Netherlands (Anne Verheyen (Inherited)); Medical Affairs Nordic (Martin Tenlen (Inherited)); Medical Affairs Nordic (Michael Gr√∂vdal); Medical Affairs Nordic (Stefan Grass); Medical Affairs Project Management (Diane Bracquart); Medical Affairs Russia (Evgeny Rudenko); Medical Affairs Russia (Maria A Lituchaya (Inherited)); Medical Affairs Spain (Jos Aznar-Salatti); Medical Affairs Spain (Jos√© Aznar-Salatti); Medical Affairs Specialty Products (Thomas Machnig); Medical Affairs UK (Alessandro Dos Santos); Medical Affairs UK (Jo Heaton); Medical Affairs of Greater China (Helen Dai ); Medical Affairs of Greater China (Helen Dai ????); Medical Affairs, Americas (Ashesh Gandhi); Medical Affairs, Canada (Ashesh Gandhi (Inherited)); Medical Affairs, Europe (Sankarasubramanian Rajaram); Medical Affairs, Influenza (Karita Ambrose); Medical Affairs, Rapivab (Ashesh Gandhi (Inherited)); Medical Communications, US (Nancy Dougherty); Medical Department Turkey (Hasan Avcu); Medical Excellence and Operations (Mitsuhiro Kuwahara); Medical Excellence and Operations (Robert Chan (Inherited)); Medical Hemophilia Group (Takeo Hirai ??? ?? - ??? ????); Medical Manager (Andrea McCracken); Medical Manager (Anthony Gargano); Medical Manager (Arturo Lopez Larios); Medical Manager (Claire Morgan); Medical Manager (DEBRA BOURKE); Medical Manager (Debra Bourke); Medical Manager (Jane Wheeler); Medical Manager (Julianne Bayliss); Medical Manager (Luis Aversa); Medical Manager 842 (Jane Leong); Medical Manager 842 (MAUREEN THAM); Medical Operations US 2 (Jeanie Chiu); Medical Operations US 3 (Jennifer Hanes); Medical Operations US 3 (John Nelson); Medical Science Liaison Canada (James Mansi); Medical Scientific Liaison (Joana Rodrigues); Medical Scientific Liaison Spain (Jenny Alvarez Nieto); Medical Services (Anna Burek); Medical Unit Medical Information (Ana Claudia Guersoni); Medical Unit ¬ñ Medical Information (Ana Claudia Guersoni); Medical Writing Quality and Publishing (Nerrie Lam); Medical Writing Therapeutic Area Lead (Ellen Krasutsky); Medical Writing (Amy Walton); Medical Writing (Bob Stumpo (Inherited)); Medical Writing (Bob Stumpo); Medical Writing (Midori Kobayashi); Medical Writing (Narelle Bramich (Inherited)); Medical Writing (Narelle Bramich); Medical Writing (Takashi Fukai ??? ?? - ??? ????); Medical Writing (Thomas Verish); Medical Writing - Therapeutic Area Lead (Daniel Wood); Medical Writing - Therapeutic Area Lead (Wolfgang Thielen); Medical Writing ¬ñ Quality and Publishing (Nerrie Lam); Medical Writing ¬ñ Therapeutic Area Lead (Ellen Krasutsky); Medical and Business Support (Antoinette Mangione); Medical and Quality Greater China (Spring Wang); Melrose Park 453 (Jesus A Castillo (Inherited)); Melrose Park 453 (Niki Wells); Melrose Park 453 (Tangerine Tingle); Melrose Park 453 ACM Area 1 (Tangerine Tingle (Inherited)); Melrose Park 453 ACM Area 1 (Tangerine Tingle); Melrose Park 453 ACM Area 2 (Tangerine Tingle (Inherited)); Melrose Park 453 ACM Area 2 (Tangerine Tingle); Melrose Park 453 QA (Andrea Bohnenberger); Melrose Park 453 QA (Kimberly L Strong-Allen (On Leave)); Melrose Park 453 QA (Kimberly L Strong-Allen); Memphis 052 (Christopher Morgan); Memphis 052 (Dorleans Alce); Memphis 052 (Trina Crayton); Memphis 052 ACM Area 1 (Dorleans Alce); Memphis 052 ACM Area 1 (Keoshia N Franklin); Memphis 052 ACM Area 2 (Laundray Carter); Memphis 052 QA (Brooke McKinney); Memphis 052 QA (Jason S Hicks); Mesquite 085 (Amber Robinson); Mesquite 085 (Brenda C Greenfield (Inherited)); Mesquite 085 (Brenda C Greenfield); Mesquite 085 ACM Area 1 (Valinda M Peters); Mesquite 085 ACM Area 2 (Christy Pagel); Mesquite 085 QA (Martin DelAngel); Method Development & Instruments (David Canen); Method Development & Instruments (Todd Canen); Method Development (Tom Barnes); Method Development Group (Anna Rozhkova); Method Development Group (Petra Sebastian); Metrics & Analytics (Christian Spuckti); Metrology (Aur√©lien H√©mon); Metrology (Salvatore DiRusso); Metrology (Union) (Jose Franceschini Mirabal (Inherited)); Metrology (Union) (Michael Memenga (Inherited)); Mgr QC Chemistry (Melissa Damino); Mgr QC Chemistry (Ying Huang); Mgr- QC Immunology (Justine Jaap); Mgr- QC Immunology (Melissa Damino); Mgr-Validation Operations (Nick Morgan); Miami 206 (Ashley Britt); Miami 206 (Ashley Downs); Miami 206 (Yennifer Fernandez); Miami 206 ACM Area 1 (Troy Davidson); Miami 206 ACM Area 2 (Barbara May); Miami 206 QA (Anitha Janardhanan); Miami 206 QA (Aris Herrera); Microbiological QC (Ivana Heckel); Microbiological QC (Nicola Di Maiuta); Microbiology (Sarah Krueger); Microbiology (Torsten Vogt); Microbiology - Enviromental Monitoring (Alison Conroy); Microbiology - Lab (Stacey Wenzel); Microbiology - Lab I (Stacey Wenzel); Microbiology - Utilities (Joshua Deabel); Microbiology - Utilities (Matthew Pocius); Microbiology 1 (MB1) (Silke Litzinger); Microbiology 2 (MB2) (Constanta Ola); Microbiology 2 (MB2) (Morten Ruch); Microbiology Lab (Annett Milling); Microbiology Lab (Breanna Steins); Microbiology Lab 1 (Breanna Steins); Microbiology Validation (Emily Neylon (Inherited)); Microbiology Validation (Natalie Gaffney); Middle East & Africa (EMEA) (Camilla Shen); Middle East & Africa (EMEA) (Mohammed Haggag); Midvale 273 (Joel Gallegos); Midvale 273 (Nicole M Loncon (Inherited)); Midvale 273 ACM Area 1 (Jason Stevens); Midvale 273 ACM Area 2 (Casey Davis); Midvale 273 QA (Madison Reid); Mikrobiology 3 (MB3) (Stephanie Achebach); Minneapolis 414 (Deepesh M Pillai); Minneapolis 414 ACM Area 1 (Abubeker M Osman); Minneapolis 414 ACM Area 2 (Ahmed N Ismail); Minneapolis 414 QA (Diego A Bastidas); Minneapolis 414 QA (Pauline M Pipho); Miramar 214 (Jessica Collins); Miramar 214 (Mary A Paul (Inherited)); Miramar 214 (Tyneka Rene); Miramar 214 ACM Area 1 (Chanique Young); Miramar 214 ACM Area 2 (GUILLERMO ORTIZ); Miramar 214 ACM Area 2 (Sang Nguyen); Miramar 214 QA (Azia Alston); Mishawaka 249 (Marisa Nyikos); Mishawaka 249 (Olivia Arend); Mishawaka 249 ACM Area 1 (Kanesha Young); Mishawaka 249 ACM Area 1 (Sydney Boyle); Mishawaka 249 ACM Area 2 (Lucette Gamble); Mishawaka 249 QA (Leah Lehtomaki); Mitarbeiter Serologisches Labor (Astrid Mather (Inherited)); Mobile 284 (Wesley Stokes); Mobile 284 ACM Area 1 (Doris Osobase); Mobile 284 ACM Area 2 (Demitrius Douglas); Mobile 284 QA (Egypt N Ali); Modul 2 - Team 2 (Marko Elias); Modul 2 - Team 2 (mit Aslantas); Modul 2 - Team 2 (√úmit Aslantas); Modul 2 - Team 3 (Timo Gr√ºn); Modul 2 - Team 4 (Maik Czyrzewski); Monitoring (Arno Karnholz); Monitoring (Roland Portmann); Monitoring - Auswertung (Dominik Mueller); Monitoring Operations / Sampling (Martin Hofer); Montgomery 105 (Trinity J Bell); Montgomery 105 (Trinity J Gamble); Montgomery 105 ACM Area 1 (Shauna M Runk); Montgomery 105 ACM Area 2 (Robyn English); Montgomery 105 QA (Tiffany D Sherman (Inherited)); Montgomery 105 QA (Whitney C Belser); Montgomery 125 (Cathrine M Shimek); Montgomery 125 ACM Area 1 (Cathrine M Shimek); Montgomery 125 ACM Area 1 (Mark Sanders); Montgomery 125 ACM Area 2 (Cathrine M Shimek (Inherited)); Montgomery 125 ACM Area 2 (Monica Miller); Montgomery 125 QA (Chrystal D Carrillo); Montgomery 125 QA (Kimberly J Sanders); Montgomery 198 (Cory Toellner (Inherited)); Montgomery 198 (Gregory Jacobs); Montgomery 198 (Justin N Gronbach); Montgomery 198 ACM Area 1 (Timike Sheehy); Montgomery 198 ACM Area 2 (Sarah Peet); Montgomery 198 QA (Christine C Le); Montgomery 198 QA (Michael Hoedle); Motive Power (Nate Thomas); Motive Power (Union) (David G Mollema (Inherited)); Motive Power (Union) (Nate Thomas (Inherited)); Motive Power (Union) 1 (Nate Thomas (Inherited)); Mt Clemens 261 (Tiffany D Peters); Mt Clemens 261 ACM Area 1 (Lavon Williams); Mt Clemens 261 ACM Area 2 (Michelle S Gibbs); Mt Clemens QA 261 (Melissa Johnson); Muncie 191 (John W Wheelock); Muncie 191 (Rob Garcia (On Leave)); Muncie 191 (Rob Garcia); Muncie 191 ACM Area 1 (Andy Umberger); Muncie 191 ACM Area 2 (Andrea S Young); Muncie 191 QA (Drewleigha B Sarver (Inherited)); Muncie 191 QA (Mary Stegall); Muncie 191 QA (Megan M Sheets); Murfreesboro 128 (Elisabeth Johnson); Murfreesboro 128 (Elisabeth Radigan); Murfreesboro 128 (Nedra N Braden); Murfreesboro 128 ACM Area 1 (Ron Rury); Murfreesboro 128 ACM Area 2 (Samantha Holmes); Murfreesboro 128 QA (Melanie J Carmack); Murfreesboro 128 QA (Michelle Young); Mustang 243 (Sam P Emrich); Mustang 243 ACM Area 1 (Jeff Saylors); Mustang 243 ACM Area 2 (Claire Joyce); Mustang 243 QA (Bill Crye); Mustang 243 QA (Fay Michelsen); N. Charleston 291 (Donte Lazarus); N. Charleston 291 ACM Area 2 (Nichole Bell); N. Charleston 291 QA (Sharon Williams); NA CommOps & Patient Services (Mike Andrews); NA Medical Affairs Operations (Sindhu Pampati); NA Therapeutic Area Lead (Coagulation) (Emmanuel Gutierrez); NA Therapeutic Area Lead (Coagulation) (Monica Richardson); NAT Lab (Kevin N Elliott); NAT Lab (Ricky R Alexander (Inherited)); NAT Lab (Ricky R Alexander); Nampa 505 (David Ensminger (Inherited)); National Accounts Sales-Summit (Mark Faulkner); National City 297 (GABRIEL MACEDO); National City 297 QA (Jessie Aquino); National Hospital Manager (Christine Folland); National Management (Denis Fedorov); National Sales (Toshio Nagata); Nebenanlagen (Andr Wermuth); Nebenanlagen (Andr√© Wermuth); Network Services (Christopher Frank); Network Services (David Mann); Network Services (Don Konemann (Inherited)); Network Services ASIAPAC (Mahesh Narayanan); Network Services Americas (Raj Selvaraj); Network Services II (Partha SARATHY); Neurology Marketing (Jason Reckner); New Center Expansion (John Brennan); New Center Launch (Amon G Samples); New Center Launch 1 (Ian Biehler); New Center Launch 2 (Nicole L Ledbetter); New Center Support 1 (Lindsay K Jameson (On Leave)); New Center Support 1 (Lindsey Jameson); New Center Support 1 (Rey Vargas); New Center Support 1 (Rob Soeun); New Center Support 1.1 (Valerie L Ward); New Center Support 1.2 (Rey Vargas); New Center Support 2 (Amy L Guardiola); New Center Support 2 (Anthony Rheuark); New Center Support 2 (Becca Thomas); New Center Support 2 (Billy R Poole); New Center Support 2 (Marissa Sunanon-Clements); New Center Support 2 .1 (Amy L Guardiola); New Center Support 2.2 (Marissa C Sunanon); New Center Support 2.4 (Becca Thomas); New Hope 163 (Jason L Kelley); New Hope 163 ACM Area 1 (DiJon Jones); New Hope 163 ACM Area 2 (Daniel D Rogge); New Hope 163 QA (Holly S Wahlberg); New Hope 163 QA (Holt Peterson (Inherited)); New Hope 163 QA (Kayla L Stueber); Newark 213 (Angela Bordelon); Newark 213 ACM Area 1 (Stephanie Morrison); Newark 213 ACM Area 2 (Angela Mancinelli); Newark 213 ACM Area 2 (Steve H Sison); Newark 213 QA (Christopher Madden); Newark 213 QA (Taylor Thomas); Niagara Falls 237 (Kimberly Reimer); Niagara Falls 237 ACM Area 1 (Paul Hedley); Niagara Falls 237 ACM Area 2 (Mary Jo Watt); Niagara Falls 237 QA (Wesley Summers); Nogales 108 (April Behnke); Nogales 108 (Brooke S Angulo); Nogales 108 ACM Area 1 (Jorge U Orozco); Nogales 108 ACM Area 2 (Rosa G Martinez); Nogales 108 ACM Area 3 (Guadalupe Ochoa (On Leave)); Nogales 108 ACM Area 3 (Rosa G Martinez); Nogales 108 QA (Cori J Collins (Inherited)); Nogales 108 QA (Martha E Lundberg); Non IVV Bact, Vir, Ster, Env Monitoring (Fenny Ng); Non IVV Potency (Keiran McLeod); Non IVV, Chemistry, Biochem, Immulab (Niki Soteriadis); Non-Process Projects (Jennifer Mastio); Norfolk 513 (Katanya Hall); Norfolk 513 QA (James Foreman); Normal 292 (Jose Patino); Normal 292 (Michael W Solomon (Inherited)); Normal 292 ACM Area 1 (William Molitor); Normal 292 ACM Area 2 (Lana Shepherd); Normal 292 QA (Jennifer Harris); Norman 020 (Laura A Post); Norman 020 (Troy Lee Wheeler); Norman 020 ACM Area 1 (Nicole Bertram); Norman 020 ACM Area 2 (Amanda Doan); Norman 020 QA (Katy L Reynolds); North American Benefits (US & Canada) (Matthew Arscott (On Leave)); North American Benefits (US & Canada) (Matthew Arscott); NorthGlenn 141 (Anna M Coulbourne); NorthGlenn 141 (Becca Charles); NorthGlenn 141 (Daniel Venn (Inherited)); NorthGlenn 141 QA (Ashley R Sewell); Northglenn 141 ACM Area 1 (Jonathan Walling); Northglenn 141 ACM Area 2 (Carlos M Valenzuela); O''Fallon 224 (Lori Carlson (Inherited)); O''Fallon 224 (Tara R Spates Tucker); O''Fallon 224 ACM Area 1 (Jahleia Chieves); O''Fallon 224 ACM Area 2 (Andrea M Catchup); O''Fallon 224 QA (Lori Carlson (Inherited)); O''Fallon 224 QA (Marijo Monroe); O''Fallon 224 QA (Tori Chancellor); OE/BPM LVP (Cheryl King); OE/BPM LVP (Fabrice Gribon (Inherited)); OE/BPM LVP (Stephen Marlow (Inherited)); OF EU (incl. TR) (Sabine H√§mel (Inherited)); OF ROW (incl. ICO) (Anna-Karina Muth); OSI (Jennifer Krupka); OSII (Michael Moses); OSIII (Wilfried Wormsb√§cher); OSIV (Tina W√ºrfel); OTO Programs (Paul Bidez); Oak Park 041 (Latosha Y Floyd (Inherited)); Oak Park 041 (Sherlene Killebrew); Oak Park 041 ACM Area 1 (Sandra Erdman); Oak Park 041 ACM Area 2 (Patrick J Tribble); Oak Park 041 QA (Jessica J Cobey (On Leave)); Oak Park 041 QA (Jessica J Cobey); Ocala 290 (Althea Council); Ocala 290 QA (Jean O''Neal); Oklahoma City 422 (Johnnie K Phares); Oklahoma City 422 ACM Area 1 (Clement C Uzoma); Oklahoma City 422 ACM Area 2 (Ella L Boyd); Oklahoma City 422 QA (Billie E Gilliland); Oklahoma City 422 QA (Hannah E Todroff); Olympia 517 (Trevor Case); Omaha 421 (Christie G Edmisten); Omaha 421 ACM Area 1 (Kristen A Marteny); Omaha 421 ACM Area 2 (Sachin Bhandari); Omaha 421 QA (Christopher Trindle); Omaha 421 QA (Larinda N Johnson); Open Systems (Juerg Clavadetscher (Inherited)); Open Systems (Kapil Taneja); Operational Business Development 1 (Amon G Samples); Operational Business Development 2 (Nicole L Ledbetter); Operational Business Development 9.2 (Laura A Allen); Operational Excellence & Data Analytics (Jason Woolley); Operational Excellence (Claus Peihs (Inherited)); Operational Excellence (Gil Rochat); Operational Excellence (Jan-Christopher Gerlach); Operational Excellence (Jewel Reid); Operational Excellence (Monika Goretzki); Operational Excellence (Murat Dalar (Inherited)); Operational Excellence (Philipp Jeker); Operational Excellence LVP (Gordon Pearson); Operational Prozess 1 (Arnd Vollmerhausen); Operational Readiness Phoenix (Rainer Frank); Operational Services (Clare Schwarz); Operational Services Maintenance & Utilities (Michael Kocher); Operational Support (Laura A Allen); Operations & PV Systems (Shinya Takagawa); Operations (Camila Silva Alvarado); Operations Global Engineering (Gregory Taylor); Operations Global Engineering Projects (Daniel Rouse); Operations Global Sourcing (Trevor Reay); Operations HS Business Integration (Thomas Jede); Operations HS EHSS & Risk (Allan Wise); Operations HS EHSS & Risk (Bob Rezek); Operations HS EHSS & Risk (Lynette Hodgden (Inherited)); Operations HS EHSS & Risk (Lynette Hodgden); Operations HS Engineering (Daniel Rouse); Operations HS Engineering (Gregory Taylor (Inherited)); Operations HS Engineering (Rodney Lam); Operations HS Engineering (Tom Gehrin); Operations HS Engineering Automation (Charles Guy Sorrell Jr.); Operations HS Engineering Automation (James Dion); Operations HS Engineering Fill Finish Equipment (Jeffrey Stowe); Operations HS Engineering Process (James Flockhart); Operations HS Engineering Process (Jason VanderPloeg); Operations HS Engineering Process (Jeffrey Stowe); Operations HS Engineering Project (Daniel Rouse); Operations HS Engineering Project (Eddie Taylor); Operations HS MS&T Process Sciences (Nicholas Mauro); Operations HS MS&T Process Sciences (Richard Hughes); Operations HS MS&T Tech Services (Jason Allaband); Operations HS MS&T Tech Services (Nicholas Mauro); Operations HS MS&T Tech Services Fill/Finish (Kevin McMahon); Operations HS MS&T Tech Transfer (Baron Fulk); Operations HS MS&T Tech Transfer (Tsu-shun Lee (Inherited)); Operations HS MS&T Tech Transfer (Wallace Brisson); Operations HS Maintenance (Jamie Blankenship); Operations HS Maintenance (Leon Montgomery); Operations HS Maintenance Facilities (Bruce A Buckoski); Operations HS Maintenance Facilities (Bruce Buckoski); Operations HS Maintenance Instrumentation (Jamie Blankenship); Operations HS Maintenance Metrology (Michael Mikolajczak); Operations HS Maintenance Process (Ricky Norris (On Leave)); Operations HS Maintenance Process (Ricky Norris); Operations HS Maintenance Support (Daniel Sarvis); Operations HS Maintenance Support (Richard Oliver); Operations HS Maintenance Utilities (Scott Curtis Menut); Operations HS Maintenance Utilities (Scott Menut); Operations HS Manufacturing (Irina Staxen); Operations HS Manufacturing Bulk (Chad M Salisbury (Inherited)); Operations HS Manufacturing Bulk (Eric Hoffman); Operations HS Manufacturing Bulk (Jonathan Kegerise); Operations HS Manufacturing Bulk Downstream (Eric P Hoffman); Operations HS Manufacturing Bulk Downstream (Gordon Dunsmore); Operations HS Manufacturing Bulk Downstream - A Shift (Joseph Chapman); Operations HS Manufacturing Bulk Downstream - B Shift (Evan Burke); Operations HS Manufacturing Bulk Downstream - B Shift (LaToya Jaqui McDuffie); Operations HS Manufacturing Bulk Downstream - C Shift (Joseph Chapman); Operations HS Manufacturing Bulk Downstream - C Shift (Samantha Heyer); Operations HS Manufacturing Bulk Downstream - D Shift (Demitra Earls); Operations HS Manufacturing Bulk Downstream - D Shift (Evan Burke); Operations HS Manufacturing Bulk Support (Elie Chiha); Operations HS Manufacturing Bulk Support - A Shift (Craig Steimle); Operations HS Manufacturing Bulk Support - B Shift (Stephen Blair Donaldson); Operations HS Manufacturing Bulk Support - B Shift (Stephen Donaldson); Operations HS Manufacturing Bulk Support - C Shift (Jonathan Adams); Operations HS Manufacturing Bulk Support - D Shift (Kevin Anthony Smith); Operations HS Manufacturing Bulk Support - D Shift (Kevin Smith); Operations HS Manufacturing Bulk Support Materials (Andrew Passarotti); Operations HS Manufacturing Bulk Support Materials (Elie Chiha (Inherited)); Operations HS Manufacturing Bulk Support Materials (George Barrett); Operations HS Manufacturing Bulk Upstream (Gordon Dunsmore); Operations HS Manufacturing Bulk Upstream (Gordon Kennedy Dunsmore); Operations HS Manufacturing Bulk Upstream (Jeremy Smock); Operations HS Manufacturing Bulk Upstream - A Shift (Billy Trask); Operations HS Manufacturing Bulk Upstream - A Shift (Jeremy Smock); Operations HS Manufacturing Bulk Upstream - B Shift (Chris Austin); Operations HS Manufacturing Bulk Upstream - B Shift (Latisha Blair Tucker Kiker); Operations HS Manufacturing Bulk Upstream - C Shift (Chris Austin); Operations HS Manufacturing Bulk Upstream - C Shift (Maxwell Pote); Operations HS Manufacturing Bulk Upstream - D Shift (Jeremy Smock (Inherited)); Operations HS Manufacturing Bulk Upstream - D Shift (Kevin Donnell Thomas); Operations HS Manufacturing Bulk Upstream - D Shift (Kevin Thomas); Operations HS Manufacturing Fill & Finish (Philip Troughton); Operations HS Manufacturing Fill & Finish (Rodney Lam); Operations HS Manufacturing Fill & Finish - A Shift (Aseptic) (LaToya McDuffie); Operations HS Manufacturing Fill & Finish - A Shift (JOSE SERRANO); Operations HS Manufacturing Fill & Finish - A Shift (Jose Serrano); Operations HS Manufacturing Fill & Finish - A Shift (Non-Aseptic) (Todd Brinkley); Operations HS Manufacturing Fill & Finish - B Shift (Aseptic) (Heather Johnson); Operations HS Manufacturing Fill & Finish - B Shift (Heather Johnson); Operations HS Manufacturing Fill & Finish - B Shift (Non-Aseptic) (Reginald Cox); Operations HS Manufacturing Fill & Finish - C Shift (Aseptic) (William Holder); Operations HS Manufacturing Fill & Finish - C Shift (Keith Bridges); Operations HS Manufacturing Fill & Finish - C Shift (Non-Aseptic) (Keith Bridges (On Leave)); Operations HS Manufacturing Fill & Finish - C Shift (Non-Aseptic) (Keith Bridges); Operations HS Manufacturing Fill & Finish - C Shift (Timothy Hampton); Operations HS Manufacturing Fill & Finish - D Shift (Aseptic) (Jamie Page); Operations HS Manufacturing Fill & Finish - D Shift (Branch Chandler Cannon); Operations HS Manufacturing Fill & Finish - D Shift (Non-Aseptic) (Ivan Morris); Operations HS Manufacturing Fill & Finish Expansion (Aseptic); Operations HS Manufacturing Fill & Finish Expansion (Aseptic) (Branch Cannon); Operations HS Manufacturing Fill & Finish Expansion (Non Aseptic) (Zachary Oakley); Operations HS Manufacturing Fill & Finish Expansion (Rodney Lam); Operations HS Manufacturing Fill & Finish Ops Aseptic (Brian Kennedy); Operations HS Manufacturing Fill & Finish Ops Non Aseptic (Steve Gaspar); Operations HS Manufacturing Fill & Finish Ops ¬ñ Aseptic (Brian Kennedy (On Leave)); Operations HS Manufacturing Fill & Finish Ops ¬ñ Non Aseptic (Steve Gaspar); Operations HS Manufacturing Finite Scheduling (Andrew Covington); Operations HS Manufacturing Finite Scheduling (David Tye); Operations HS Manufacturing Operational Excellence (Don Miller); Operations HS Manufacturing Production Systems (Angel Colucci); Operations HS Manufacturing Production Systems (Angel L Colucci); Operations HS Manufacturing Production Systems (Frederick Goerke); Operations HS Manufacturing Sciences & Technology (Baron Fulk); Operations HS Manufacturing Sciences & Technology (Irina Staxen (Inherited)); Operations HS Manufacturing Sciences & Technology (Jessica Mercer); Operations HS Manufacturing Sciences & Technology (Tsu-shun Lee); Operations HS Manufacturing Small Scale (Ashley Greeney); Operations HS Strategy, Alliance Management & PMO (John Anderson (Inherited)); Operations HS Strategy, Alliance Management & PMO (Raj Kapadia); Operations HS Strategy, Alliance Management & PMO (Vernon Horner); Operations HS Supply Chain & Strategy (Mayumi Buckoski); Operations HS Supply Chain (David Tye); Operations HS Supply Chain Planning (David Tye); Operations HS Supply Chain Warehouse (Nicholas Brown); Operations HS Supply Chain Warehouse (Willie Lam); Operations HS Supply Chain Warehouse - Manufacturing & TD (Christopher Stone); Operations HS Viral Pilot Plant (Claudia Johnson); Operations Holly Springs (John Anderson); Operations Lead ¬ñ Project Banksia (Lisa Lamb); Operations Liverpool (Laura O''Brien); Operations Planning Manager (Damien Nguyen); Operations Procurement (John Molyneux); Operations Procurement Operations (Donald Lacombe); Operations Procurement Operations (John Molyneux (Inherited)); Operations Procurement Operations (Michele Morris); Operations Support (Annette Feussner); Operations Support (Nicole Kay); Operations Support (Uwe Kalina) (Uwe Kalina); Operations Support R&D (Michele Himmelspach); Operative Master Data Management (Maike Pollaschek (Inherited)); Operative Master Data Management (Maike Pollaschek); Ops Capital Portfolio Management (Stefano Siviero); Ops Plasma Support (Walter Aebersold); Orange City 155 (ANNETTE NELSON); Orange City 155 (Faye-Lynn Deissinger); Orange City 155 ACM Area 1 (Nathan J Herchenroder); Orange City 155 ACM Area 2 (BRIAN LOFTUS); Orange City 155 ACM Area 2 (Jenna Smith); Orange City 155 QA (Christina M Kokoszka); Orange City 155 QA (Cindy Romero-Estrada); Orange City 155 QA (Kyle M Lehrke (Inherited)); Organisation / Koordination Diverse (Eva Herzog (Inherited)); Organization Transformation (Andrea Douglas); Organization Transformation (Tod Marks); Organizational Development (Kristen Krebs); Organizational Development (Rachel Day); Orlando 144 (Isabella Bishop); Orlando 144 ACM Area 1 (Ron Fischer); Orlando 144 ACM Area 2 (Trinica D Boyd); Orlando 144 QA (Brittany Woodward); Orlando 144 QA (DeQuandra Belton); Orlando 144 QA (Tiffany D Sherman (Inherited)); Orlando 511 (Jessica Collins); PABS (Uwe Kalina); PABS I (Helene Lang); PABS I (Sarah Plum); PABS I+II (Annette Feussner); PABS II (Christina Kober); PABS II (Maria Hauswald); PABS III (Aaron Hahn (On Leave)); PABS III (Aaron Hahn); PABS III (Stefan Baumeister); PACE (Christian Sonderegger) (53003164); PACE (Markus Decher); PACE APAC Deployment - Organisation and Change Management (Faye Papakalodoukas); PACE ATR (Andrew Croft (Inherited)); PACE ATR (Michael Kochanski); PACE ATR Payment Management (Dennis Martin); PACE Americas Deployment (Shane Kennedy); PACE Asia Pacific (Andrew Croft (Inherited)); PACE Asia Pacific (Metani Rooms); PACE Commercial Deployment (Peter K Tadros); PACE Coordination BRN (Boris Kaiser (Inherited)); PACE Coordination BRN (Christian Sonderegger); PACE ES (Marco Maeder); PACE General Accounting (Eric Fay); PACE Global Distribution (Catherine Gil); PACE Metrics & Analytics (Christian Spuckti); PACE OTC (Kian Hartono); PACE PM Bern (Oliver Bigler); PACE PMO (Tod Marks); PACE PMO (Tod Marks) (Tod Marks); PACE PTI (Wolfgang Schneider); PACE Program (Andrew Croft (Inherited)); PACE S2P (Andrew Croft (Inherited)); PACE S2P (Simon Haemmerli); PACE S2P (TR Kannan); PACE Site Deployment (Kelly L Konemann); PACE deployment Bern Lengnau (Boris Kaiser); PACE sustain (Linda Carducci (Inherited)); PACE sustain (Markus Decher); PAI Dokumentation (Andre Hullmann (Inherited)); PAI Dokumentation (Carsten Meyer (Inherited)); PAI Endfiltration Albumin (Achim Ludwig (Inherited)); PAI Endfiltration Albumin (Achim Ludwig); PAI Fermentation (Tobias Kling); PAI Koordination (Andre Hullmann (Inherited)); PAI Koordination (Bernd Prior (Inherited)); PAI Koordination (Carsten Meyer (Inherited)); PAI Nebenbetriebe (Mario Kornemann (Inherited)); PAI Pasteurisierung (Mario Kornemann (Inherited)); PAI Produktion 1 / Nebenanlagen (Mario Kornemann); PAI Produktion Albumin (Andre Hullmann); PAI Produktion Immunglobuline/ Nebenanl. (Bernd Prior); PAI Produktion PCF H67 (Roger Leukel); PAI Produktion Rekombinante Proteine (Andreas Berting); PAI Produktion Rekombinante Proteine (Carsten Meyer); PAI Prozessmanager (Barbara Kalina (Inherited)); PAI Prozessmanager (Wilfried Freudenberg (Inherited)); PAI Rekombinante Proteine GMP (Carsten Meyer (Inherited)); PAI Subfraktionierung (Mario Kornemann (Inherited)); PAI Systemuntersttzung SAP/MES (Wilfried Freudenberg (Inherited)); PAI Systemunterst√ºtzung SAP/MES (Barbara Kalina (Inherited)); PAI Systemunterst√ºtzung SAP/MES (Wilfried Freudenberg (Inherited)); PAI Training & GMP (Barbara Kalina (Inherited)); PAI Training & GMP (Wilfried Freudenberg (Inherited)); PAI Ultrafiltration / Endfiltration (Alfons Hck (Inherited)); PAI Ultrafiltration / Endfiltration (Alfons H√∂ck (Inherited)); PAI Ultrafiltration Albumin (Martin Doruch (Inherited)); PAI Ultrafiltration Albumin (Martin Doruch); PAI Vorbehandlung / Support (Hans Becker); PAI Vorbehandlung 1 / Support (Hans Becker (Inherited)); PAI Vorbehandlung 2 (Hans Becker (Inherited)); PAI Vorbehandlung 3 (Andreas Koch); PAI Wgekabine (Mario Kornemann (Inherited)); PAI W√§gekabine (Mario Kornemann (Inherited)); PBS Basisfraktionierung & Support (Stefan Vaupel); PBS Basisfraktionierung (Bernhard Tribensky); PBS Basisfraktionierung (Klaus Wilhelm); PBS Planung & Dokumentation (Claus Baudszus); PBS Schichtgruppe 1 (Mario Lorch); PBS Schichtgruppe 2 (Bjrn Klingelhfer); PBS Schichtgruppe 2 (Bj√∂rn Klingelh√∂fer); PBS Schichtgruppe 3 (Andreas Klein); PBS Schichtgruppe 4 (Andreas Kraus); PBS Schichtgruppe 5 (Bernd Hofmann); PBS Schichtgruppe 6 (Bernd Teske); PCS & MES (Frank Mastellone (Inherited)); PCS & MES (Magda Stavaroiu); PCS & MES (Magda-Elena Stavaroiu); PCS (Reto Kamber); PCS Maintenance (Markus Klsle); PCS Maintenance (Markus Kl√§sle); PCS Maintenance (Reto Camastral); PD Projects & Technology Transfer (Steven Honey); PE - Central Region (Gangjian Chen ); PE - Central Region (Gangjian Chen ?????); PE - Central Region 1 (Qin Li ); PE - Central Region 1 (Qin Li ????); PE - Central Region 2 (Gangjian Chen ????? (Inherited)); PE - Central Region 2 (Shu Min ); PE - Central Region 2 (Shu Min ????); PE - DTP, China (Cissy Xi ); PE - DTP, China (Cissy Xi ????); PE - East Region (Zhen Shen ); PE - East Region (Zhen Shen ????); PE - East Region 1 (Xiao Ma ); PE - East Region 1 (Xiao Ma ????); PE - East Region 2 (Guo Jie Yu ?????); PE - East Region 2 (Guojie Yu ); PE - East Region 2 (Guojie Yu ?????); PE - East Region 3 (Liang Xu ); PE - East Region 3 (Liang Xu ????); PE - North Region (David Chen ???? (Inherited)); PE - North Region (Zhixia Wang ); PE - North Region (Zhixia Wang ?????); PE - North Region 1 (Yajuan Wen ); PE - North Region 1 (Yajuan Wen ?????); PE - North Region 3 (Qinghua Zhao ?????); PE - North Region 4 (Hongbin Wang ?????); PE - North Region 4 (Tracy Yu ); PE - North Region 4 (Tracy Yu ?????); PE - South Region (Sam Shang ); PE - South Region (Sam Shang ?????); PE - South Region 1 (Tony Lee ); PE - South Region 1 (Tony Lee ?????); PE - South Region 2 (Ice Li ); PE - South Region 2 (Ice Li ?????); PE - South Region 3 (Yi-yu Zhang ); PE - South Region 3 (Yi-yu Zhang ?????); PE - South Region 4 (Michelle Li ); PE - South Region 4 (Michelle Li ?????); PE - South Region 5 (Gary Chen ); PE - South Region 5 (Gary Chen ?????); PE - West Region (Alex Kong ); PE - West Region (Alex Kong ????); PE - West Region (David Chen ???? (Inherited)); PE - West Region (Shengyan Qiu ?????); PE - West Region 1 (Hao Chen ); PE - West Region 1 (Hao Chen ????); PE - West Region 2 (Jack Liao ); PE - West Region 2 (Jack Liao ????); PE - West Region 3 (Shen Jie ); PE - West Region 3 (Shen Jie ????); PE - West Region 3 (Shengyan Qiu ????? (Inherited)); PE-Central Region 3 (Julia Zhu ); PE-Central Region 3 (Julia Zhu ????); PGI Bulkproduktion M1M2 (Julian Lampmann); PGI Bulkproduktion M1M2 (Sebastian Feisel); PGI Documentation (Patrick Brusius); PGI Koordination (Heiko Schild (Inherited)); PGI Produktion Beriate (Heiko Schild); PGP Bulkproduktion 1 FIX (Philipp Hergenrder); PGP Bulkproduktion 1 FIX (Philipp Hergenr√∂der); PGP Bulkproduktion 1 FIX (Steffen Mbius); PGP Bulkproduktion 1 FIX (Steffen M√∂bius); PGP Bulkproduktion 1 FVIII-B (Gerhard Burk (Inherited)); PGP Bulkproduktion 1 FVIII-B (Gerhard Burk); PGP Bulkproduktion 1 FVIII-H (Henrik Tutsch (Inherited)); PGP Bulkproduktion 1 FVIII-H (Peter Diehl (Inherited)); PGP Bulkproduktion 1 FVIII-H (Peter Diehl); PGP Bulkproduktion 2 FIX (Sebastian Feisel (Inherited)); PGP Bulkproduktion 2 FIX (Timo Mudersbach (Inherited)); PGP Bulkproduktion 2 FIX (Timo Mudersbach); PGP Bulkproduktion 2 FIX (Timo Mudersbach) (Timo Mudersbach); PGP Bulkproduktion 2 FVIII-B (Reiner Bamberger (Inherited)); PGP Bulkproduktion 2 FVIII-B (Reiner Bamberger); PGP Bulkproduktion 2 FVIII-H (Ernst Dittmar (Inherited)); PGP Bulkproduktion 2 FVIII-H (Ernst Dittmar); PGP Bulkproduktion 3 FVIII-B (Frank Burich (Inherited)); PGP Bulkproduktion 3 FVIII-B (Frank Burich); PGP Bulkproduktion 3 FVIII-B (Frank B√§urich (Inherited)); PGP Bulkproduktion 3 FVIII-B (Frank B√§urich); PGP Bulkproduktion 3 FVIII-H (Jrgen Ungemach (Inherited)); PGP Bulkproduktion 3 FVIII-H (Jrgen Ungemach); PGP Bulkproduktion 3 FVIII-H (J√ºrgen Ungemach (Inherited)); PGP Bulkproduktion 3 FVIII-H (J√ºrgen Ungemach); PGP Bulkproduktion 4 FIX (Steffen Mbius); PGP Bulkproduktion 4 FIX (Steffen M√∂bius); PGP Dokumentation (Patrick Brusius); PGP Koordination FIX (Karl-Heinz Wenz (Inherited)); PGP Koordination FIX (Karl-Heinz Wenz (On Leave) (Inherited)); PGP Koordination FVIII-B (Heiko Schild (Inherited)); PGP Modul 2 - Team 1 (Henning Dittmar); PGP Modul 2 - Team 2 (mit Aslantas (Inherited)); PGP Modul 2 - Team 2 (√úmit Aslantas (Inherited)); PGP Modul 2 - Team 3 (Timo Gr√ºn (Inherited)); PGP Produktion Beriate (Heiko Schild); PGP Produktion Faktor IX (Karl-Heinz Wenz (On Leave)); PGP Produktion Faktor IX (Karl-Heinz Wenz); PGP Produktion Haemate / Humate (Henrik Tutsch); PGP Produktion Haemate / Humate (Peter G√ºttner); PGP Prozessmanager (Barbara Kalina (Inherited)); PGP Prozessmanager (Horst Boeder (Inherited)); PGP Pufferherstellung FVIII-B (Bernd Grau (Inherited)); PGP Tagschicht FIX (Ewald Burk); PGP Tagschicht FIX (Timo Mudersbach); PGP Vorbehandlung FVIII-H (Sascha Ludwig (Inherited)); PGP Vorbehandlung FVIII-H (Sascha Ludwig); PIU (Alan Collins); PIU (Christine Fitzpatrick); PIU Team (Christine Riley); PIU/UM Engineering (Peter White); PL-Quality (Carmen Althainz); PL-Quality (Mehmet G√ºm√ºs); PM Hematology and Thrombosis TA (Joanne Uhl (Inherited)); PM Hematology and Thrombosis TA (Mark Kleinman); PMR Dokumentation (Wilfried Freudenberg (Inherited)); PMS (Hideo Usui - ); PMS (Hideo Usui ??? ?? - ??? ????); PMS (Masashi Nakayama); PNS (Ibtisam Saeed); PNS Manufacturing (Hosmer Perez); PPD / Technical Operations Marburg (Michael Moses); PPD Bern Admin (Eliane Bossart); PPD BioAnalytical Science (Patrick Schuetz); PPD CMC Bern (Philipp Angerer); PPD Impurity & Data Mngt (Patricia Lieby); PPD Investigations (Thomas Kilchoer); PPD Investigations 2 (Tino Boss); PPD Investigations I (Janine Bash); PPD Process Development - R&D (Hal Braley); PPD Process Development - R&D (Kathryn Scott); PPD Process Development - R&D (Yvette Citrine); PPD Process Development 2 (Ibrahim El Menyawi); PPD Process Development 2 Group 1 (Eva Blatter); PPD Process Development 2 Group 2 (Robin Das Gupta); PPD Process Development 2 Group 3 (Adrian Alder); PPD R & D Bioanalytics BMW (Mark Bailey); PPD R&D KOP (Kenneth Walsh); PPD R&D Marburg (Martin Vey); PPD Technical Operations (Birgit Unterweger); PPD Technical Operations (Michele Himmelspach); PPD, Process Development (Eric Zhu); PPM (Roberta Duncan (Inherited)); PPM Research (Heather Davis); PPM Technical (Heather Davis); PQG Look Back / PDI (Patricia Herrmann); PQG Plasma Control (Iryna Zabolotna); PRP Support (Heinz-J√ºrgen Merkel); PRP Vorbehandlung (Thorsten Theis); PRP GMP-Koordination (Heinz-Jrgen Merkel); PRP GMP-Koordination (Heinz-J√ºrgen Merkel (Inherited)); PRP GMP-Koordination (Heinz-J√ºrgen Merkel); PRP Logistik (Robert Schfer); PRP Logistik (Robert Sch√§fer); PRP Lsungsherstellung & Wiegebereich (Robert Schfer (Inherited)); PRP L√∂sungsherstellung & Wiegebereich (Robert Sch√§fer (Inherited)); PRP Support (Yanina Broadnax); PRP Support 1 (Steffen Ramb); PRP Vorbehandlung (Thorsten Theis (Inherited)); PRP Vorbehandlung (Thorsten Theis); PRP Vorbehandlung 1 (David Grb); PRP Vorbehandlung 1 (David Gr√§b); PRP Vorbehandlung 1 (Fabian Feisel); PRP Wareneingang (Evelin Kaiser-Felsmann); PRP Wareneingang (Yanina Broadnax); PRP Wareneingang Team 1 (Sebastian Siebert); PRP W√§gebereich (Heinz-J√ºrgen Merkel (Inherited)); PTC (Severin Thierau); PTH Abfllung 1 (Alexander Muth (Inherited)); PTH Abfllung 2 (Michael Kroker (Inherited)); PTH Abfllung 2 (Michael Kroker); PTH Abfllung 3 (Nils Rder); PTH Abfllung 4 (Bjrn Schmidt); PTH Abf√ºllung 1 (Pascal Nau (Inherited)); PTH Abf√ºllung 2 (Michael Kroker (Inherited)); PTH Abf√ºllung 1 (Lars Nau); PTH Abf√ºllung 1 (Pascal Nau (Inherited)); PTH Abf√ºllung 1 (Pascal Nau); PTH Abf√ºllung 2 (Michael Kroker (Inherited)); PTH Abf√ºllung 2 (Michael Kroker); PTH Abf√ºllung 3 (Alexander Jegel); PTH Abf√ºllung 3 (Rainer Lepper (Inherited)); PTH Abf√ºllung 3 (Rainer Lepper); PTH Abf√ºllung 4 (Bj√∂rn Schmidt); PTH Abf√ºllung 4 (Heiko Steinbach); PTH Albumin & Visual Inspection (Jrg Nickel); PTH Albumin & Visual Inspection (J√∂rg Nickel); PTH GMP Coordination (Matthias Klein (Inherited)); PTH GMP-Coordination (Jrg Nickel (Inherited)); PTH GMP-Coordination (J√∂rg Nickel (Inherited)); PTH Optische Kontrolle 1 H069 (Bernd Balzer (Inherited)); PTH Optische Kontrolle 1 H069 (Bernd Balzer); PTH Optische Kontrolle 2 H069 (J√∂rg Nickel (Inherited)); PTH Optische Kontrolle 2 H069 (Valentina Kufeld (Inherited)); PTH Optische Kontrolle 2 H069 (Valentina Kufeld); PTH Optische Kontrolle 3 H069 (Jrg Nickel (Inherited)); PTH Optische Kontrolle 3 H069 (J√∂rg Nickel (Inherited)); PTH Optische Kontrolle 3 H069 (Meike D√∂rbecker (Inherited)); PTH Optische Kontrolle 3 H069 (Meike D√∂rbecker); PTH Processmgr Pretreatment Refludan&Bul (Matthias Klein (Inherited)); PTH Servicefunktion (Sabine Fischer); PTH Teilfertigung H069 (Alexander Muth); PTH Teilfertigung H069 (Daniel Schneider); PTH Teilfertigung H069 (Tim Westphal); PTH Teilfertigung Koordination (Daniel Schneider (Inherited)); PTH Teilfertigung Koordination (Tim Westphal (Inherited)); PTH Vorbehandlung & Support (Peter Koch); PTH Vorbehandlung 3 / Abfllung 3 H069 (Uwe Fritsch); PTH Vorbehandlung 3 / Abf√ºllung 3 H069 (Uwe Fritsch); PTH Vorbehandlung&Support (Peter Koch (Inherited)); PTI EM Lead (Susan Clough); PTM Abfllung M305 (Tim Westphal); PTM Abf√ºllung M305 (J√∂rg Dieterich); PTM Abf√ºllung M305 (Tim Westphal); PTM Betriebsservicefunktion M305 (Jennifer Hilscher (Inherited)); PTM Betriebsservicefunktion M305 (Jennifer Hilscher); PTM Betriebsservicefunktion M305 (Reinhard Grn (Inherited)); PTM Betriebsservicefunktion M305 (Reinhard Grn); PTM Betriebsservicefunktion M305 (Reinhard Gr√ºn (Inherited)); PTM Betriebsservicefunktion M305 (Reinhard Gr√ºn); PTM GMP Koordinator (Esther Seidel (Inherited)); PTM GT-Anlage M305 (Peter Dersch (Inherited)); PTM Optische Kontrolle M305 (Alexandra G√ºnther (Inherited)); PTM Optische Kontrolle M305 (Elke Stauss (Inherited)); PTM Optische Kontrolle M305 (Elke Stauss); PTM Projekte / Technik (Esther Seidel (Inherited)); PTM Prozessmanager (Esther Seidel (Inherited)); PTM Teilfertigung M305 (Alexandra Gnther); PTM Teilfertigung M305 (Alexandra G√ºnther); PTM Visuelle Kontrolle (Julia Dworschak); PTM Vorbehandlung M305 (Eckhard Brickum (Inherited)); PV Agreements Lead (Andrea Kergl); PV Excellence and Compliance (Gina Granada); PV Quality Management Lead (Gina Granada); PV Safety (Tomoko Yanagawa); PWI Chromatographie & Fllung H68 (Dietmar Grebe); PWI Chromatographie & F√§llung H68 (Dietmar Grebe); PWI Faktor I / XIII Schichtgruppe 7 (Bj√∂rn Bartelme√ü); PWI Faktor I / XIII Schichtgruppe 7 (Horst Schneider); PWI Faktoren I & XIII (Jochen Khler); PWI Faktoren I & XIII (Jochen K√∂hler); PWI GMP-Koordination (Heinz-J√ºrgen Merkel (Inherited)); PWI Inhibitoren (Wilfried Happel); PWI Koordination (Jochen Khler (Inherited)); PWI Koordination (Jochen K√∂hler (Inherited)); PWI Koordination (Wilfried Happel (Inherited)); PWI Logistik (Robert Sch√§fer); PWI L√∂sungsherstellung & Wiegebereich (Robert Sch√§fer (Inherited)); PWI Regeneration & Vorbehandlung H68 (Marc Wellner); PWI Support (Heinz-J√ºrgen Merkel); PWI Tagdienst (Roger Ochs); PWI Teilbereich (Christoph Bernitt); PWI Training & GMP (Jochen Khler (Inherited)); PWI Training & GMP (Jochen K√∂hler (Inherited)); PWI Training & GMP (Wilfried Happel (Inherited)); PWI Vorbehandlung (Thorsten Theis (Inherited)); PWI Vorbehandlung (Thorsten Theis); PWI Wareneingang (Evelin Kaiser-Felsmann); PWI W√§gebereich (Heinz-J√ºrgen Merkel (Inherited)); PWI-H68-Schicht (Dietmar Grebe (Inherited)); PWI-H68-Schicht (Marc Wellner (Inherited)); PWI-H68-Tagdienst (Dietmar Grebe (Inherited)); PWI-H68-Tagdienst (Marc Wellner (Inherited)); PWI-H68-Tagdienst (Marc Wellner); PWI-M305 (Manuel Lotz); PWI-M305 Schicht 1 (Fabian Cybulski); PWI-M305 Schicht 2 (Florian Scherer (Inherited)); PWI-M305 Schicht 2 (Florian Scherer); PWI-M305 Schicht 3 (Fynn Krieger); PWI-M305 Tagdienst (Robert H√∂hne) (Robert H√∂hne); Packaging & Supply (Claus Peihs); Packaging & Supply (Helmut Robert Euler (Inherited)); Packaging & Supply (Viktor Krecker); Packaging & WHS (Armin Stcklin); Packaging & WHS (Stefan Kaelin); Packaging (Andrew Baxter); Packaging (Brian T White); Packaging (Bruno Baeriswyl); Packaging (Kate (Shortall) Lamont); Packaging (Kate Lamont); Packaging (Kate Shortall); Packaging (Othmar Geisser); Packaging (Pasquale Carestia (Inherited)); Packaging (Thomas Royal); Packaging (Union) (Brian T White); Packaging (Union) (Pasquale Carestia (Inherited)); Packaging (Union) (Thomas Royal); Packaging Day Shift 2/6 (Tanja Maegert); Packaging Day Shift 4/5 (Jelka Golob); Packaging Design (Josue Stoll); Packaging Development (Claude Morf); Packaging Development (Markus Maus); Packaging Diverse (Jrg Dieterich (Inherited)); Packaging Diverse (J√∂rg Dieterich (Inherited)); Packaging Evening Shift 1/3/6 (Shabbir Ahmad Sheikh); Packaging Evening Shift 2/4/5 (Nebojsa Milosevic); Packaging I (Pasquale Carestia (Inherited)); Packaging Line 1 (Daniel Fankhauser); Packaging Line 1 (Marianne Steuri); Packaging Line 2,3,7 (Jelka Golob); Packaging Line 4 (Nebojsa Milosevic); Packaging Line 5 (Bashkim Redzepi); Packaging Line 6 (Tanja Maegert); Packaging Materials Testing & Release (Dominik Corbet); Packaging Operations (Claus Peihs (Inherited)); Packaging Operations (Jrg Dieterich); Packaging Operations (J√∂rg Dieterich); Packaging Operations (Murat Dalar); Packaging Teams (Bernd Baum); Packaging and Inspection (David Hartley (Inherited)); Packaging and Inspection (Joey Tranquilino); Packaging, Design & Artwork (Metin Yilmaz); Packing Material Control PMC (Dominic Wuest); Packing Material Control PMC (Nicole Moser); Packing Material Control PMC 2 (Denise Engimann); Packing Team Leader (Adam Heath); Packing Team Leader (Beau Williams); Packing Team Leader (David Nguyen); Packing Team Leader 451 (Robert De Santis); Pain Business Unit Director (Michael Grant); Palm Bay 254 (Cari N Howard); Palm Bay 254 (Latora (LaLa) Boswell); Palm Bay 254 ACM Area 1 (John Fuller); Palm Bay 254 ACM Area 1 (KIARA CAGLE); Palm Bay 254 ACM Area 2 (Latora (LaLa) Boswell); Palm Bay 254 ACM Area 2 (Lori Leinas); Palm Bay 254 QA (Regine Jean Gilles (On Leave)); Palm Bay 254 QA (Regine Jean Gilles); Pandemic (Lorna Meldrum); Parenteral Manufacturing (AlbuRx Filling) (Daniel Derakhshanian); Parenteral Manufacturing (AlbuRx Filling) (Union) (Daniel Derakhshanian); Parenteral Manufacturing (AlbuRx) (Nick Bonavita); Parenteral Manufacturing (AlbuRx) (Union) (Nick Bonavita); Parenteral Manufacturing (Mindy Randazzo); Parenteral Manufacturing (Thomas Royal); Parenteral Manufacturing (Union) (Mindy Randazzo (Inherited)); Parenteral Manufacturing (Union) (Thomas Royal (Inherited)); Parkersburg 178 (Jeff Hay); Parkersburg 178 (Lachelle Mosholder); Parkersburg 178 (Lenora Lada); Parkersburg 178 ACM Area 1 (Alissa Sindelar); Parkersburg 178 ACM Area 1 (Lenora Lada); Parkersburg 178 ACM Area 2 (Lachelle Mosholder); Parkersburg 178 ACM Area 2 (Lenora Lada (Inherited)); Parkersburg 178 QA (Amanda M Cvitkovich); Parkersburg 178 QA (Christina Prunty); Parma Heights 162 (Olivia Arend); Parma Heights 162 (Sue Collins); Parma Heights 162 ACM Area 1 (Lindsy Wolf); Parma Heights 162 ACM Area 2 (Mirela Sekulic); Parma Heights 162 ACM Area 2 (Seanna Penn); Parma Heights 162 QA (Deborah Robinson); Paste & Final Product Planning (Martin Sutter); Patents and Licenses (Hans-Peter Hauser); Patient Engage & Feas (Rodney Winley); Pay Services (Brian T Simeur); Payroll DE (Claudia Rogge); Pensacola 623 (Nicole K Stassen); Pensacola 623 ACM Area 1 (Esteban Facundo); Pensacola 623 ACM Area 1 (Timothy J Nisewonger); Pensacola 623 ACM Area 2 (Esteban Facundo); Pensacola 623 ACM Area 2 (Timothy J Nisewonger); Pensacola 623 QA (Jessica L Ford (On Leave)); Pensacola 623 QA (Jessica L Ford); Pensacola 623 QA (Matthew T Zisa); Pensacola 623 QA (Melodee C Ebel (Inherited)); Peoria 133 (Mark A Yellen); Peoria 133 (Patrick S Taylor); Peoria 133 ACM Area 1 (DeAnn K Benally); Peoria 133 ACM Area 1 (Patrick S Taylor (Inherited)); Peoria 133 ACM Area 2 (Patrick S Taylor (Inherited)); Peoria 133 ACM Area 2 (Seanna Penn); Peoria 133 QA (LaVona M Holt); Peoria 289 (Dennis Popek); Peoria 289 (Nicholle DeVecchi); Peoria 289 ACM Area 1 (Holly Worsfold); Peoria 289 ACM Area 2 (Lew Carney); Peoria 289 QA (Kali Trevino); Performance Management (Ken Lain); Pharmaceutical Development (Martin Alex Imboden); Pharmacodynamic (Marc Nolte); Pharmacodynamic 1 (Padmapriya Ponnuswamy); Pharmacodynamic 2 (Subhajit Ghosh); Pharmacokinetic (Oliver Ghobrial); Pharmacokinetic (Sabine Pestel); Pharmacology & Toxicology (Eva Herzog); Pharmacometrics (Theresa Yuraszeck); Pharmacovigilance systems (Sahil Sahni); Pharmacovigllance Quality (Wumi McDowall); Pharmakanten (Carmen Walldorf (Inherited)); Pharmakanten (Doris Nake (Inherited)); Philadelphia 145 (Kristen Aydin); Philadelphia 145 (Rene Benson-Skone); Philadelphia 145 (Robert W Gillespie); Philadelphia 145 ACM Area 1 (Ken Laguerre); Philadelphia 145 ACM Area 2 (Kevin Lambrecht); Philadelphia 145 ACM Area 2 (Rene Benson-Skone (Inherited)); Philadelphia 145 QA (Kim Van Houten); Philadelphia 147 (Derek Morner); Philadelphia 147 (John L Thixton (Inherited)); Philadelphia 147 (Michele Dionne); Philadelphia 147 (Theresa Mwimbwa); Philadelphia 147 ACM Area 1 (Jennifer Foxworth); Philadelphia 147 ACM Area 1 (Robinrenee Dorsey); Philadelphia 147 ACM Area 2 (Robinrenee Dorsey); Philadelphia 147 ACM Area 2 (Rose Marie Waddle); Philadelphia 147 QA (Alissa Elke); Philadelphia 147 QA (John L Thixton (Inherited)); Philadelphia 147 QA (Leslie Jones); Philadelphia 147 QA (Samantha J Schrepel); Pilot Plan Manufacturing Team (Stefanie Ronzheimer); Pilot Plant (Christian Schlachtbauer); Pilot Plant (Jarvis Hammitt); Pilot Plant (Klaus-Jrgen Schlitt (Inherited)); Pilot Plant (Leander Trachsel); Pilot Plant (Norbert Egon Juettner); Pilot Plant Group (Lukas Sterchi); Pilot Plant II (Franziska Naef); Pilot Plant II (Lukasz Lubecki); Pilot Plant Lengnau (Joel Zumstein); Pilot Scale Operations (Chris Horridge); Pilot Scale Operations (Daniela Mocanu); Pilot Scale Operations (Heidi Bergman); Pilot Scale Operations (Jeffrey Bourke); Pilot Scale Operations (Maggie Aziz); Pilot Scale Operations (Mark Simmonds (Inherited)); Pilot Scale Operations (Mark Simmonds); Pilot Scale Operations (Paul Gibbs); Pilot Scale Operations (Rob Hooper); Pilot Scale Operations (Sharon Orr); Pilot Scale Operations (Tien Vo); Pilot Scale Operations (Tim Hanna); Pilot Scale Operations (Ursula Macaskill); Pilot Scale Operations 1 (Jessica McGiffin); Pinellas Park 139 (Brett Goldman); Pinellas Park 139 (Leah J Davis); Pinellas Park 139 (Robin G Spencer); Pinellas Park 139 ACM Area 1 (Alesia Davenport); Pinellas Park 139 ACM Area 1 (Lynn M Stratton); Pinellas Park 139 ACM Area 2 (Alesia Davenport); Pinellas Park 139 ACM Area 2 (Christina Goodrich); Pinellas Park 139 QA (Dana Pagano); Pinellas Park 139 QA (Lynn M Stratton); Pinnacle Training Site Las Vegas (Yennifer Fernandez); Pinnacle Training Site Pinellas Park (Lauren Hardy); Pittsburg 269 (Esence Hambrick); Pittsburg 269 ACM Area 1 (Dan Lassige); Pittsburg 269 ACM Area 2 (Tammy Toth); Pittsburg QA 269 (Marianne Brown); Pittsburgh 269 (Marianne Brown); Pittsburgh 269 ACM Area 1 (Dan Lassige); Pittsburgh 269 ACM Area 2 (Tammy Toth); Pittsburgh QA 269 (Melanie Kauffman); Planning (Christoph Krug); Planning (Stephan Obrecht); Planning (Tabitha Dineen); Planning Maintenance (Andr√© Hasler); Planning Maintenance (Oliver Bigler); Plant & Clean Utilities (Nozar Basseri); Plant Engineering & Services (Beat Meyer); Plant Engineering (Benjamin Reh); Plant Engineering (Michael Kleinehanding); Plant Engineering Mgr 255 (Anthony Wrzesinski); Plant Engineering Mgr 255 (Stuart Barnes); Plant Engineering Mgr 255 (Timothy Travis); Plant Finance (Justin Mericle); Plant Finance (Melissa Gottschall); Plant Finance (Vlad Kirylau); Plant Finance II (Vlad Kirylau); Plant Finance Product Costing & Capital (Michael McAvoy); Plant Operations (Vinko Momiroski); Plant Utilities (Hansruedi Brunner); Plasma & Raw Material Release (Stefan Tepfenhart); Plasma Center Management (Jincai Zhu ); Plasma Center Management (Jincai Zhu ?????); Plasma Contract Management (Linda S Romalin); Plasma Finance (Jason Mugridge); Plasma Fractionation (John Nelson); Plasma Fractionation (Jordan Wright); Plasma Fractionation (Union) (John Nelson (Inherited)); Plasma Fractionation (Union) (Jordan Wright); Plasma Logistic Center (Peter Nau); Plasma Logistic Center Dallas Supervisor (Brandon W Wornick); Plasma Logistic Center Dallas Supervisor (Brandon Wornick); Plasma Logistics Center Dallas (Carey L Fleener); Plasma Logistics Center Indy (Chad Simeur); Plasma Logistics Center Whitestown (Chad Simeur); Plasma Logistics Centers (Michael J Frecker); Plasma Management (Jack Zhang ?????); Plasma New Development (Jake Zhang ); Plasma New Development (Lixia He ?????); Plasma Operation, Quality (Qingqing Wang ); Plasma Operation, Quality (Qingqing Wang ?????); Plasma Operations (Eveline Kindler); Plasma Operations (Timo Fuhrmann); Plasma Operations Finance; Plasma Operations and Quality (Eric Li ); Plasma Operations and Quality (Eric Li ?????); Plasma Operations and Quality Management (Jeffrey A Schulz); Plasma Pay Services (Karen D Vellutini); Plasma Product Development (Douglas Lee (Inherited)); Plasma Product Development (Michael Zachner); Plasma Products Bulk Operations (Barbara Kalina); Plasma Quality (Lixia He ); Plasma Quality (Lixia He ?????); Plasma Quality Management (Laura O''Brien); Plasma Quality/Deviations (Stefan Kaelin); Plasma Receipt & Haemostasis (Narelle Urli); Plasma Receipt & Haemostasis (Sean Flannery); Plasma Receipt (Brendan Smale); Plasma Receipt (Roger Hand); Plasma Receipt (Tommy Tovilo); Plasma Release (Daniel Schwarz); Plasma Release (Si√© Kigninlman Coulibaly (Inherited)); Plasma Resources US (David H Confessore (Inherited)); Plasma Resources US (Debra A Hood); Plasma Resources US (Shane Kennedy); Plasma Sourcing Management (Lixia He ?????); Plasma and Manufacturing Finance (Ted Kanigowski); Plasma and Manufacturing Finance (Ted Kanigowski) (Ted Kanigowski); Plasmapreparation (Andreas Reber); Plasmapreparation (Erich Nuessle); Pleasant Grove 046 (Eduardo Williams); Pleasant Grove 046 (Vicky Sablan); Pleasant Grove 046 ACM Area 1 (Chad Pagel); Pleasant Grove 046 ACM Area 2 (Ebony Q McGee); Pleasant Grove 046 QA (Pamela R Mendoza); Pontiac 121 (Ashley M Jamieson (Inherited)); Pontiac 121 (Melissa Johnson); Pontiac 121 (Mondel Hightower); Pontiac 121 ACM Area 1 (Tracey L Boyd-McCorkle); Pontiac 121 ACM Area 2 (Mondel Hightower (Inherited)); Pontiac 121 ACM Area 2 (William D Owens); Pontiac 121 QA (Ashley M Jamieson (Inherited)); Pontiac 121 QA (Rebecca Barrons (On Leave)); Pontiac 121 QA (Rebecca Barrons); Pontiac 121 QA (Rodnesia R Jackson); Port Arthur 176 (Karen Sauceda); Port Arthur 176 ACM Area 1 (Dannetta Abdel-Malek); Port Arthur 176 ACM Area 1 (Karen Sauceda (Inherited)); Port Arthur 176 ACM Area 2 (THU RAMOS); Port Arthur 176 QA (Angela Redd); Port Author 176 QA (Angela Redd); Port Author 176 QA (Michael Thompson); Port Authur 176 (Karen Sauceda); Port Authur 176 (Lauren Hardy); Port St Lucie 072 (Kashaun Muhammad (Inherited)); Port St Lucie 072 (Mario A Salas); Port St Lucie 072 ACM Area 1 (Adam Davis); Port St Lucie 072 ACM Area 2 (Vanessa Sanon); Port St Lucie 072 ACM Area 3 (Garrett J Royal); Port St Lucie 072 QA (Raquel Reyes (On Leave)); Port St Lucie 072 QA (Raquel Reyes); Portage 187 (Dom Moceri); Portage 187 (Richard McCoy); Portage 187 ACM Area 1 (DERREK CRUMP); Portage 187 ACM Area 1 (Jeffrey Ott (On Leave)); Portage 187 ACM Area 2 (DERREK CRUMP); Portage 187 ACM Area 2 (Nikki Bradley); Portage 187 QA (Mitch A Quinn); Portage 187 QA (Stephanie Gower); Portfolio & Project Management (Heather Davis); Portfolio & Project Management (Roberta Duncan); Portfolio Management (Joel Hanson); Potency 1 (Dave De Witte); Potency 1 (Johanna Mock); Potency 2 (Martina Treutlein); Potency Testing Final Product 1 (Johanna Mock); Potency Testing Final Product 2 (Martina Treutlein); Potency Testing Intermediates 1 (Jan Bursy); Potency Testing Intermediates 1 (Marika Midon); Potency Testing Intermediates 2 (Wilfried Peil); Preclinical Innovation (Fabian Kaesermann); Preclinical Innovation (Jennifer Brasseit); Preclinical Innovation (Kleanthis Fytianos); Preclinical Innovation (Rolf Spirig); Pricing (Paul Jens (Inherited)); Pricing (Stephanie Kupski); Primary Automation (Gary Steele); Primary Automation (Stephen Callaghan); Primary Manufacturing (Matthew Burrows); Primary Packaging & Medical Devices Bern (Frank Bamberg); Primary Packaging & Medical Devices Bern (Renzo Pedrussio); Primary Packaging & Medical Devices Bern I (Monica Tavanti); Primary Packaging & Medical Devices Marburg (Ahmad Abdul Fattah); Primary Packaging & Medical Devices Marburg (Thomas Pfeifer); Primary Process Engineering (Asad Akhter); Primary Utility Projects (Russell Peak); Primary and Warehouse Validation (James Swann); Privigen Bulk & Facility Operations (Robert Skok); Privigen Bulk (George Barlas); Privigen Bulk (George Makris); Privigen Bulk (Jeremy Campbell (Inherited)); Privigen Bulk (Jeremy Campbell); Privigen Bulk (Kellie Goodman); Privigen Bulk (Lanie Hynninen); Privigen Bulk (Ritaben Suhagiya); Privigen Marketing (Robert Zegel); Privigen/AlbuRx Processing (Peter Klasen); Privigen/AlbuRx Processing Team Leader (Areti Kaloyannis); Process Validation (Berangere Lingat); Process Validation (Fergus Hawes); Process Validation (Peter Trimcevski); Process & Project Engineering (Duncan Benson); Process Analyst Lead (Kate Goossens); Process Analytics & Scale-up (Michael Bieri); Process Analytics & Scale-up (Tobias Heck); Process Change Program (Anita Kohl-Truebenbach (Inherited)); Process Change Program (Jeffrey Ball); Process Control Manager (Vincent Chung (Inherited)); Process Development (Hubert Metzner); Process Development (Michael Bartkovsky); Process Development 1 (Martin Alex Imboden (Inherited)); Process Development 2 (Ibrahim El Menyawi); Process Development Bern (Kurtis Allan Epp); Process Development Bern (PDB) (Kurtis Allan Epp); Process Development Bern I (Maria Crespo Solans); Process Development Bern I, Team 1 (Madlene von Knel); Process Development Bern I, Team 1 (Madlene von K√§nel); Process Development Bern I, Team 2 (Jonathan Eras); Process Development Bern II (Ibrahim El Menyawi); Process Development Bern II, Team 1 (Eva Blatter); Process Development Bern II, Team 2 (Marcus von Nordheim); Process Development Bern II, Team 3 (Adrian Alder); Process Development Bern II, Team 4 (Matthias Spiess); Process Development Bern III (Simon Gerber); Process Development Bern III, Team 1 (Robin Das Gupta); Process Development Bern III, Team 2 (Adrian Alder); Process Development Bern III, Team 3 (Eva Blatter); Process Development Bern III, Team 4 (Jos√© Ures); Process Development Group 1 (Robin Das Gupta); Process Development Group 2 (Eva Blatter); Process Development I & PP (Jennifer Krupka); Process Development I (Charles Arnold); Process Development I (Maike Glaser); Process Development I (Roopsee Anand); Process Development I (Uwe Liebing (Inherited)); Process Development I (Uwe Liebing); Process Development II (Jennifer Krupka); Process Development II (Katrin Anders); Process Development II (Kenneth Maas); Process Development III (Klaus Schmitt); Process Development, Data Science (Maya Shevlyakova); Process Engineering (Donall O Cualain); Process Engineering (Duncan Benson (Inherited)); Process Engineering (Gemma Parkes); Process Engineering (Markus Rentsch); Process Engineering (Michael Bieri); Process Engineering (Sean Goudy); Process Engineering Form & Fill (Emanuella Barbosa Lopes Souza Leao); Process Equipment & Technology (Benno Bitterli); Process Experts (Nicole L√∂ffelholz); Process Improvement (Deborah Mansfield); Process Improvement (George Thomas); Process Improvement (Jason Woolley (Inherited)); Process Improvement Mgr, PNS (Jerjess Chahoud); Process Management (Dawn Myers); Process Management Admin PGI (Antje Rder); Process Management Admin PGI (Antje R√∂der); Process Management Admin PGI (Oliver Draht); Process Migration (Ian Falcao); Process Migration (Paul Martell); Process Migration (Tony Renna); Process Migration Automation PU (Muditha Hasthanayake); Process Migration E&I (Paul O''Brien); Process Migration Project Engineer (Alice Dinh); Process Migration Project Engineer (Anna Martell); Process Science (Annette Gaida); Process Science (Annette Gaida-Benz); Process Science (Stefan Schulte); Process Science 2 (Arnaud Vonarburg); Process Science Upstream Lead (Sandra Grunske); Process Scientists Fractionation (Bernhard Wyss); Process Seed (Jennifer Kelly-Martland); Process Seed (John Cooney); Process TD (Adam Bentley); Process TD (Lisa-Marie Foulkes); Process Validation & Tech Transfer (Stefan Schulte); Process Validation (Berangere Lingat (Inherited)); Process Validation (Berangere Lingat); Process Validation (Fergus Hawes); Process Validation (Indi Staffa); Process Validation (Jesse Richter (Inherited)); Process Validation (Jessica Parletta); Process Validation (Peter Trimcevski); Process Validation - Stability (Jessica Mackellin); Process, Aseptic and Shipping Validation (Clare O''Donnell); Processes (Ferdinand Marx); Processes and Metrics (Eberhard Fitzner); Procurement (Barbara Beugger (Inherited)); Procurement (Brigitte Kimpel-Koch [C]); Procurement (Sreevatsan Sridharan); Procurement Lengnau (Narin Hermez); Procurement Lengnau (Pierre Bersier); Procurement Liverpool (Ian Goldup); Procurement Liverpool (Rachael Close); Procurement Liverpool (Trevor Reay (Inherited)); Procurement Operations (Juerg Kauer); Procurement Operations (Robert Di Giacomo); Procurement Operations (Sue Savage); Procurement Operations (Taylor Saak); Procurement Operations (Thomas Schneider); Procurement Operations - Liverpool (Rachael Close); Procurement Operations - Liverpool (Rachel Shaw); Procurement Operations I (Taylor Saak); Procurement Operations Manager (Marion Fitchett); Prod Manager - Formulations441 (Jamie Aaron Morris); Prod Manager - Formulations441 (Paul Morrison); Prod Mgr - Packaging (Garth James); Prod Mgr - Packaging (MARILYN BARAKIA); Product Care & Layout (Viviana Solange Fluxa Rojas); Product Care (Bill Chambers (Inherited)); Product Care (Markus Christen); Product Care (Patrick Nolte); Product Care (Samantha Czako (On Leave)); Product Care (Samantha Czako); Product Care (Thorsten Keller); Product Care (Valerie Schaffer); Product Care (Viviana Solange Fluxa Rojas (Inherited)); Product Care Mgmt (Andrea Lehmann); Product Characterisation (Matthias Zimmermann); Product Characterisation (Robert Dickinson); Product Characterization (Carsten Horn); Product Costing & Inventory Controlling (Anika Wagner); Product Costing & Inventory Controlling (Dirk Achenbach); Product Development (David Glover (Inherited)); Product Development (David Glover); Product Development (Fiona Bunworth); Product Development (Matthias Zimmermann); Product Disposition (Amber Hall); Product Disposition (Gennin Snyder); Product Education (David Chen ????); Product Education (Wei Chen ); Product Education (Wei Chen ????); Product Expertise (Paul Sinclair); Product Group Hemophilia (Claudia Zacharias); Product Group Hospital Products; Product Group Hospital Products (Bianca Petzold); Product Group Hospital Products (Michael Bernd Rode (Inherited)); Product Group ID (Richard Sodmann); Product Innovation (Fabian Kaesermann (Inherited)); Product Innovation (Fabian Kaesermann); Product Innovation (Rolf Spirig); Product Innovation (Susann Cattepoel); Product Market Authorization & QA Russia & CIS (Galina Senchukova); Product Ownership - Biotherapies (Anita Kohl-Truebenbach); Product Ownership - Biotherapies (Paul McKenzie (Inherited)); Product Release (Christine Peter); Product Release (Patricia Loftus); Production & Strategic Planning (Matthias Christl (On Leave)); Production & Strategic Planning (Matthias Christl); Production (Craig Byham); Production BCI/C1 INHIB (Peter Gttner); Production BCI/C1 INHIB (Peter G√ºttner); Production Engineering (ANDREW HISLOP); Production Engineering (Andre Majchrzak); Production Engineering (Anisa Moghaddam); Production Engineering (Antonio Ciocca); Production Engineering (Cameron Simpson); Production Engineering (Campbell Anderson); Production Engineering (Candy Lee); Production Engineering (Damien Barri); Production Engineering (Dion Houtman); Production Engineering (Jason Fletcher); Production Engineering (Karen Noonan); Production Engineering (Kate McConnell); Production Engineering (Melissa Nicholson); Production Engineering (Reza Mohebian); Production Engineering (Richard Friar); Production Engineering (Richard Hayne); Production Engineering (Tom Graham); Production Engineering (Tom Kelland); Production Engineering 1 (Geoff Wang); Production Manager (Cassandra Smoult); Production Manager (Jamie Aaron Morris); Production Manager US (Ljubi Huseinovic); Production Manager, PNS 448 (Keiran Ragas); Production Marburg (Frank Emmerich); Production Marburg (Michael Schr√∂der); Production Planning (Kyle Popham); Production Supervisor 454 (Kara Davine); Production Support (Jeffrey Spicer); Production Support (Marcus O''Dwyer); Produktion Inhibitoren PGI (Barbara Kalina (Inherited)); Produktion Inhibitoren PGI (Stefan Wellnitz); Produktion Inhibitoren PGI (Wilfried Happel); Produktion Inhibitoren Schicht 1 (Fabian Cybulski); Produktion Inhibitoren Schicht 2 (Arkadius Kaczmarczyk (Inherited)); Produktion Inhibitoren Schicht 2 (Arkadius Kaczmarczyk); Produktion Inhibitoren Schicht 3 (Manuel Lotz); Produktion Inhibitoren Schicht 4 (Fynn Krieger); Produktion Inhibitoren Schicht 4 (Manuel Cuesta Linker); Produktion Inhibitoren Tagdienst (Florian Scherer); Produktion RPF300 (Anika Knack); Produktion RPF300 (Mara Saglam); Produktion Rekombinante Proteine & Support (Carsten Meyer); Produktion Rekombinante Proteine & Support (Viktor Krecker); Produktion Wundheilungsprparate M300 1 (Meik Dietrich); Produktion Wundheilungsprparate M300 2 (Jrg Schmidt); Produktion Wundheilungsprparate M300 3 (Bjrn Bartelme); Produktion Wundheilungsprparate M300 4 (Willi Drr); Produktion Wundheilungsprparate M300 5 (Rainer Jesberg); Produktion Wundheilungsprparate M300 6 (Udo Wagner); Produktion Wundheilungspr√§parate M300 1 (Meik Dietrich); Produktion Wundheilungspr√§parate M300 2 (J√∂rg Schmidt); Produktion Wundheilungspr√§parate M300 3 (Bj√∂rn Bartelme√ü); Produktion Wundheilungspr√§parate M300 3 (Christoph Bernitt); Produktion Wundheilungspr√§parate M300 4 (Willi D√∂rr); Produktion Wundheilungspr√§parate M300 5 (Rainer Jesberg (On Leave)); Produktion Wundheilungspr√§parate M300 5 (Rainer Jesberg); Produktion Wundheilungspr√§parate M300 6 (Udo Wagner); Produktionsfachkraft Chemie (Carmen Walldorf (Inherited)); Produktionsfachkraft Chemie (Doris Nake (Inherited)); Program Management R&D Building (Carsten Skill); Programme Management (Anthea Stephenson); Project (Joe Fielding [C]); Project Aurora Automation (Mukesh Muruganandan) (Mukesh Muruganandan); Project Automation (Michael Kraft); Project BCI (Kristin Eschrich); Project Controls and Commercial Assurance (Daniel Boltz); Project Delivery & Support (Christopher A Betterton); Project Delivery & Support (Matt Shapiro); Project Delivery & Support (Robert Boland (Inherited)); Project Delivery EU/APAC (Nick Furmston); Project Delivery KAN (Michael Hansen (Inherited)); Project Edge Commercial (Drew Hansen); Project Edge Finance (Daya Salter); Project Edge Finance (John Dinatale (Inherited)); Project Edge Logistics (John Dinatale (Inherited)); Project Edge Logistics (Steve Wilson [C]) (Steve Wilson [C]); Project Edge Procurement (Emma Hopwood); Project Edge Quality (Glenn Barbrey); Project Edge Quality (John Dinatale (Inherited)); Project Engineering (Daniel Weniger); Project Engineering (Duncan Benson); Project Engineering (Volker Teuchert); Project Ldr Improve & Compl (Michael Dunn); Project Ldr Improve & Compl (Thomas Nguyen); Project Logistic Centre Lahntal (Thomas Schwarz); Project Management (Bryan J Hoover); Project Management (Mark Ridge); Project Management CV TA (Julie Waterbury); Project Management Office (Chris Abell); Project Management Office (Emily Brown); Project Management Office (Geoffrey Rea [C]); Project Management Office (Jose Gonzalez (Inherited)); Project Management, AU/Asia (Alex Vaine); Project Management, Europe (Elaine DiMonte); Project Management, Europe (Katharine von der Fecht); Project Management, North America (Elaine DiMonte); Project Manager (Andrei Fedorov); Project Manager (Conal O''Mahony); Project Manager (Heiko V√∂lpel (Inherited)); Project Manager (Jack Hung); Project Manager (Victor Karafilis (Inherited)); Project Support & Technical Transfer (Andreas Berting); Project Upgrade H69 (Thomas Schwarz); Project-Portfolio Delivery (Robert Boland); Project/Portfolio Delivery (Tod Marks); Projekt Phoenix (Markus Ries); Projekt-Koordinator (Claus Peihs (Inherited)); Projekt-Koordinator (Jrg Dieterich (Inherited)); Projekt-Koordinator (J√∂rg Dieterich (Inherited)); Projekt-Koordinator (Murat Dalar (Inherited)); Protein Biochemistry (Eric Salgado); Protein Research R&D (Nathan J Brinkman); Protinus (Marius Liesch); Protinus (Sandra Kaempfer); Prozessgruppe 1 (Christoph Pfeiffer); Prozessgruppe 1 (Daniel Weniger (Inherited)); Prozessgruppe 1 (Marko Witt); Prozessgruppe 2 (Frank Heck); Pt. St. Lucie 072 ACM Area 1 (Adam Davis); Pt. St. Lucie 072 ACM Area 2 (Vanessa Sanon); Pt. St. Lucie 072 ACM Area 3 (Garrett J Royal); Publishing Site Marburg (Jrg Starker); Publishing Site Marburg (J√∂rg Starker); Publishing Site Marburg Diverse (J√∂rg Starker (Inherited)); Puffer (Rainer Frank (Inherited)); Puffer (Torsten Jeide); Pulmonology-Europe (Michael Larbig); Purchasing (Alfonso Albornoz); Purchasing (Bob Siegel); Purchasing (Mark W Hartmann); Purchasing I (Alfonso Albornoz); Q Fever Team Leader D443 (Marcus O''Dwyer); Q Fever Team Leader D443 (Paul Williams); Q-Operation (Isabelle Crauser); Q-Operation (Marco Maeder); Q-Oversight End Products (Urs Pflugshaupt); QA - Batch Release & PTCs (Daniel Powell); QA - Batch Release & PTCs (Peter Tyler); QA - Batch Release (Astrid Mellor); QA - Batch Release (Tracy Owens); QA Batch Release (Constanze Buchter); QA Batch Release (Nicole Kortelainen); QA Batch Release (Randolph Rimando); QA Complaints Mangement (Rhonda L Luhrsen); QA Compliance (Berangere Lingat); QA Compliance (Craig Stephens (Inherited)); QA Compliance (JEFFREY ZOUBEK); QA Compliance (Jeffrey Zoubek); QA Compliance (Kimberly E Lorenz (Inherited)); QA Compliance (Mark Dickson); QA Cont Imp & Iss Mgt (Sharon Thornton); QA Fill/Finish (Lindsay Griffiths); QA Manager (Nicola Rotherham); QA Manufacturing (Alex Hargreaves); QA Manufacturing (Natalie Steele); QA Manufacturing (Tracy Owens); QA Operations (Dave Kowalski); QA Operations API (Anthony Nelson); QA Operations API (Kyle Showalter); QA Operations Bldg 30 (Anastasia Lindsey); QA Operations Bldg 33 (Alison York); QA Operations Bldg 33 (Candice Nieves (Inherited)); QA Operations Bldg 33 (Jill Shafer); QA Operations Bulk (Candice Nieves); QA Operations Bulk (Cassandra Clevenger); QA Operations Coagulation (Nicholas Gluckleder); QA Operations Coagulation 2 (Kelly Kucera); QA Operations Fractionation (Alison York); QA Operations Fractionation (Jacquelyn O''Malley); QA Operations II (Meggan R Smith); QA Operations II (Sarah Milone); QA Operations IPW (Kimberly Desler); QA Operations IPW (Meggan R Smith); QA Operations IPW (Sarah Milone); QA Operations IPW I (Sarah Milone); QA Operations Parenteral (Dave Kowalski (Inherited)); QA Operations Parenteral (Michael Urbanczyk); QA Plasma/Deviations (Eva Streit); QA Plasma/Deviations (Si√© Kigninlman Coulibaly); QA Primary Manufacturing (Jocelyn Bryson); QA Process and Facilities / Stability (Marco Haas); QA Processes & Facilities (Dieter Bathier); QA Processes & Facilities (Ivo Lakomy); QA Processes & Facilities (Michel Baur); QA Processes & Facilities (Silvia Schmutz); QA Product Release (Joanna Madafferi); QA Product Release (Stephanie St.Martin); QA Projects Compliance Team (Danielle Moloney); QA Projects Compliance Team (Stoyan Atanasov); QA Release (Angelos Borobokas); QA Release (Aoife Corrigan); QA Release (Cherie Mclaren); QA Release (Craig Stephens (Inherited)); QA Release (Francesco Intoccia); QA Release (Ivo Lakomy); QA Release (Karin Hofstetter); QA Release (Katie Wood); QA Release (Manuel Selvaggio); QA Release (Marion Jeffrey); QA Release (Neil Del Castillo); QA Release (Rosemary Hill); QA Release 1 (Aoife Corrigan); QA Release 1 (Cherie Mclaren); QA Release 1 (Craig Stephens (Inherited)); QA Release FAR Compliance (Margrit Waterval); QA Release FAR Release (Natalie Helfer); QA Release FAR Review (Fabienne Thoenen); QA Release IG/CYT (Silvia Schmutz); QA Release IGC Compliance (Dagmar Riffel); QA Release Process Engineering (Michael Zachner); QA Secondary (Daniel Powell); QA Systems (Christian Eggel); QA Systems (Connie Costanzo); QA Systems (Craig Stephens (Inherited)); QA Systems (Dina El-Emary); QA Systems (Lorenz Rindisbacher (Inherited)); QA Systems (Malte Kr√§mer); QA Systems (Maryanne Pashalis); QA Systems (Michel Baur (Inherited)); QA Systems (Nancy Phan); QA Systems (Nassima Wilson); QA Systems (Nina Klee); QA Systems (Simone Naruhn); QA Systems (Sue Ireland); QA Systems (Susanne Deyhle); QA Systems (Tony Smith); QA Technical Support (Amanda Cooper); QA Validation (Jeff Mihaichuk (Inherited)); QA Validation (Jeff Mihaichuk); QA Validation (Stephen R Grey); QA Validation - Site Expansion (Jeff Mihaichuk (Inherited)); QA Validation I (Jeff Mihaichuk (Inherited)); QA and Regulatory Affairs SI (Aldelberto Cordova); QAI Quality Albumin, Immunoglob., Plasma (Martin Krah); QAO Compliance (Dagmar Riffel); QAO Manufacturing (Ines Joachim); QAO Manufacturing (Natalie Helfer); QAO Release (Natalie Helfer); QAO Release (Silvia Schmutz); QAO Sustain & Improve (Stefan Kaelin); QBR FVIII & FIX QoF (Anja Beetz); QBR FVIII & FIX QoF (Anja Beetz-Kroll); QBR PWI QoF (Torsten Cyriax); QBR rekombinante Proteine QoF (Nancy Georgieff); QBS Rotational Program (Ulrich Schuerch); QC (Gillian McAdam); QC (Rebecca Gannon); QC - Chemistry (Anna Melia); QC - Chemistry (Anthony Pellegrini); QC - Chemistry (Jimmy Pajarillo); QC - Chemistry (Marie Neophytou); QC - Chemistry (Michael Streule); QC - Microbiology (Angie Fifis); QC - Microbiology (Claire Abson); QC - Microbiology (Denise Vella); QC - Microbiology (Dinesh Raj Methuku); QC - Microbiology (Dozie Okafor); QC - Microbiology (Elsie Everson); QC - Microbiology (Grace Luong (Inherited)); QC - Microbiology (Grace Luong); QC - Microbiology (Karthy Santhosh); QC - Microbiology (Maria Arulruban); QC - Microbiology (Marika Moore); QC - Microbiology (Maruthi Shivananda); QC - Microbiology (Patricia Hughes); QC - Microbiology (Tyson Parker); QC Analytical & Raw Materials (Nick Brendon); QC Analytical & Raw Materials (Victoria Fairclough); QC Analytical Services Manager (Andrea Prendergast); QC Bioassay (Adrian Gee); QC Bioassay (Victoria Fairclough); QC Chemistry (Jenny Staff); QC Chemistry (Robert Zanon); QC Chemistry (Ying Huang); QC Chemistry Team Leader (Niki Soteriadis); QC Chemistry Team Leader (Ying Huang); QC Compliance (Ignazio Lamonica); QC Compliance Support (Lisa Walters); QC Compliance and Improvement (Lisa Marie Malcharek); QC Immunochemistry (Andre Lamarque (Inherited)); QC Immunochemistry (Andre Lamarque); QC Immunochemistry (Caroline Abdul-hay); QC Immunochemistry (Fatima Bartils); QC Immunochemistry (Georgina McKay); QC Immunochemistry (Sean O''Keefe); QC Immunochemistry (Tahlor Robson (Inherited)); QC Immunochemistry (Tahlor Robson); QC Immunochemistry (Xiaowen Chin); QC Immunology (Melissa Damino); QC Immunology Team Leader (Anna Gruszka); QC Immunology Team Leader (Corina Zahra); QC Immunology Team Leader (Grace Huynh); QC Immunology Team Leader (Michelle Reckerman); QC Labs (Dawn Nagel); QC Micro Manager (Rita Simopoulos); QC Micro Team Leader (Dan Balod); QC Micro Team Leader (Prue Shanahan); QC Microbiology (Denise Vella); QC Microbiology (Georgia Ieronymakis); QC Microbiology (Maria Moeller); QC Microbiology (Nicola McDonald); QC Microbiology and Sterility Assurance (Dozie Okafor); QC PNS + Other Non-IVV Prod (Dan Balod); QC Projects (Hannah Kay); QC Projects (Stephen Pearson (Inherited)); QC Sample Logistics (Billy Patel); QC Stability (Victoria Mason (On Leave)); QC Stability (Victoria Mason); QC Stability (Victoria Wilson (On Leave)); QC Stability (Victoria Wilson); QC Stability Coordination (Jonathan Whitehead); QC Support (Andrea Prendergast); QC Support (Jennifer Chung); QC Support (Lucero Perdomo Cruz); QC Support (Philip Elliott (Inherited)); QC Support (Stephen Pearson); QC Support (Sushil Deswal); QC Support Systems (Jenny Higgins); QC Validation (Hayley Mackin); QC Validation (Jeff Hancock (Inherited)); QC-Microbiology (Anja Djordjevich); QC-Microbiology (Kah Wen Lee); QC-Microbiology (Tahlor Robson); QC/QA (Alex Hargreaves); QC/QA (Natalie Steele); QCP BRR (Verena Specowius); QCP QC Support & PTC/QCP, QFP (Mirko Altenkmper); QCP QC Support & PTC/QCP, QFP (Mirko Altenk√§mper); QFP Filling AQL (Lina Matschke); QFP Filling H69 QoF (Christoph Kalfack); QFP Filling H69 QoF (Ingo Kischka); QFP Filling M305 ABW (Sandra Benthin); QFP Filling M305 BRR/CC (Verena Specowius); QFP Filling M305 QoF (Stefan Paul); QGP Quality Coagulation (Jrgen Keitel); QGP Quality Coagulation (J√ºrgen Keitel); QM Production (Monika Christen); QM Production (Monika Krebs); QM Qualification & Validation (Bettina Vgerl); QM Qualification & Validation (Bettina V√∂gerl); QM Qualification (Gnter Fehlberg-Sternemann); QM Qualification (G√ºnter Fehlberg-Sternemann); QM Validation (Mickael Boegli); QMB ES (Samuel Mann); QMB Operations (Jonathan Imhof); QO / Aseptic (Michelle Hogg); QP/QA Product Release (Jocelyn Bryson); QPPV (Anna Rozmyslowicz); QPPV (Giovanni Furlan); QSP Quality Supply Chain & Packaging Op. (Sybille Bertram); QTH Quality Teilfertigung H69 (Guido Kagemann); QTM Quality Teilfertigung M305 (Murat Dalar (Inherited)); QTM Quality Teilfertigung M305 (Wolfgang List); QWI Inhibitors, Fibrinogen+Vaccines (Antonia Preidel); QoF Endfertigung (Christoph Croon); QoF Endfertigung (Jeanette Ludwig); Qualification (Angela Hamrock-Fox (Inherited)); Qualification (Angela Hamrock-Fox); Qualification (Annabel Wang); Qualification (Bozana Dujak); Qualification (Chris Richter); Qualification (Ilija Najdovski); Qualification (Jonathan Nixon); Qualification (Judith Kennedy); Qualification (Judith Youd); Qualification (Lorraine Murphy); Qualification (My Linh Ly); Qualification (Peter Carver); Qualification (Purush Devanathan); Qualification (Rainer Kraus); Qualification (Rolf Ingold (Inherited)); Qualification (Selda Yildiz Kaya) (Selda Yildiz Kaya); Qualification (Susan Clough); Qualification - Systems (Susan Clough); Qualification - systems (Darren Geary); Qualification System (Michael Kocher); Qualification System (Nadine Aeschbacher); Qualification System (Nadine Jost); Qualifizierung I Schwerpunkt Bulk (Michael Dospil); Qualifizierung II Schwerpunkt Teilfertigung (Michael Kuhn); Qualifizierung III Schwerpunkt Automatisierung (Lionel Guthneck); Qualifizierung IV Schwerpunkt Re-Qualifizierung (Ingo Kischka); Qualifizierung IV Schwerpunkt Re-Qualifizierung (Rainer Kutsch); Qualifizierung Lengnau (Thomas Cragnolini); Quality & Business Services (Karen Etchberger); Quality & Compliance UK (Jonathan Sheard); Quality & Med Svcs (Francesc Pont); Quality & Safety Management R&D (Dominik Blaser); Quality (Craig Stephens (Inherited)); Quality (David Atkinson); Quality (Ernest Shepard); Quality (Helmut Robert Euler); Quality (Jeffrey A Alcorn (Inherited)); Quality (Jill Allen); Quality (Jose Gonzalez (Inherited)); Quality (Kimberly E Lorenz); Quality (Mark Dickson); Quality (Matthew Donegan); Quality (Michelle Kelley); Quality (Robin A Mroz); Quality (Scott Overton); Quality (Vasilis Mavrogenis); Quality (Wei Wei ); Quality (Wei Wei ????); Quality (Yun Zhao (Inherited)); Quality (Yun Zhao ????); Quality 1 (David Atkinson); Quality Applications (Jason VanGils); Quality Assurance & Systems (Kelley L Hyatt); Quality Assurance (Anuja Prabhutendolkar); Quality Assurance (Connie Stewart); Quality Assurance (Ryo Ohnishi ??? ? - ???? ???? (Inherited)); Quality Assurance (Sanae Uchida (Inherited)); Quality Assurance Division (Ryo Ohnishi - ); Quality Assurance Division (Ryo Ohnishi ??? ? - ???? ????); Quality Assurance I (Connie Stewart); Quality Assurance II (Terry L Fritz); Quality Assurance Operations (Ivo Lakomy); Quality Assurance Projects Compliance (Craig Stephens (Inherited)); Quality Assurance Projects Compliance (Eoin Hanley); Quality Assurance System Group (Ryo Ohnishi ??? ? - ???? ???? (Inherited)); Quality Assurance System Group (Sanae Uchida (Inherited)); Quality Assurance Systems (Markus Schriewer); Quality Assurance, HS (Jonathan Kegerise); Quality Assurance, LVP (Gillian McAdam); Quality Assurance, PKV (Fiona Smith); Quality Assurance, PKV (Karen Netherton (Inherited)); Quality Bulk and Release QBR (Petra Hintz-Obertreis); Quality Chemistry (Cassie Norton); Quality Compliance (Sandra F Osborne); Quality Control (Juergen Liedtke); Quality Control (Leonora Pancho); Quality Control (Manuel Selvaggio); Quality Control (QC) (Dominik Stadler); Quality Control (QC) Ops Support (Brigitte Siani); Quality Control (Rene Bruegger); Quality Control Development (Andreas Affolter); Quality Control Services (Manuel Selvaggio (Inherited)); Quality Control Specialist (Andrea Chalker (Inherited)); Quality Control Specialist (Lakmini Croner); Quality Control Specialist (Linh Vo); Quality Control Support (Pascal Hulliger); Quality Control Support (QCS) (Christoph Wyss); Quality Control Team Leader (Andrea Chalker); Quality Control Team Leader (Chris O''Meara); Quality Control, LVP (Rebecca Gannon); Quality Control, LVP (Simon Harwood); Quality Controll Support QCS (Sebastian Klzer); Quality Controll Support QCS (Sebastian K√∂lzer); Quality Coordination ECI (Viviana Solange Fluxa Rojas); Quality Deviation & CAPA Management (Michael Rudolf); Quality Document Control (Michael Gough); Quality Enterprise Learning Management (Amy Love); Quality Filling H69 (Jens Huft); Quality Final Product QFP (Murat Dalar); Quality Global QA Technical Development (Monica Rose); Quality HS QA 3rd Party Manufacturing (Eric Blaesing); Quality HS QA Document Control (Aaron Ward); Quality HS QA Document Control (Cara Miller Kell); Quality HS QA Engineering & Validation (Petra Smith); Quality HS QA Fill Finish Expansion (Sarah Stearns); Quality HS QA Front Line (Laura Caldwell); Quality HS QA Front Line Days (1st Shift) (Laura Caldwell (Inherited)); Quality HS QA Front Line Days (1st Shift) (Nicholas Brown); Quality HS QA Front Line Incident Management (Dominic Greene); Quality HS QA Front Line Incident Management (Laura Caldwell (Inherited)); Quality HS QA Front Line Nights (2nd & 3rd Shift) (Karam Farhan); Quality HS QA Front Line Weekends (4th & 5th Shift) (Erminio Alesii); Quality HS QA Manufacturing (Stephenie Robertson); Quality HS QA Manufacturing Batch Release Bulk (Jennifer Deinarowicz); Quality HS QA Manufacturing Batch Release Fill Finish (Marianne Perelstein); Quality HS QA Manufacturing Batch Release (Amy Love); Quality HS QA Manufacturing Batch Release (Jonathan Kegerise (Inherited)); Quality HS QA Manufacturing Batch Release ¬ñ Bulk (Jennifer Deinarowicz); Quality HS QA Manufacturing Batch Release ¬ñ Fill Finish (Marianne Perelstein); Quality HS QA Manufacturing Batch Release-PTC (Troy Greene Jr); Quality HS QA Manufacturing Incident Management (Dominic Greene); Quality HS QA Manufacturing Shopfloor (Brian Leising); Quality HS QA Manufacturing Shopfloor (Jonathan Kegerise (Inherited)); Quality HS QA Manufacturing Shopfloor Bulk Days (Stephaine McMillan Eads); Quality HS QA Manufacturing Shopfloor Bulk Nights (Nicholas Alexander Brown); Quality HS QA Manufacturing Shopfloor FF Days (Elliott Tatum); Quality HS QA Manufacturing Shopfloor FF Days (Joseph A Marti); Quality HS QA Manufacturing Shopfloor FF Nights (Michael Mikolajczak); Quality HS QA Supplier & Third Party Management (Aaron Ward); Quality HS QA Supplier & Third Party Management (Jessica Mercer (Inherited)); Quality HS QA Systems & Compliance (Jessica Mercer); Quality HS QC Biochemistry (Geremy Knapp); Quality HS QC Biochemistry (Richard H Steere); Quality HS QC Chemistry (Gina Stick); Quality HS QC Chemistry (Raymond Otchere-Adjei); Quality HS QC Immunology (Geremy Knapp); Quality HS QC Logistics (Kelly Jenness); Quality HS QC Logistics (Laura Matulevich); Quality HS QC Microbiology (Liz Strickland); Quality HS QC Microbiology (Roland Jason Jacques); Quality HS QC Microbiology (Sarah Strickland); Quality HS QC Validation & Change (Jessica Loshia Gambill); Quality HS QC Virology (Geremy Knapp); Quality HS QC Virology (Geremy William Knapp); Quality HS Quality Control (Jessica Mercer); Quality HS Quality Control (Rebecca Gannon); Quality HS Quality Control (Stephen Case); Quality HS Training & Workforce Development (Jessica Mercer (Inherited)); Quality HS Training & Workforce Development (Jonathan Kegerise (Inherited)); Quality HS Validation (Amy Russell); Quality HS Validation (Brian Nunnally (Inherited)); Quality HS Validation Bulk & Warehouse (Mark Holland); Quality HS Validation Fill Finish, QC & FacOps (Amy Russell (Inherited)); Quality HS Validation Fill Finish, QC & FacOps (Amy Russell); Quality HS Validation Fill Finish, QC & FacOps (Megan Crandall); Quality HS Validation Process & Aseptic (Brian Nunnally (Inherited)); Quality HS Validation Process & Aseptic (Christopher Lee); Quality HS Validation Process & Aseptic (Matthew Franks); Quality Improvement (Marc Christeller); Quality Improvement (Sandra Soverna); Quality Italy (Annarita Cinardo); Quality Knowledge Management (Sarah S Lemons); Quality Lab (Russ Reeves); Quality Management (Adam Robb); Quality Management (Craig Stephens); Quality Management (Dina El-Emary); Quality Management (Jeffrey A Alcorn (Inherited)); Quality Management (Juergen Liedtke); Quality Management (Lorenz Rindisbacher); Quality Management (Michel Baur); Quality Management (Niklas Schier); Quality Management (Paul Martell); Quality Management (Philip Elliott); Quality Management (Reiner Laske); Quality Management (Reiner Laske, Niklas Schier); Quality Management (Susanne Jecklin); Quality Management 2 (Manuel Selvaggio); Quality Management E&S (Michael Kocher); Quality Management E&S (Rolf Ingold); Quality Management Engineering (Alexandra Rompf); Quality Management Strategy & Op Excellence (Collins Onyejese); Quality Management System (Eileen DiRita); Quality Management Systems (Justin Huss); Quality Operations (Carolyn M Koerner); Quality Operations, Liverpool (Karen Netherton); Quality R & D (Bradley Jackson); Quality R & D (Sharon Reinhard); Quality Review & Improvement Management (Uwe Dohmen); Quality Review Management & Trending (Uwe Dohmen); Quality Shared Services (Barbara Hicks); Quality Site Operations HS (Brian Nunnally); Quality Supply Chain US Distribution (Karen Marks (Inherited)); Quality Supply Chain US Distribution (Samantha Wentzell); Quality Supply Chain US Distribution (Stephanie Condi); Quality System Validations (Jeffrey Berry); Quality Systems & Compliance (Mai Viholm); Quality Systems & Compliance (William Cunningham); Quality Systems & Compliance Auditing & Inspections (Marcela Rojas); Quality Systems & Compliance Auditing & Inspections HS (Aaron Ward); Quality Systems & Compliance Auditing & Inspections LVP (William Cunningham); Quality Systems & Compliance Auditing & Inspections PKV (Marcela Rojas (Inherited)); Quality Systems & Compliance HS (Milka Smoljko (Inherited)); Quality Systems & Compliance QA IT (Anthony Pickering); Quality Systems & Compliance Shared Services (Sarah Lemons); Quality Systems & Compliance Shared Services EDMS (Robbie Gay); Quality Systems & Compliance Shared Services GLIMS (Helen Mihaljevic); Quality Systems & Compliance Shared Services LMS (Cara Miller Kell); Quality Systems & Compliance Supplier Management HS (Gina Stick); Quality Systems (Alan Cordero); Quality Systems (Brandi C Robinson); Quality Systems (Brandi Kennedy); Quality Systems (Karen M Cory); Quality Systems (Margaret A Clifton); Quality Systems (Michael Gough); Quality Systems (Micheal Casaus); Quality Systems (Michelle J Siegel); Quality Systems (William Cunningham (Inherited)); Quality Systems - Trackwise (Maggie Bradley); Quality Systems 1 (Kristen Gist); Quality Systems Boca (Micheal Casaus); Quality Systems I (Alan Cordero); Quality Systems II (Michelle J Siegel); Quality Systems IT (Nicole Nolan); Quality Systems IT (Tim Jones (Inherited)); Quality Systems Management I (Sigrid Streichert); Quality Systems and Standards (Sophie Chairs); Quality Systems and Standards (Vicky Lioutas); Quality Validation (Rudolf Beutler); Quality and Compliance (Harumi Ishizuka); Quality and Standards (Shinya Takagawa); Quality system (Eric Li ?????); R&D (Russell Basser); R&D - Albumin/Immunoglobulin (Joseph Bertolini); R&D - Albumin/Immunoglobulin (Karl McCann); R&D - Albumin/Immunoglobulin (Robert Forrest); R&D - Albumin/Immunoglobulin (Vladimir Gurevich); R&D - Haemostasis (Ayse Kara); R&D - Haemostasis (Hung Pham); R&D - Haemostasis (Kathryn Scott); R&D - Haemostasis (Kelly Lo Presti); R&D - Haemostasis (Maria Panayi); R&D - Haemostasis (Norm Mancuso (Inherited)); R&D - Haemostasis (Norm Mancuso); R&D - Haemostasis (Vladimir Gurevich); R&D - Haemostasis (Yvette Citrine); R&D - Management (Germano Coppola); R&D - Technical Operations (Robert Forrest); R&D - Technical Operations Senior Scientist (FRIEDA FEHR); R&D - Technical Operations Senior Scientist (Mary Alaveras); R&D - Virology (Connie Broumis); R&D - Virology (Rachael Ross); R&D - Virology (Randel Fang (Inherited)); R&D - Virology (Randel Fang); R&D - Virology (Randel Fang) (Randel Fang); R&D - Virology (Trudi Wentzel); R&D Bioanalytics BMW (Sue Amatayakul-Chantler); R&D Biostatistics & Data Management AUS (Vince Matassa); R&D Biostatistics & Data Management US (Hongyu Liu); R&D Biostatistics Programming (Daphne Ewing); R&D Breakthrough Technologies BMW (Germano Coppola (Inherited)); R&D Breakthrough Technologies BMW (Joseph Bertolini); R&D Breakthrough Technologies BMW (Viv Louzado); R&D Business Operations (Christian DiDio); R&D CMC & Compliance (Chaaya Ganorkar); R&D CMC & Compliance (Michele Fischer Heintz (Inherited)); R&D CMC & Compliance (Wendy Su); R&D Cell Based Influenza Vaccines (Brett Ashley Leav); R&D Cell Based Influenza Vaccines (Brett Leav); R&D Cell Based Influenza Vaccines (Deborah Molrine); R&D Clinical Business Operations (Christian DiDio); R&D Clinical Compliance & Training (Roberta Duncan (Inherited)); R&D Clinical Development (Jonathan Edelman); R&D Clinical Development, BOSS-CC (Roberta Duncan); R&D Clinical Operations (Veronica Suarez (Inherited)); R&D Clinical Operations Pandemic (Mary Smith); R&D Clinical Operations Pandemic (Mirjam van Huffelen (On Leave)); R&D Clinical Operations Pandemic (Mirjam van Huffelen); R&D Clinical Operations Seasonal (Olivia Crayne); R&D Clinical Safety & Pharmacovigilance (James Milligan); R&D Clinical Safety & Pharmacovigilance (Russell Basser); R&D Clinical Safety & Pharmacovigilance (Sylvie Tomczyk); R&D Clinical Vaccine Management & Serology (Francesco Bedani); R&D Data Management, Coding & Standards (Renate Verbeeten - van Hoof); R&D Data Management, Coding & Standards (Renate Verbeeten-van Hoof); R&D Development Liverpool (April Sena); R&D Epidemiology (Mendel Haag); R&D Finance (Eleanor McQuisten); R&D Finance (Emma Walsh); R&D Formulation & Delivery (HUI LIU); R&D Formulation & Delivery (Hui Liu); R&D Global CMC Standards & Harmonisation (Rima Youil); R&D Global CMC and Compliance (Michele Fischer Heintz); R&D Global CMC and Compliance (Michele Heintz); R&D Global Medical Affairs (Gregg Coveney Sylvester); R&D Global Medical Affairs (Gregg Sylvester); R&D Global Strategic Labelling (Helen Cowdery); R&D Human Resources (Andrea Resch (Inherited)); R&D Human Resources (Kimberly Golden); R&D Human Resources (Paula Foord); R&D Human Resources MBR (Andrea Resch); R&D IT Solutions (John Cornelius); R&D Immunology (Gillis Otten); R&D Immunology (Gillis Robert Otten); R&D Influenza Vaccines Pandemic (Matthew Hohenboken); R&D Influenza Vaccines Seasonal (Esther Heijnen); R&D Influenza Vaccines Seasonal (Igor Smolenov); R&D Influenza Vaccines Seasonal (Jonathan Edelman (Inherited)); R&D JAPAN (Haruo Kitado); R&D Licensing (Andrea Huggins); R&D Medical Affairs, Americas (Ashesh Gandhi); R&D Medical Affairs, Americas (Ashesh J Gandhi); R&D Medical Affairs, Canada (Ashesh Gandhi (Inherited)); R&D Medical Affairs, Europe (Sankarasubramanian Rajaram); R&D Medical Affairs, Influenza (Karita Ambrose); R&D Medical Affairs, Rapivab (Daniele Gelone); R&D Medical Communications, US (Nancy Dougherty); R&D Medical Science Liaison Canada (James Mansi); R&D Microbial & Molecular Biology (Pirada Suphaphiphat); R&D Operations - Influenza (Denis Thomas); R&D Operations - Influenza (Lynda Allan); R&D PM Leadership (Nancy Fetrow); R&D PV Compliance & Excellence (Liz Pound); R&D Pharmacovigilance Operations (Jefferson Guillon); R&D Pharmacovigilance Operations (Lynn Gabb); R&D Pharmacovigilance Operations (Sylvie Tomczyk (Inherited)); R&D Pharmacovigilance and Risk Management (Maria Maddalena Lino); R&D Process Development BMW (Karl McCann); R&D Process Development BMW (Per Hansen); R&D Process Science Liverpool (Kulwinder Banger); R&D Project Management (Julie Waterbury); R&D Project Management - BRN (Michael Exner); R&D Project Management Development Projects (Nancy Fetrow); R&D Project Management Qvax, Patch, Research (Heather Davis (Inherited)); R&D Project Operations (David Leacy); R&D Protein Biochemistry (Changkeun Lee); R&D Protein Biochemistry (Yingxia Wen); R&D QA Systems (Karen Gard''ner (Inherited)); R&D QA Systems (Liz Pound); R&D QA Systems (Sarah S Lemons); R&D Quality (Karen Gard''ner); R&D Quality (Kendra Bossio); R&D Quality Management (Georgina Dimovski); R&D Quality Management (Jackie Desengano); R&D Quality Management (Jonathan Wooley); R&D Quality Management (Malcolm Tipping); R&D Quality Management (Mandy Jergovic); R&D Quality Management (Mary Nasopoulos); R&D Quality Management (Matthew Dickie); R&D Quality Management (Vicky Lioutas); R&D Quality Management Kankakee (Chris Lubben); R&D Quality Management Marburg (Ariane Korzen); R&D Quality Management Marburg (Ingo Brand); R&D Quality Marburg 1 (Rainer Kufka); R&D Regulatory Affairs (Susan Cameron-Laxton); R&D Regulatory Affairs Adjuvents (Hs-yen Liu); R&D Regulatory Affairs Adjuvents (Hs√º-yen Liu); R&D Regulatory Affairs Seasonal EMEA (Dalila Dolfi); R&D Regulatory Affairs US Cell-Based Products (Yael Johnson); R&D Regulatory Affairs US Pandemic (Natasha Getz); R&D Regulatory Affairs, EMEA (Monica Pagni); R&D Regulatory Affairs-US (Debbie DeMuria); R&D Regulatory Affairs-US (Kevin Darryl White); R&D Regulatory Affairs-US (Susan Cameron-Laxton (Inherited)); R&D Regulatory Affairs-US -Cambridge (Peggy Charpie); R&D Research (Ethan Settembre); R&D Research Executive Admin-Cambridge (Jane Davis); R&D Research Strategy & Operations (Rebecca Servais); R&D Serology (Giuseppe Palladino); R&D Site Management & Monitoring US/EU (Veronica Suarez (Inherited)); R&D Statistics & Data Management (Leah Isakov); R&D TD Analytical Process Testing (Tanya Riggins Clemmer); R&D TD Analytical & Drug Product Development (Dan Speelman); R&D TD Analytical (Ying Zhang); R&D TD Analytical Biochemistry (Tanya Clemmer); R&D TD Analytical Biophysical (Jiang Qian); R&D TD Analytical Cell & Molecular (Prakash Koodathingal); R&D TD Analytical Immunoanalytics (Jesse Bodle); R&D TD Analytical Immunoanalytics (Kiki Vukanovska); R&D TD Analytical Method Development I (Bryan E Hart); R&D TD Analytical Method Development I (Bryan Hart); R&D TD Analytical Method Development II (Dan Speelman); R&D TD Analytical Process (Lan Feng); R&D TD Analytical Process Testing (Tanya Clemmer); R&D TD Analytical Process Testing (Tanya Riggins Clemmer); R&D TD Analytical Separation Science (Prakash Koodathingal (Inherited)); R&D TD BPD Drug Product (Lan Feng); R&D TD BPD Product Expertise (Rochelle Bazemore); R&D TD BPD Project Management & Lab Operations (Perciliz Ahern); R&D TD BPD Purification Development (Christopher Dadd); R&D TD BPD Purification Development I (Debbie Lydiard); R&D TD BPD Upstream (Ryan Thurston); R&D TD BPD Upstream Cell Culture Development (Leslie McSweeney); R&D TD Biologics Process Design (Keith Kulowiec); R&D TD Clinical Trial Manufacturing (Keith Kulowiec (Inherited)); R&D TD Downstream Labs (Debra Lydiard); R&D TD Product Expertise (Rochelle Bazemore); R&D TD Project Manager (Lourdes Barnes); R&D TD Project Manager (Perciliz Ahern); R&D TD Purification Development (Christopher Dadd); R&D TD Purification Development HS (Christopher Dadd (Inherited)); R&D TD Purification Development HS (Debbie Lydiard); R&D TD Purification Development HS (Debra Lydiard); R&D TD Purification Development HS (Matthew Brian Smith); R&D TD Technical and Business Services (Katherine Whitley); R&D TD Technical and Business Services (Keith Kulowiec (Inherited)); R&D TD VICE Core Virology (Christine Wadey); R&D TD VICE Core Virology Holly Springs (Christopher Gully); R&D TD VICE Core Virology Holly Springs Commercial (Charles McGee); R&D TD VICE Core Virology Parkville Seed Development (Brad Dickson); R&D TD VICE Core Virology Parkville Seed Development (Lynda Allan); R&D TD VICE Molecular Virology (Catherine Agius); R&D TD VICE Molecular Virology (Chi Ong); R&D TD VICE Molecular Virology Hybridoma & Microscopy (Erin Verity); R&D TD VICE Molecular Virology Hybridoma (Kirsten Vandenberg); R&D TD VICE Molecular Virology Microscopy (Stephen Asquith); R&D TD Virology & Cell Culture (Avishek Nandi); R&D TD Virology & Cell Culture (Ryan Thurston); R&D TD Virology & Cell Culture Sub-Group II (Gwen Truong-Royce); R&D TD Virology & Cell Culture Sub-Group II (Ryan Thurston (Inherited)); R&D TD Virology & Cell Culture Sub-Group III (Leslie McSweeney); R&D TD Virology & Cell Line Sub-Group I (Christopher Gully); R&D TD Virology & Cell Line Sub-Group I (Christopher Patrick Gully); R&D TD Virology & Immunology (Steven Rockman); R&D Technical Development (Ambarish Shah); R&D Technical Development (Russell Basser (Inherited)); R&D Technical Development (Russell Basser); R&D Technical Development (Scot Shepard); R&D Technical Development, Analytical & Drug Product Development (YING ZHANG); R&D Technical Development, Analytical & Drug Product Development (Ying Zhang); R&D Technical Development, Holly Springs (Keith Kulowiec); R&D Technical Development- Holly Springs (April Sena); R&D Technical Operations BMW (Germano Coppola (Inherited)); R&D Technical Operations BMW (Norm Mancuso); R&D Technology Transfer Marburg (Falk Weihmann); R&D Toxicology (Ethan Settembre (Inherited)); R&D Transplant TA (Laurie Lee); R&D and Capital Controlling (Stephan Ludovici); R&D eClinical Technology (John Edward Cornelius); R&D/ G&A Business Partners (Ken Lim (Inherited)); RA CMC & Compliance (Ana Moisidis); RA CMC & Compliance (Pete Campbell); RA CMC & Compliance (Sahra Zanetti); RA CMC Liverpool (Joanne Beighton); RA, China (Jeep Wang ????); RCB MBR Central Lab (Annette Feussner); RCB MBR Central Lab (Helene Lang); RCB MBR Central Lab (Maria Hauswald); RE Services (Dianne Leppanen); REC 1 (Marco Hofmann); REC 2 (Philipp Claar); REC 3 (Holger Lind); REC Gene Therapy (Bala Sai Sundarasetty); REI Europe (Samuel Hou); RI Research & Innovation (Thomas Nowak); RI ¬ñ Research & Innovation (Thomas Nowak); RPL PTI (Hans Raess); RSO, RQO and RA Emerging Markets (Dirk Hoheisel (Inherited)); Racine 065 (Carl L Hutton); Racine 065 ACM Area 1 (Nicole Robinson); Racine 065 ACM Area 2 (Lemina Billups); Racine 065 QA (Megan E Hoffman); Racine 065 QA (Megan Hoffman); Rainbow City 275 (Devyn Bryant); Rainbow City 275 ACM Area 1 (Sacashla Hampton); Rainbow City 275 ACM Area 2 (Ladricka Weatherspoon); Rainbow City 275 QA (Malcolm-Bryce Richbourg); Raleigh 231 (Derek Erhart (Inherited)); Raleigh 231 (Nathan Farcasin); Raleigh 231 ACM Area 1 (Joseph Jackson); Raleigh 231 ACM Area 2 (Deanna Anderson); Raleigh 231 QA (Braxton Summers); Rapid City 288 (Brendon Sato); Rapid City 288 ACM Area 1 (Brendon Sato); Rapid City 288 ACM Area 1 (Marc Sipma); Rapid City 288 ACM Area 2 (April Miller); Rapid City 288 QA (Buck Schiley); Raw Material Acceptance Chemistry (Michelle Reckerman); Raw Material Contro/lMonitoring (Dominic Wuest); Raw Material Control/Monitoring (Dominic Wuest); Raw Material Control/Monitoring 2 (Katrin Becker); Reception / Alarmsystem (Claudia Pereira-Buehler); Recombinant Coagulation R&D Manufacture (Steven Honey (Inherited)); Recombinant Operations Support (Vicky Pirzas (Inherited)); Recombinant Portfolio Team LGN (OLGA SARNOWSKA); Recombinant Portfolio Team MBR (Anne-Regine Herboth); Recombinant Product Development (Anthony Stowers); Recombinant Product Development, Marburg (Richard Alldread); Recombinant Product Development, Marburg R&D Operation and Services (Christian Schlachtbauer); Recombinant Product Development, Marburg Vector Development (Holger Laux); Recombinant Product Development-Pasadena (Andreas Gille); Recombinant Technologies Marburg (Peter Schmidt); Records & Reporting (Boris Kaiser (Inherited)); Records & Reporting (Caroline Roost); Records & Reporting (Ivan Poffet); Region 9 New Center Operations & Support (Amanda L Kitchen); Regional Demand Planning Europe (Lukas Limbach); Regional HR Ops AUS (Clare McCann); Regional HR Ops AUS (Miya Chiba); Regional HR Ops Americas (Mark Hickenbottom (Inherited)); Regional HR Ops Americas (Rita Gross); Regional HR Ops EMEA (Stephan Schufele); Regional HR Ops EMEA (Stephan Sch√§ufele); Regional HR Ops Europe (Stephan Sch√§ufele); Regional Head Americas (Kristin McCarthy); Regional Head Clinical Operations (Jacqui Cumming); Regional Head EU APAC (Mimi Ermens); Regional Innovation Operations (Carmon Kieffer); Regional Labeling (Barbara Peruche); Regional Labeling EU (Katrin Rdiger); Regional Labeling EU (Katrin R√ºdiger); Regional Labeling EU-INT (Katrin R√ºdiger); Regional Labeling INT (Irina Sviriaeva); Regional Labeling Lead, North America (Maricarmen Dilone-Raposo); Regional Medical Affairs Operations Manager (Andrew Stork); Regional Medical Affairs Operations Manager (Rosanda Buljubasic); Regional Procurement (Lucas Jinnette); Regional Quality Support for Eastern and Central Intercontinental Commercial Operations (Jonathan Imhof); Regional Safety Officer - ECI (Marta Puente); Regional Sales 1 (Fernando Marcos V Leony); Regional Sales 2 (Rafael Esteves); Regional Sales 3 (Claudia Bueno); Regional Sales Immunology & Respiratory (Heinrich Feischen); Regional Sales Mitte Hospital (Holger Milkereit); Regional Sales Office Berlin (Bernhard Czapla); Regional Sales Office Berlin (Claudia Bachmann); Regional Sales Office Bochum (Heinrich Feischen); Regional Sales Office Frankfurt (Holger Milkereit); Regional Sales Office Hannover (Michael Bernd Rode); Regional Sales Office Munich (Susanne Mller); Regional Sales Office Munich (Susanne M√∂ller); Regional Sales Office Ost Hospital (Frank Buttchereit); Regional Sales Office West Hospital (Ralf Kosmol); Regional Sales Ost Immunology & Respiratory (Claudia Bachmann); Regional Study Management, Americas (Danielle Dalton (Inherited)); Regional Study Management, Americas (Ross Watson (Inherited)); Regional Study Management-Americas (Ross Watson (Inherited)); Regional Supplier Qlty - Bern (Peter Stettler); Regional Supplier Qlty - Kankakee (Elizabeth Queiro); Regulat. Coordination Russia & CIS (Vsevolod Nikolaev); Regulat.-, Quality- & Safety Coord.EEMEA (Camilla Shen (Inherited)); Regulat.-, Quality- & Safety Coord.EEMEA (Christine Danila); Regulation Intelligence, Knowledge and Training (Sara Mesiano); Regulation, Training & Knowledge Sharing (Vicky Gakias); Regulatory (Haruo Kitado (Inherited)); Regulatory (Satoshi Koike - ); Regulatory (Satoshi Koike ??? ? - ??? ????); Regulatory Affairs & Lab Operations (Jon Knowles); Regulatory Affairs (Doris Friedl); Regulatory Affairs (Jane Wang ????); Regulatory Affairs (Joyce P Castaneda); Regulatory Affairs (Kate Burke); Regulatory Affairs AU/NZ Dev Prod BMW (Kellie Hooley); Regulatory Affairs AU/NZ (Gosia Kupczyk); Regulatory Affairs AU/NZ (Neama Baho); Regulatory Affairs AU/NZ Dev Prod BMW (Kellie Hooley); Regulatory Affairs Asia (Queenie Ho); Regulatory Affairs Benelux (Patrick Reygaert (Inherited)); Regulatory Affairs Benelux (Roel Mallants); Regulatory Affairs France (Christine Roche [C]); Regulatory Affairs France (Laurence Vidal); Regulatory Affairs Greece (Penelope Terentiou); Regulatory Affairs Italy (Roberto DeBenedetto); Regulatory Affairs MEA (Haydi Ibrahim); Regulatory Affairs Mgr Global Labelling Ops (Laura Vanzan); Regulatory Affairs Nordic (Elin Wobbeking); Regulatory Affairs Nordic (Ulf Hultquist (Inherited)); Regulatory Affairs Spain (Julian Fierro); Regulatory Affairs UK (Helen Watts); Regulatory Coordination Africa & EEU (S√©verine Caillet); Regulatory Coordination GLAD (Sverine Caillet); Regulatory Coordination GLAD (S√©verine Caillet); Regulatory Intelligence & Policy (Bettina Doepner); Regulatory Operations, Compliance and Business Excellence (Valeria Graffeo); Regulatory Operations, Compliance and Business Excellence - HS (Detra Bullock); Regulatory Reg. Lead NA EP (Baldevsinh Rana (Inherited)); Release Bulk & Filling (Joachim Leiss); Release FRAKT/ALB/Rho (Christine Peter); Release IG/CYT (Ines Joachim); Reno 502 (Susan Gonzalez); Reno 502 ACM Area 1 (Dwayne Majette); Reno 502 ACM Area 2 (Lauren Clapham); Reno 502 QA (Chermaene Mathis); Reporting & Planning (Konstantin Petropoulos (Inherited)); Reporting A-IFRS & German GAAP, Taxes (Angelika Godosar); Requalification & Stability (Angela Hamrock-Fox); Requalification & Stability (Ilija Najdovski); Requalification & Stability (Judith Kennedy); Research; Research & Clinical Bioanalytics (Bradley Sedgmen); Research & Clinical Bioanalytics (Kirstee Martin); Research & Clinical Bioanalytics (Marit Lichtfuss); Research & Clinical Bioanalytics (Meaghan FitzPatrick); Research & Development (Douglas Lee); Research & Development Bern (Liane Hoefferer); Research & Development Bern (Nathan Roth); Research & Development II (Norbert Schulze); Research (Adele Barr); Research (Adrian Zuercher (Inherited)); Research (Adriana Baz Morelli); Research (Alexander Karnowski); Research (Anabel Silva); Research (Andrew Hammet); Research (Andrew Nash); Research (Anne Verhagen (Inherited)); Research (Anne Verhagen); Research (Arna Andrews); Research (Brodie Miles); Research (Catherine Owczarek); Research (Chao-guang Chen (Inherited)); Research (Chao-guang Chen); Research (Con Panousis); Research (Eugene Maraskovsky); Research (Glenn Powers); Research (Greg Bass); Research (Hadi Lioe); Research (Helen Cao); Research (Ian Campbell); Research (Ineke Muir); Research (Ingela Vikstrom); Research (Ingrid Lonnstedt); Research (JANE ARTHUR); Research (Jason Simmonds); Research (Jenny Chia (On Leave)); Research (Jenny Chia); Research (Judith Field); Research (KOLJA SCHAALE); Research (Katherine Monaghan (On Leave)); Research (Katherine Monaghan); Research (Kerstin Emmrich); Research (Kirsten Edwards); Research (Larissa Provan); Research (Lidija Turkovic); Research (Mae-Xhum Wong); Research (Marco Weinberg); Research (Mark Biondo); Research (Mark Liddament (Inherited)); Research (Mark Liddament); Research (Martin Pearse); Research (Matthias Pelzing); Research (Mhairi Maxwell); Research (Michael Wilson (Inherited)); Research (Michael Wilson); Research (Michael Yan); Research (Milica Ng (Inherited)); Research (Milica Ng); Research (Natasha Pereira); Research (Nick Wilson); Research (Peter Schmidt); Research (Pierre Scotney); Research (Pino Maccarone); Research (RAJESH GHAI); Research (Rebecca Butcher); Research (Sabine Rauth); Research (Sandro Prato); Research (Saw Yen Ow); Research (Shirley Taylor); Research (Srikanth Budnar); Research (Steven Dower (Inherited)); Research (Steven Dower); Research (Steven Lee); Research (Victor Turnbull); Research (Walid Azar); Research (Wei Hong Toh); Research 1 (Hannah Chu); Research 1 (Mihee Kim); Research Bern (Adrian Zuercher); Research Bern Platforms (Christoph R√∂sli); Research Bio21 (Michael Wilson); Research Data Science (Milica Ng); Research I () (Wenting Zhao); Research I (Chao-guang Chen); Research II (Victor Turnbull); Research III (Mark Liddament); Research IV (Marco Weinberg); Research Innovation (Marthe D''Ombrain); Research Marburg (Thomas Weimer); Research Marburg Diverse (Thomas Weimer (Inherited)); Research Scientist - Bioinformatics (Monther Alhamdoosh); Research Therapeutic Area (Eugene Maraskovsky); Research and Clinical Bioanalytics (Allison Dyson); Research and Clinical Bioanalytics (Andreas Gille); Research and Clinical Bioanalytics (Anthony Roberts); Research and Clinical Bioanalytics (Elena Velkoska); Research and Clinical Bioanalytics (Kirstee Martin (Inherited)); Research and Clinical Bioanalytics (Kirstee Martin); Research and Clinical Bioanalytics (Roslyn Davis); Research and Clinical Bioanalytics (Tim Green); Research and Clinical Bioanalytics 1 (Lisa Lindqvist); Research and Development, China (Zak Huang); Research and Laboratory (Andrew Isaac); Research,Therapeutic Area C&M (Bronwyn Kingwell); Resp. Apprentices Bio Lab Techn. (Wim Etter); Respiratory TA (Heike Thiele); Results & Analysis (Jonathan Matty); Results & Analysis (Kenneth Walsh); Results & Analysis I (Jonathan Matty); Results & Analysis II (Jonathan Matty); Review IG/CYT (Armin St√∂cklin); Review IG/CYT (Thomas Kilchoer); Rhophylac (Andrea Stauffiger Eggli); Rhophylac Bulkmanufacturing (Andr Wegmueller); Rhophylac Bulkmanufacturing (Andr√© Wegmueller); Rhophylac Bulkmanufacturing 2 (Reto Stucki); Rialto 507 (Robert Ellison III); Rialto 507 QA (Derek Erhart (Inherited)); Risk & Mitigation Management (Malte Krmer); Risk & Mitigation Management (Malte Kr√§mer (Inherited)); Risk & Mitigation Management (Malte Kr√§mer); Risk & Project Management (Uli Kiefer); Riverside 299 (Iiemmaue Morgan); Riverside 299 QA (Anne Tran); Riviera Beach 115 (Martel Carter); Riviera Beach 115 (Nakia J Harlan); Riviera Beach 115 (Nedra N Braden); Riviera Beach 115 ACM Area 1 (Ethan Johnson); Riviera Beach 115 ACM Area 2 (JASON TRUMBACH); Riviera Beach 115 QA (Bill Angelucci); Riviera Beach 115 QA (Stalmore Duncan); Rochester 232 (Kay Schwartz); Rochester 232 (Lin Macaluso); Rochester 232 ACM Area 1 (Marissa Peterson); Rochester 232 ACM Area 2 (Michelle Draper); Rochester 232 ACM Area 2 (Michelle Horan); Rochester 232 QA (K.C. McCaffery); Rochester 232 QA (Karen Weatherston); Rock Hill 130 (Damon Lehr); Rock Hill 130 (Nicole M Adams); Rock Hill 130 ACM Area 1 (Ashley Pinckney); Rock Hill 130 ACM Area 2 (Brittney Joiner); Rock Hill 130 QA (Bianca M Brunson); Rock Hill 130 QA (Damon Lehr); Rock Island 426 (Josh Buzzell); Rock Island 426 ACM Area 1 (Chandler J Johnson); Rock Island 426 ACM Area 2 (James Rathmann); Rock Island 426 QA (Jennifer D Anthony); Rockford 200 (Kristi Davis); Rockford 200 (Sherylene A Lloyd); Rockford 200 ACM Area 1 (Kristi Davis (Inherited)); Rockford 200 ACM Area 1 (Kristy Carlson); Rockford 200 ACM Area 2 (Paul Crawford); Rockford 200 QA (Amanda Sawlsville); Rome 298 (Marida L Bivens); Rome 298 ACM Area 1 (Salvador Reyes); Rome 298 ACM Area 2 (Matt Comfort); Rome 298 QA (Samantha D Beach); Rome 298 QA (Stephanie D Shah (Inherited)); Roseville 077 (Charles LaVell Jacobs); Roseville 077 (Kourtney Davis); Roseville 077 ACM Area 1 (Charles LaVell Jacobs (Inherited)); Roseville 077 ACM Area 1 (Porsche M Goldsmith); Roseville 077 ACM Area 2 (Natalie King); Roseville 077 QA (Kayla D Lindley); Roseville 077 QA (Nichole M Clay (On Leave)); Roseville 077 QA (Nichole M Clay); Routine Systems (Martene Bond); Ruide Wuhan EHS&S (zcan Campinar); Ruide Wuhan Production (√ñzcan Campinar); Russia & CIS (Maria A Lituchaya); SAP Business Processes (Maike Pollaschek (Inherited)); SAP Business Processes (Maike Pollaschek); SAP Competency Center (Helen Baker); SAP Competency Center (Jonathan Turner); SAP Extended Warehouse Management (Riju Varghese); SAP Finance team (Jonathan Young); SAP Manufacturing Support Team (Manohar Venkataraman); SAP Master Data (Paul Aberson); SAP Quality / Logistics Team (Helen Baker (Inherited)); SAP Quality / Logistics Team (Matthew Gordon); SAP Service and Release (Martin Eldred); SAP Solution Center Bern (Mourad Boulanouar); SAP System Admin (John McCorry); SI Manufacturing (Jason Vaughn); SI Manufacturing (Samuel Jackson); SI Validation (Michael Donley); SI Validation (Robert Musgrave); STA Gruppe I (Claudia Schwab); STA Gruppe I (Stefanie Grafmller); STA Gruppe I (Stefanie Grafm√ºller); STA Gruppe II; STA Gruppe II (Susan Blaser); STA Stability (Manuel Wohde); STA Stability 2 (Barbara Gmann); STA Stability 2 (Barbara G√∂√ümann); STA Stability 3 (Gernot Kissel); STA Stability 4 (Svenja Nieba); STA Stability 5 (Milosz Krala); STA Stability 6 (Oliver Kupski); STA Stabilit√§t, QFP (Annette R√∂hrenbeck); STA Stabilit√§t, QFP (Barbara G√∂√ümann); STA Stabilit√§t, QFP (Christian Sinnen); STA Stabilit√§t, QFP (Gernot Kissel); STA Stabilit√§t, QFP (Julia Kufka); STA Stabilit√§t, QFP (Manuel Wohde); STA Stabilit√§t, QFP (Svenja Nieba); Safety (Alexandra Nogal); Safety (Allan Wise (Inherited)); Safety (Chris Meeks); Safety (Rolf Ingold); Safety (Steven Hull); Safety Risk Management (Max Waschbusch); Safety Risk Management (Pradeep Kumar Sahu); Safety Risk Management (Susan Welsh (Inherited)); Safety Risk Mgt (Alison Graves Jones); Safety Risk Mgt (Astrid Schneider); Safety Risk Mgt (Daphne Sawlwin); Safety Risk Mgt (Gabriele Neumann); Safety Risk Mgt (Joseph Whitten); Safety Risk Mgt (Kristy Van Dinther); Safety Risk Mgt 1.0 (Gabriele Neumann); Safety Risk Mgt 2.0 (Beate Greene); Safety Sciences (Haley Kaplowitz); Safety Systems Projects (JANET AUERBACH); Safety ¬ñ EU/APAC (J√ºrgen Kanand); Saginaw 169 (Amy Railling); Saginaw 169 (Ashley M Jamieson (Inherited)); Saginaw 169 (LC Davis); Saginaw 169 (Latosha Y Floyd (Inherited)); Saginaw 169 ACM Area 1 (TAYLOR GOODWINE (On Leave)); Saginaw 169 ACM Area 1 (TAYLOR GOODWINE); Saginaw 169 ACM Area 2 (Scott Walker); Saginaw 169 QA (Nicole Naji); Saginaw 282 (DaWanna Smith); Saginaw 282 ACM Area 1 (Genesha Curry); Saginaw 282 ACM Area 2 (Andrea Bragg); Saginaw 282 QA (Darren Hall); Salem 221 (Cory Vierck); Salem 221 (Paige N Zafran); Salem 221 (Timothy Freeland Jr (Inherited)); Salem 221 ACM Area 1 (Brandon D Biery); Salem 221 ACM Area 2 (Edward Baye); Salem 221 QA (Rachel R Maddox); Salem 221 QA (Timothy Freeland Jr (Inherited)); Sales & Marketing (Andrea Bennett); Sales & Marketing (Joe Dempsey); Sales & Marketing (Kaye Nolan); Sales & Marketing (Kirsten Comer); Sales & Marketing (Tanja Wells); Sales & Marketing Turkey (Ahmet Can Kalenderoglu); Sales & Private Accounts & Tender Office (Franco Gatta); Sales & Private Accounts & Tender Office (Massimo Leoni (Inherited)); Sales & Private Accounts & Tender Office (Massimo Leoni); Sales (Beata Szymanska-Czyz); Sales (Jorge L Gast√©lum (Inherited)); Sales (Jorge Marco); Sales (Markus Wenninger); Sales (Saul Ortiz Carrillo); Sales (Virgile Grosjean); Sales - CSL Behring Taiwan (Frank Ko ); Sales - CSL Behring Taiwan (Frank Ko ?????); Sales 2 (Claudia Sanchez); Sales 3 (Gema Gonzalez); Sales Benelux IG & CC (Marijke Maes); Sales Benelux IG & CC (Philip Vandromme); Sales Denmark / IC (Mette Toft Jacobsen); Sales Division (Izumi Yoshida ??? ??? - ??? ????); Sales Division (Toshio Nagata); Sales Division CAB Central Japan Area (Yoshifumi Umenaka); Sales Division CAB East Japan Area (Takahiro Tsuruta); Sales Division CAB West Japan Area (Akihiro Enomoto); Sales Division Critical Care & Acquired Bleeding T.A. (Hiroyoshi Iwamoto); Sales Division HEM East Japan Area (Atsuhiko Arikata); Sales Division HEM Kansai & Chubu Area (Shinichi Kano); Sales Division HEM Shutoken Area (Takayuki Takigawa); Sales Division HEM West Japan Area (Taisuke Miyakoshi); Sales Division Hemophilia TA (Hideki Yanagihashi ??? ?? - ????? ????); Sales Division Hemophilia TA (Takayuki Ishii); Sales Division IRD Central Japan Area (Takayuki Azuma); Sales Division IRD East Japan Area (Madoka Yamamoto); Sales Division IRD Kansai & Hokuriku Area (Takahiro Miura); Sales Division IRD Shutoken Area (Hironori Fujioka - ); Sales Division IRD Shutoken Area (Hironori Fujioka ??? ?? - ???? ?????); Sales Division IRD West Japan Area (Hiroki Nagayasu); Sales Division Immunology & Rare Diseases T.A. (Naoki Ikeguchi); Sales Division Kcentra Team (Tomokazu Shiroza); Sales Division SID T.A. Ig/Hematology Group (Izumi Yoshida ??? ??? - ??? ????); Sales Division SID T.A. Ig/Hematology Group (Kenichiro Yamaguchi - ); Sales Division Sales Planning & Wholesaler Management (Hideki Yanagihashi - ); Sales Division Sales Planning (Takayuki Ishii); Sales Division Sales Planning Wholesaler Management, Sales Admin (Hisako Sakoda); Sales Division Sales Planning,Sales Admin Group (Hisako Sakoda); Sales Division Tentative Team (Hiroyoshi Iwamoto); Sales Division Wholesaler Management Customer Support Team (Kyohei Yamamoto); Sales Division Wholesaler Management Distributor Team (Kyohei Yamamoto); Sales Finland (Sirpa Reimari); Sales Force Center (Flavio Di Pietro); Sales Force North (Paolo Francesco Corsi); Sales Force South West (Renato Monteleone); Sales France (Emmanuelle Massonie); Sales France (Franck Puget (Inherited)); Sales France (Karim Abassi); Sales Greece (Christos Fouskotis); Sales Hospital (Popp Gbor); Sales Hospital (Popp G√°bor); Sales Immunology & Coagulation (Kadar Attila); Sales Management Hemophilia (Michael Schulz (Inherited)); Sales Management Hemophilia (Michael Schulz); Sales Norway (Kjell Anker Worren); Sales Operations and Data (Chris Meyer); Sales Spain (Toni Par√©s); Sales Sweden (Nicklas Wallin); Sales Team Belgium (Marijke Maes (Inherited)); Sales Team Belgium (Philip Vandromme (Inherited)); Sales Team France (Emmanuelle Massonie (Inherited)); Sales Team France (Franck Puget (Inherited)); Sales Team France (Karim Abassi (Inherited)); Sales Team Netherlands (Marijke Maes (Inherited)); Sales Team Netherlands (Philip Vandromme (Inherited)); Sales Team UK (John Liam Boyle); Sales Team UK 2 (Nicky Whiteley); Sales Training Manager (Phil Hutton); Salt Lake 627 (Brooke A Neuroth); Salt Lake City 627 (Dave Lynn); Salt Lake City 627 (Marc D Fisher (Inherited)); Salt Lake City 627 (Nate Justet); Salt Lake City 627 ACM Area 1 (Michael E Forman); Salt Lake City 627 ACM Area 2 (Andrew V Lopez); Salt Lake City 627 ACM Area 2 (Ross R Fredrickson); Salt Lake City 627 QA (Allison M Davis); Sample Logistics (Brigitte Harris); Sample Management Quality Control (Christoph Wyss); Sample logistics (Christine Beyeler); Sampling (Lachlan McDonald); San Antonio 087 (Becca Charles); San Antonio 087 (Jennifer Martinez); San Antonio 087 ACM Area 1 (Kamala Yevetta Brown); San Antonio 087 ACM Area 2 (Aaron Thornton); San Antonio 087 ACM Area 2 (Kamala Yevetta Brown); San Antonio 087 QA (Alicia D Conner); San Antonio 157 (Sara A Anderson); San Antonio 157 (Sara Saleem); San Antonio 157 ACM Area 1 (Erika Gonzalez); San Antonio 157 ACM Area 2 (Crystal N Morton-Rollins); San Antonio 157 QA (Brenton Ferguson); San Antonio 157 QA (Nakia J Harlan); San Luis 158 (Javier Luna); San Luis 158 ACM Area 1 (Cristina Silva); San Luis 158 ACM Area 2 (Paulina Pena); San Luis 158 QA (MARA TAFOYA); San Luis 158 QA (Miguel Palomera); Sanitation (Union) (Adila Zaidi); Sanitation (Union) (Michael Memenga (Inherited)); Scanton 240 (Christopher Travalik (Inherited)); Schenectady 229 (Andrew Brammer); Schenectady 229 (Melissa Moore); Schenectady 229 ACM Area 1 (Renie Ball); Schenectady 229 ACM Area 1 (Ronald Cameron); Schenectady 229 ACM Area 2 (Karena Caraballo); Schenectady 229 QA (Sandy Nicholson); Schicht EMR (Gunthard Ludwig); Schicht EMR GH (Bjrn Krieg); Schicht EMR GH (Bj√∂rn Krieg); Schicht HW (Bjrn Krieg); Schicht HW (Bj√∂rn Krieg); Schicht HW (Christian Zeman); Schichtgruppe 1 ASQ (Marcus Heinzmann); Schichtgruppe 2 ASQ (Harald Ferber); Schichtgruppe 3 ASQ (Ruben Zinnkann); Schichtgruppe 3 ASQ (Svea Bieker); Schichtgruppe 4 ASQ (Gerhard Senftner); Scientist (Theresa Qiu); Scientist (Tony Cardno); Secondary Asset Care & Reliability (William Murphy); Secondary Automation (Muhammad Khan); Secondary Engineers (Calum Courtney); Secondary Manufacturing (Tristan Betson); Secondary Manufacturing Support Technicians (Gerard Lopez); Secondary PMO (Carrie O''Keeffe); Secondary PMO (Duncan Benson); Secondary Programmes (CARRIE OKEEFFE (Inherited)); Secondary Projects QA (Karen Marks); Secondary Projects Tech Transfer (Freddie Wayne West); Secondary Projects, Liverpool (CARRIE OKEEFFE); Secondary Projects, Liverpool (Carrie O''Keeffe); Secondary, Utilities and QC Validation (Joao Silva Acioli); Secretary ES (Montserrat Rey); Security CSL Behring Australia (Sharon Carroll); Security Manager 281 (Az Raonaq); Security Manager 281 (Chris Riley); Security Manager 281 (Nicholas Moody (Inherited)); Security Operations (Ganesha Rajanaidu); Security Operations (Ram Narasimman); Sen Ass QA Batch Release (Carol Youssef); Sen Ass QA Batch Release (Chris Graves); Sen Ass QA Batch Release (Nancy Manolitsas); Sen Ass QA Batch Release (Olivia Fisher (Inherited)); Sen Ass QA Batch Release (Olivia Fisher); Sen Ass QA Batch Release (Zareena Shaik); Sen Assoc Contin/Re-Validation (Robert Alvarez); Sen Assoc Validation Operation (Michelle Botterill); Sen Assoc Validation Operation (Nick Morgan (Inherited)); Sen Assoc Validation Operation (Nick Morgan); Sen Mgr QA Batch Release (Darren Moulton); Senior Associate QA Batch Release (Joanna Davis); Senior Associate QA Batch Release (Josie Lanza); Senior Associate QA Capability (Brett Pool); Senior Associate QA Capability (Marcela Rojas (Inherited)); Senior Associate QC Support (Jo Karra); Senior Director Manufacturing (Chris Larkins); Senior Electrical Engineer (Anthony Wrzesinski (Inherited)); Senior Electrical Engineer (Nicholas Hall); Senior Electrical Engineer (Stanislaw (Stan) Hudy); Senior Electrical Engineering Manager (Claro Pellosis); Senior HR Business Partner (Devon Anderson); Senior HR Business Partner (Sharon Davoli); Senior HR Business Partner (Sonia Georgesz); Senior HR Business Partner (Sonia Pititto); Senior Manager QA Capability (Marcela Rojas); Senior Manager Validation (Dina El-Emary); Senior Manager Validation (Michelle Botterill); Senior Manager Validation (Russell Ciliento (Inherited)); Senior Manager Validation (Shane Bourne); Senior Manager, Innovation R&D (Chi Ong); Senior Process Engineer (Robert Hemaya); Senior Process Manager - Utilities (Christiaan Theron (Inherited)); Senior Process Manager - Utilities (Christiaan Theron); Senior Process Manager, Supply Chain (Helen Malliaras); Senior Process Manager, Supply Chain (Lachlan Cruise (Inherited)); Senior Project Manager (Anthony Wrzesinski (Inherited)); Senior Project Manager (Brian Guilly); Senior Project Manager (Raoul Gorris); Senior Regulatory Affairs Manager (Ana Moisidis); Senior Scientist (Albert Garcia Minambres); Senior Scientist (Armando Alabella); Senior Scientist (Kiki Vukanovska (On Leave)); Senior Scientist (Kiki Vukanovska); Senior Scientist (Kim Steegh); Senior Scientist (Maria Panayi); Senior Scientist (Matthew Hardy); Senior Scientist (Sachiyo Nishio); Senior Scientist (Tom Murray-Rust); Senior Scientist Biacore (Bernadine Lu); Senior Scientist EM Unit (Stephen Asquith); Separatoren (Arnd Vollmerhausen (Inherited)); Separatoren (Erkan nder); Separatoren (Erkan √ñnder); Seqirus Head of Legal, Asia (Marcus De Alwis); Seqirus Ltd (Anjana Narain); Seqirus Ltd (Gordon Naylor); Seqirus Ltd (Stephen Marlow); Seqirus Ltd II (Gordon Naylor (Inherited)); Serialisation Operations and Projects (Michel Stritt); Serialisierung (Michel Mller); Serialisierung (Michel M√ºller); Serialization & Anti-Counterfeiting (Andrew John Robinson); Serialization & Anti-Counterfeiting (Warren Comerford (Inherited)); Serology Lab (Dan Thompson); Serology Lab - 1st Shift (Undrea W Jenkins); Serology Lab - 3rd Shift (Angela C Reynolds); Serology Lab - Weekend (Undrea W Jenkins (Inherited)); Service Management (Jim Towarnicki); Servicecenter Hemophilia (Axel Hochfeld); Shared Area Engineering (Marc Herbener); Shreveport 245 (Johnnie Williams); Shreveport 245 (Marcia Schels); Shreveport 245 ACM Area 1 (Demetricia Moore); Shreveport 245 ACM Area 1 (Edgar Rodriguez); Shreveport 245 ACM Area 2 (Matt Comfort); Shreveport 245 ACM Area 2 (Rashunda Dock); Shreveport 245 QA (Kaci Miller); Single Case Management & PhV Systems (Jessica Corrall (On Leave)); Single Case Management & PhV Systems (Jessica Corrall); Single Case Mgt & Phv Systems (Liz Pound); Single Case Mgt & Sy (Sabine H√§rtel (Inherited)); Single Unit Verification 1 (Kai Wlk); Single Unit Verification 1 (Kai W√∂lk); Single Unit Verification 2 (Norbert Vollmerhausen); Single Unit Verification 3 (Karl-Heinz Stelzig); Site Administration (Deborah Lynes); Site Engineering Services (Alex Elandt [C]); Site Engineering Services (Alex St√§hli); Site Lead project Eureka (Chris Camilleri); Site Lead project Eureka (Philip Elliott (Inherited)); Site Logistics (Daniel Schmidt); Site Logistics Production (Igor Kaucher); Site Management (Ross Watson); Site PMO (Karen Mitchell); Site Security (Az Raonaq); Site Security (Matthias Gnther); Site Security (Matthias G√ºnther); Site Security Switzerland (Julien Lischer); Site Supply Chain (Dennis Finger); Site- & Project-Management (Ferdinand Marx); Site- & Project-Management (Marko Witt); Site- & Project-Management (Rainer Herbener); Smryna 123 QA (Apple Grace Swindell); Smyrna 123 (Stephen Jefferson); Smyrna 123 ACM Area 1 (Venessa Lucio); Smyrna 123 ACM Area 2 (Travis Conner); Smyrna 123 QA (Apple Grace Swindell); Snr Assoc, QA Batch Release (Abha Patel); Snr Director Quality PKV530 (Milka Smoljko); Snr Mgr, Bus Effectiveness (Collette Makdissi); Snr Reg Advisor (David Plant); Snr Scientist Flu Innov (Kirsten Vandenberg); Snr Scientist Influ.Innov 275 (Catherine Agius); Snr.Scientist Flu Pilot Facil (Brad Dickson); Solid Organ Transplant Marketing (Courtney Wilson); Solutions Team Leader (Shona Moore); Source to Pay (Andrew Croft (Inherited)); Source to Pay (STP) (Dennis Martin); Sourcing (Brigitte Kimpel-Koch [C]); Sourcing (Frank Liesner); Sourcing (Jens Knoch); South America Sales Ops (Jean-Claude Andr√© (Inherited)); South Korea Operations (Ji-Young Sohn); South Milwaukee 140 (Cory Toellner (Inherited)); South Milwaukee 140 (Justin N Gronbach); South Milwaukee 140 (Kevin Labriola); South Milwaukee 140 ACM Area 1 (Cassandra J Cecka); South Milwaukee 140 ACM Area 2 (Shannon T Bardega); South Milwaukee 140 QA (Amy M Gebhardt); South Portland 256 (Cory Vierck); South Portland 256 ACM Area 1 (Kendra Howard); South Portland 256 ACM Area 2 (Cameron Clement); South Portland 256 QA (Mark Anderson); Southfield 070 (Lauren Jenkins); Southfield 070 (Marida L Bivens); Southfield 070 ACM Area 1 (Lori Daratony); Southfield 070 ACM Area 2 (Linda M Walker); Southfield 070 ACM Area 3 (Tiffany A Patton); Southfield 070 QA (Tamil Pettway (On Leave)); Southfield 070 QA (Tamil Pettway); Spartanburg 179 (Darrell Brotherton); Spartanburg 179 (Jose Pineda); Spartanburg 179 ACM Area 1 (Shaqueda Cariens); Spartanburg 179 ACM Area 2 (Krysten Evans); Spartanburg 179 AMQ (Jennifer Fox); Spartanburg 179 QA (Jennifer R Fox); Spartanburg 179 QA (Jose Pineda); Spartanburg 179 QA (Vernicia Smith); Spartanburg 179 QA (Victoria McIntyre (Inherited)); Specialist Area Bus Manager775 (Lisa Stewart); Specialist Area Business Manager (Katerina Kouridakis); Specialist Area Business Manager (Natasha Hutchison (Inherited)); Specialist Area Business Manager (Steve Carroll); Specialty HAE (Debbie Bensen-Kennedy (Inherited)); Specialty HAE (Joseph Chiao); Specialty Plasma (Phyllis Bressler); Specialty Plasma (Robert P Lawler (Inherited)); Spectroscopy and Elementary Analysis (Pierre-Harald Schmidt); Spectroscopy and Elementary Analysis (Stefan Wilka); Sphinx (Klara Cela); Sphinx II (Fynn Krieger); Spokane Main 621 (Adam Allred); Spokane Main 621 (Juli McConnell); Spokane Main 621 ACM Area 1 (Maurice V R Reed); Spokane Main 621 ACM Area 2 (Janette R Williams); Spokane Main 621 QA (Andee Leigh Schock); Spokane Valley 622 (Donna L King); Spokane Valley 622 (Ryan H Rettkowski); Spokane Valley 622 ACM Area 1 (Josh Kearney); Spokane Valley 622 ACM Area 1 (Ryan H Rettkowski (Inherited)); Spokane Valley 622 ACM Area 2 (Donna L King (Inherited)); Spokane Valley 622 ACM Area 2 (Juli McConnell); Spokane Valley 622 QA (Donna L King); Spokane Valley 622 QA (Rachel R Maddox); Springdale 268 (Justin Hampton); Springdale 268 ACM Area 1 (Devona D Williams); Springdale 268 ACM Area 2 (Ellie Kordooni); Springdale 268 QA (Karina G Campos); Springfield 285 (Quawan Dhom); Springfield 285 QA (Pari Patel); Springfield 492 (Amy L Pruitt); Springfield 492 (Tyler L Robinson); Springfield 492 ACM Area 1 (Carmen Gonzalez); Springfield 492 ACM Area 1 (Peter J Gouvas); Springfield 492 ACM Area 2 (Natalie N Williams); Springfield 492 QA (Marcie B Deal); Springfield 620 (Karen Aspinwall); Springfield 620 (Karen Hebbert); Springfield 620 ACM Area 1 (Karen Hebbert (Inherited)); Springfield 620 ACM Area 1 (Lisa M Meredith); Springfield 620 ACM Area 2 (Julia A Thompson); Springfield 620 QA (Becky D Clute); Sr. Clinical Program Manager Clinical Development (Anthony Ciliberto); Sr. Clinical Program Manager ¬ñ Clinical Development (Anthony Ciliberto); Sr. Clinical Program Mgr ¬ñ Clinical Development (Anthony Ciliberto); Sr. Scientist S''visor EM Unit (Ross Hamilton); St Louis 107 (Jimmy Williamson Jr); St Louis 107 (Robert Karbach); St Louis 107 ACM Area 1 (Ashli N Pinson); St Louis 107 ACM Area 1 (Jimmy Williamson Jr (Inherited)); St Louis 107 ACM Area 2 (Ashli N Pinson); St Louis 107 ACM Area 2 (Sentoria D Leonard-Brown); St Louis 107 QA (Sharica Ausler); St Louis 132 (Tiffany D Thurman); St Louis 132 ACM Area 1 (Chris Haley); St Louis 132 ACM Area 2 (Kevin S Neidinger); St Louis 132 ACM Area 2 (Tiffany D Thurman (Inherited)); St Louis 132 QA (Abby Hill); St Louis 132 QA (Jacob P Phillips); St Paul 180 (Darin L Bargsten); St Paul 180 ACM Area 1 (Cody A Patton); St Paul 180 ACM Area 2 (Brenda L Steffen); St Paul 180 QA (Amanda Peroutka (On Leave)); St Paul 180 QA (Amanda Peroutka); St Paul 180 QA (Holt Peterson (Inherited)); St Paul 416 (Scott Cantrell); St Paul 416 QA (Diego A Bastidas); Stability (Anita Jansen de Salazar); Stability (Jessica Mackellin); Stability (Jessica Parletta); Stability (Michel Baur (Inherited)); Stability (Rossana Amodeo); Stability Trials and Retention Samples (Chris O''Meara); Starting Materials Testing & Release (Simone Lang); State College 262 (Daniel LoCasale); State College 262 ACM Area 1 (Justin Nolan); State College 262 ACM Area 1 (Maria Garlick); State College 262 ACM Area 2 (Hunter Millward); State College QA 262 (TARA STYERS); State Government Affairs & Eastern Reg. (Karla White); Statistics & Data Management (Wilfried Meyers); Stellv. Center Manager (Andreas Gehrich (Inherited)); Stellv. Center Manager (Andreas Gehrich); Stellv. Center Manager (Annette Pernitzsch (Inherited)); Stellv. Center Manager (Annette Pernitzsch); Stellv. Center Manager (Claudia Habenicht (Inherited)); Stellv. Center Manager (Claudia Habenicht); Stellv. Center Manager (Damaris Kieckhfer (Inherited)); Stellv. Center Manager (Damaris Kieckh√∂fer (Inherited)); Stellv. Center Manager (Heike Borchert (Inherited)); Stellv. Center Manager (Kirsten Scheibel (Inherited)); Stellv. Center Manager (Kirsten Scheibel); Stellv. Center Manager (Natascha Bock (Inherited)); Stellv. Center Manager (Natascha Tappendorf); Stellv. Center Manager (Stephani Keltsch); Stellv. Center Manager (Sven Schuhmann (Inherited)); Stellv. Center Manager (Sven Schuhmann); Stellvertretender Labormanager (Astrid Mather (Inherited)); Sterile Filling AlbuRx (Hai Tran); Sterile Filling AlbuRx (Jennifer Tang); Sterile Filling AlbuRx (Mason Briner (Inherited)); Sterile Filling AlbuRx (Mason Briner); Sterile Filling AlbuRx (Matthew Donegan); Sterile Filling AlbuRx (Nina Djordjevich); Sterile Filling AlbuRx (Paolo Robillos); Sterile Filtration (Jakob Locher); Sterility (Anja Djordjevich); Sterility (Denise Vella (Inherited)); Sterility (Johanna Mock); Sterility (Nicole Magno); Sterility (Sabrina Desiree Sann); Sterility Assurance Monitoring & Trending (Marika Moore); Sterility Assurance (Barbara Moser); Sterility Assurance (Boo Pit Tan); Sterility Assurance (Craig Stephens (Inherited)); Sterility Assurance (Darla Erman); Sterility Assurance (Jessica Kay); Sterility Assurance (Meena Shakaib); Sterility Assurance (Peter Major); Sterility Assurance (Richard Hughes); Sterility Assurance (Robert O''Malley); Sterility Assurance (Tyson Parker); Sterility Assurance ¬ñ Monitoring & Trending (Marika Moore); Sterling Heights 164 (Kayla J Allen); Sterling Heights 164 (Shauna Douglas); Sterling Heights 164 ACM Area 1 (Zack Hyso); Sterling Heights 164 ACM Area 2 (Shauna Douglas (Inherited)); Sterling Heights 164 ACM Area 2 (Shauna Douglas); Sterling Heights 164 QA (Elijah J Wilson); Sterling Heights 164 QA (JoJo Sobjack); Stone Mountain 119 (Antonia Geiselmayr); Stone Mountain 119 (William A Voltz); Stone Mountain 119 ACM Area 1 (Milaine Clairvil); Stone Mountain 119 ACM Area 2 (Derrick Barnes); Stone Mountain 119 QA (Marketa D Goodwin (On Leave)); Stone Mountain 119 QA (Marketa D Goodwin); Storage-Virtualization-DR (Ali Bakhtiar); Storage/Virtualization/DR (Ali Bakhtiar); Strat Project Portfolio & Op Excellence (Michael Schrder (Inherited)); Strategic Analytics & Pricing (Paul Jens); Strategic Analytics (Manish Srivastava); Strategic Expansion Projects (Robyn Elliott); Strategic Indirect Sourcing (David Pauli); Strategic Initiatives (Matt Shapiro); Strategic Initiatives ENG (Dilip I Raval); Strategic Initiatives ENG (Gene Bohn); Strategic Project Portfolio and Operational Excellence (Gil Rochat); Strategic Project Portfolio and Operational Excellence (Martin Schaeren (Inherited)); Strategic Sourcing (Benjamin Fruin); Strategic Sourcing Capex & MRO Sourcing (Jos Maldonado); Strategic Sourcing Capex & MRO Sourcing (Jos√© Maldonado); Strategic Sourcing Capex & MRO Sourcing (Paul Addis (Inherited)); Strategic Sourcing Capex/MRO MBG (Bernd Mhling); Strategic Sourcing Capex/MRO MBG (Bernd M√ºhling); Strategic Sourcing Direct (Martin Grossmann); Strategic Sourcing Direct Packaging, Devices, Containers, Closures, R&D (Benjamin Fruin); Strategy & Business Development (Alan Wills (Inherited)); Strategy & Business Development (Alan Wills); Strategy & Business Development (Andrea Douglas); Strategy & Business Development (Bev Menner); Strategy & Business Development 2 (Stephanie Read); Strategy & Innovation (Ken Lim); Studium Plus (Carmen Walldorf (Inherited)); Studium Plus (Doris Nake (Inherited)); Study File Management (Elizabeth Petersen); Study Operations (3) (William Karich); Study Operations (Christa Lewiski); Study Operations (Janis Witzleb); Study Operations (Lyndah Oswald - Okebata); Superior PTH Vorbehandlung 3 / Abfllung 3 H069 (Adam Krajewski); Superior PTH Vorbehandlung 3 / Abfllung 3 H069 (Frank Gerhard Grger); Superior PTH Vorbehandlung 3 / Abf√ºllung 3 H069 (Adam Krajewski); Superior PTH Vorbehandlung 3 / Abf√ºllung 3 H069 (Frank Gerhard Gr√∂ger); Superior PTH Vorbehandlung 3 / Abf√ºllung 3 H069 (Sylvia Kauf); Supervisor (Andreas Gehrich (Inherited)); Supervisor (Andreas Gehrich); Supervisor (Annette Pernitzsch (Inherited)); Supervisor (Annette Pernitzsch); Supervisor (Claudia Habenicht (Inherited)); Supervisor (Claudia Habenicht); Supervisor (Damaris Kieckhfer (Inherited)); Supervisor (Damaris Kieckh√∂fer (Inherited)); Supervisor (Heike Borchert (Inherited)); Supervisor (Kirsten Scheibel (Inherited)); Supervisor (Kirsten Scheibel); Supervisor (Natascha Bock (Inherited)); Supervisor (Natascha Tappendorf); Supervisor (Stephani Keltsch); Supervisor (Sven Schuhmann (Inherited)); Supervisor (Sven Schuhmann); Supplier Management (Bill Chambers); Supplier Management (Ivo Kreyenbuehl); Supplier Quality Management (Allen F Coleman); Supplier Quality Management (Justin K Zajc); Supplies, Liverpool (Stephen Magill [C]); Supplies, Liverpool (William Helsby); Supply & Logistics (Avril Lam); Supply & Logistics (Winnie Yau); Supply Chain (Anita Erber); Supply Chain (Boris Kaiser); Supply Chain (Rick Gibson); Supply Chain Business Process (Wolfgang Schneider); Supply Chain External Manufacturing (Stuart Summers); Supply Chain Finance (Kiran Duhra); Supply Chain Liverpool (James Monaghan); Supply Chain Maidenhead (Ian Dick); Supply Chain Management (Cameron Barrett); Supply Chain Management (Michael F Deem); Supply Chain Management (Ryoichi Imamura); Supply Chain Mgt & Operational Planning (Robert P Lawler); Supply Chain Mgt (Mischa Moeckli); Supply Chain Planning & Inventory Management (Kevin L Robards); Supply Chain Planning (Cheryll McLeod); Supply Chain Planning (David McClure); Supply Chain Planning (Ratana Lim); Supply Chain Planning (Serge Marques); Supply Chain Planning (Sharon Gough); Supply Chain Planning (Unni Nair); Supply Chain QA (Andrew Norman); Supply Chain Services (Dennis Finger); Supply Chain Services (Grant Gaddis); Supply Chain Services (Kelly L Konemann (Inherited)); Supply Chain Services (Kelly L Konemann (On Leave)); Supply Chain Services (Kelly L Konemann); Supply Chain Services (Maike Pollaschek); Supply Chain Services (Tamara Huber); Supply Chain Systems (Sean Flannery); Supply Chain, PKV (Lachlan Cruise); Support & Hygiene Produktion (Monika Krebs); Support & Nebenanlagen (Florian Damm); Support (Arnd Vollmerhausen (Inherited)); Support (Benjamin Grn); Support (Benjamin Gr√ºn); Support (Bernd Zimmermann); Support (Heiko Jucknat); Support und Admin Medical Department (Martina Witzer); Sustain and Improve PTI Americas (Austin Newsom); Syracuse 196 (SILVIO VONA); Syracuse 196 ACM Area 1 (Kristina Deonarine); Syracuse 196 ACM Area 2 (Timothy Ray); Syracuse 196 QA (Matthew McHale); System Support (Magan Lai); System Validation and Implementation (Marquita Moore); TA Coag, Critical Care & Cardiovascular (Susan Welsh (Inherited)); TA Coagulation & Acquired Bleeding, Global Clinical R&D (Andres Brainsky); TA Development PM Group (Joanne Uhl); TA Immunology (Susan Welsh (Inherited)); TA Support (Anjani Advani); TDD (Technical Development & Documentation) (Patrick Gregory); TEC-Testentwicklung Chemie (Kerstin Nske); TEC-Testentwicklung Chemie (Partho Halder); TRICC (William Mezzanotte (Inherited)); TRICC - Therapeutic Area II (Marc Uknis); TRICC II (Mikhail Rojavin); TRICC II (Orell Mielke); TRICC III (Iris Jacobs); TRICC III (Maria Gasior); TRICC Therapeutic Area (Mittie Doyle); Talent Acquisition (Daphne Wong); Talent Acquisition (Ivan Dokoza); Talent Acquisition (James Telfer (Inherited)); Talent Acquisition (Priya Dinkar); Talent Acquisition - APAC (James Telfer); Talent Acquisition - APAC (Lisa Edwards); Talent Acquisition - Americas (Andrew Lewis); Talent Acquisition - EMEA (Elena Kharlamova); Talent Acquisition - Europe (Peggy Klein); Talent Acquisition - Plasma (Tracey Lambalot); Talent Acquisition - Plasma (Tracey Lambalot) (Tracey Lambalot); Talent Acquisition AUS (Angela Bellenger); Talent Acquisition and Talent Management (Beth Thomas); Talent Development (APAC) (Kathy Sacca); Talent Development (Eveline Wuethrich); Talent Development Apprenticeship (Anja K√§ser); Talent Development North America (Ll''Rae Robinson); Talent Management & Acquisition (Brian Fehrer); Talent Management & Acquisition (Elizabeth Walker (Inherited)); Talent Management AU (Raechel Gray); Talent Programs & Analytics (Brian Fehrer (Inherited)); Talent Programs & Analytics (Mary Schnackenberg); Talent Programs & Analytics (Sarah Peacey); Tallahassee 211 (Andria Logan); Tallahassee 211 QA (Lori Carlson (Inherited)); Tallahassee 211 (Andria Logan); Tallahassee 211 ACM Area 1 (Andria Logan (Inherited)); Tallahassee 211 ACM Area 1 (Brooklyn Williams (On Leave)); Tallahassee 211 ACM Area 2 (Brenda B Williams); Tallahassee 211 ACM Area 2 (Michelle Davenport); Tallahassee 211 QA (Lori Carlson (Inherited)); Tallahassee 211 QA (Mechelle Robinson); Tallahassee 211 QA (Mychal A Reynolds); Tampa 109 (Elizabeth Lam); Tampa 109 (Michelle K Natalie); Tampa 109 ACM Area 1 (Leah J Davis); Tampa 109 ACM Area 2 (Amber S Goodwine); Tampa 109 ACM Area 2 (Carolyna Perez); Tampa 109 QA (Joseph Rivera (On Leave)); Tampa 109 QA (Joseph Rivera); Tampa 109 QA (Michelle K Natalie); Tax (James Smith); Tax Compliance (Mark Murtaugh); Taylor 240 (Joe Korea); Taylor 240 ACM Area 1 (Joe Korea (Inherited)); Taylor 240 ACM Area 1 (Nicki Nguyen); Taylor 240 ACM Area 2 (Dion Dippel); Taylor 240 ACM Area 2 (Joe Korea (Inherited)); Taylor 240 QA (Wendy MacConnell); Team 1 (Christian Schubert); Team 1 (Jrg Dennis Issel); Team 1 (J√∂rg Dennis Issel); Team 1 (Michael Welsch (Inherited)); Team 1 (Veronika Chernov); Team 10 Verpackung (Petra Eversberg); Team 10 Verpackung (Petra Sch√§fer (On Leave)); Team 10 Verpackung (Petra Sch√§fer); Team 10 Verpackung (Rosemarie Rdding); Team 2 (Aytac Akin); Team 2 (Michael Welsch (Inherited)); Team 2 (Silke Oppermann); Team 3 (Michael Welsch (Inherited)); Team 3 (Thomas Grhning); Team 3 (Thomas Gr√§hning); Team 3 (Waldemar Kliwer); Team 4 (Erwin Gordzielik); Team 4 (Michael Welsch (Inherited)); Team 5 (Ludwig Heckmann); Team 5 (Michael Welsch); Team 6 (Karl-Hermann Sprenger); Team 7 (Pavlina Weninger); Team 7 (Thomas Fieber); Team 8 (Andreas Rastschewski); Team 8 (Mara Saglam); Team 8 (Melvin Scruggs); Team 9 (Eugen Rogosin); Team 9 (Igor Kaucher); Team Buffer Preparation (Dirk Michel); Team DSP I (Heiko Jucknat); Team HVAC (Michael Hillmann); Team Kalibrierung (Thomas Kniepper); Team Leader - Imp & Compl (Kathy Theodorakis); Team Leader - AFF/ZN 444 (Chas Chalker); Team Leader - AFF/ZN 444 (Remon Hemaya); Team Leader - DS 444 (Hieu Tran); Team Leader - Imp & Compl (Kathy Theodorakis); Team Leader - Inac 444 (Margarita Mejia); Team Leader - Packaging - 451 (Anthony Lane); Team Leader - Packaging - 451 (Anthony Lane); Team Leader - Plnt & Srv 444 (Darren McKean); Team Leader - QC Microbiology (Kerry Lincoln); Team Leader - Sterility Assurance (Jon Wong); Team Leader - Validation (Kylie Prendergast); Team Leader Animal Services (Anne Hageman); Team Leader Change Mgmt - Prod (Marcus O''Dwyer); Team Leader Change Mgmt - Prod (Paul Williams); Team Leader Formulation B 454 (David Moulsdale); Team Leader I PHAD I (Tobias Heck); Team Leader II PHAD I (Patric Sallin); Team Leader Prod Support - DS (Jeffrey Gan); Team Leader Prod Support - DS (Jeffrey Spicer); Team Leader Prod Support - MD (Jeffrey Gan); Team Leader Prod Support - MD (Stuart Jones); Team Leader Production Support (Denise Bertram); Team Leader Production Support (Elaine Feely (Inherited)); Team Leader Upstream-Harv 444 (Ibrahim Ozerim); Team Leader Upstream-Inoc 444 (Craig Byham); Team Mechanik (Christoph Freiling); Team Mechanik (Gerd Pendzialek); Team PBF (Thorsten May); Team PBF 1.0 (Maikel Bamberger); Team PTE (Stefan Rees); Team Purification I (Carsten Meyer (Inherited)); Team Purification I (Heiko Jucknat (On Leave)); Team Purification I (Heiko Jucknat); Team Purification II (Selcuk Ayan); Tech Dev Ops QA (Monica Rose); Tech Support Potency Testing (Julia Hainbach); Tech Support Potency Testing (Reinhard Paul); Tech Transfer (Samantha Gakias); Tech Transfer D820 (Ming Chong); Tech Transfer Projekt Management Team (Nina Walser); Technical Development (Lynda Allan); Technical Learning & Development (David P Monte); Technical Learning & Development 1 (Amy Jackson); Technical Learning & Development 2 (Ann Lescher); Technical Operations (Fuad Haddadin); Technical Operations (Michele Himmelspach (Inherited)); Technical Operations - Investigations (Tino Boss); Technical Operations - Small Scale (Janine Bash); Technical Operations I (Daniel Knack); Technical Operations I (Veronica Lopez); Technical Operations II (Becca Huebsch); Technical Operations II (Raghav Oberoi); Technical Operations III (Inga Breitwieser); Technical Operations III (Wilfried Wormsb√§cher); Technical Operations IIa (Jan Schwichtenberg); Technical Operations IV (Katrin Maria Sander); Technical Projects (Wendy Turner); Technical Services (Juerg Clavadetscher); Technical Services, Influenza Operations (Bill Cracknell); Technical Services/FM (Beat Meyer); Technikteam Labore (Stephan Lw); Technikteam Labore (Stephan L√∂w); Technischer Service (Lothar Klingelhfer); Technischer Service (Lothar Klingelh√∂fer); Technology Transfer (Jesse Richter); Teilbereichsleiter Abfllung (Stefan Peil); Teilbereichsleiter Abf√ºllung (Stefan Peil); Tempe 048 (Terry M Young); Tempe 048 ACM Area 1 (Samuel V Grijalva); Tempe 048 ACM Area 1 (Trina L Bryant); Tempe 048 ACM Area 2 (Sonya L Nigh); Tempe 048 QA (John Son); Tempe 048 QA (Melissa M Martinez); Tempe 427 (Daniel I Villegas (Inherited)); Tempe 427 (Patrick S Taylor); Tempe 427 ACM Area 1 (Kelly L Ortega); Tempe 427 ACM Area 2 (Jennifer Valenciano); Tempe 427 QA (Daniel I Villegas); Tempe 427 QA (Kellie N Buecker); Tempe 427 QA (Tiffanie Contreras); Temple 260 (Kimm Klisiewicz); Temple 260 ACM Area 1 (Moses Olukere); Temple 260 ACM Area 1 (Sarah Gaines); Temple 260 ACM Area 2 (Michael Martinez); Temple 260 QA (Cayley Eppler); Temple 260 QA (Kellie N Buecker); Temple Terrace 252 (Stephanie Frye); Temple Terrace 252 ACM Area 1 (Michelle Briseno); Temple Terrace 252 ACM Area 1 (Monica Miller); Temple Terrace 252 ACM Area 2 (Janette J Pierre); Temple Terrace 252 QA (Caitlin Shoemaker); Temple Terrace 252 QA (Joel Gallegos); Terre Haute 265 (Daniella Miller); Terre Haute 265 (Tara Goebel); Terre Haute 265 ACM Area 1 (Tara Goebel); Terre Haute 265 ACM Area 2 (Tracy Robinson); Terre Haute QA 265 (Sherri A Suttles); Testing Laboratory (Maruthi Shivananda); Therapeutic Area Clinical Ops (Bruce Wynne); Therapeutic Area Clinical Ops I&N (Ann-Marie Hulstine); Therapeutic Area Critical Care (Hartmut Landgrebe); Therapeutic Area Medical Evaluation (Nataliya Doliba); Therapeutic Area Medical Evaluation 1 (Nataliya Doliba); Therapeutic Area Medical Evaluation Lead (Kaniez Baig); Tokyo Yamanashi Area (Yoshifumi Umenaka); Toledo 175 (Steve Sparks); Toledo 175 ACM Area 1 (Kevin Connelly); Toledo 175 ACM Area 2 (James Carroll); Toledo 175 QA (Aarsalaan Semna); Toledo 175 QA (April Tyler); Toledo 223 (Debra Purney); Toledo 223 ACM Area 1 (Jeffery Eagle); Toledo 223 ACM Area 2 (Debra Purney); Toledo 223 ACM Area 2 (Heather Marshall); Toledo 223 QA (Christopher Travalik (Inherited)); Toledo 223 QA (Michael Craun); Toledo 223 QA (Pam Perryman); Toll IG/Alb Bulk (Ali Hashempour); Toll IG/Alb Bulk (Andrew Vasil); Toll IG/Alb Bulk (Anthony Manovella (Inherited)); Toll IG/Alb Bulk (Edward Camilleri); Toll IG/Alb Bulk (Jason Gilmour); Toll IG/Alb Bulk (Johnny Barbis); Toll IG/Alb Bulk (Jon Gummer); Toll IG/Alb Bulk (Kevin deSouza); Toll IG/Alb Bulk (Michael Appelman); Toll IG/Alb Bulk (Ricardo Morales); Toll IG/Alb Bulk (Robert Poletti); Toll IG/Alb Bulk (Rodney Vermeend); Toll IG/Alb Bulk (Shannon Thorp); Toll IG/Alb Bulk (Tom Koukouvaos); Toll Manufacturing BU Team (CLAUDIO BAEZZATO); Toll Manufacturing BU Team (Maria Gabriella Patrassi); Toll Mfg. Excipients & Intermediates (Jennifer Dolores Brenner); Toll Mfg. Excipients & Intermediates (Markus Staempfli (Inherited)); Toll Mfg. Excipients & Intermediates (Niklaus Kraehenbuehl); Toll VI and Pack (Parth Soni); Total Rewards (Figen Zaim); Tox Operations (Andrea Beyerle); Toxicology (Christopher John Peters); Toxicology Unit (Gerald Hbarth); Toxicology Unit (Gerald H√∂barth); Toxicology Unit 1 (Barbara Dietrich); Trademark (Antje Michel (Inherited)); Trademark (Janine Colesie (On Leave)); Training & Development Office (Chiho Muto); Training & Development Office (Shinji Ohkura); Training & GMP (Barbara Kalina (Inherited)); Training & GMP (Wilfried Happel (Inherited)); Training and Document Control (Lixia He ); Training and Document Control (Lixia He ?????); Transformation Change Management (Emily Riggs); Translational Biology (Alexander Schaub); Translational Biology 1 (Sandra Wymann); Translational Biology 2 (Svetlana Diditchenko); Translational Biology 2a (Alexei Navdaev); Translational Biology 3 (Anna Schnell); Translational Safety (Ashlyn Bassiri); Translational Science (Nick Wilson); Translational Science 1 (Nick Wilson); Transplant (Kevin Kovaleski); Transplant Marketing (Paula Manchester); Transplant Marketing SOT (Jeanne Andronowitz); Transplant Marketing SOT (Paula Manchester (Inherited)); Transplant Medical Affairs (Kevin Kovaleski (Inherited)); Transplant Medicine (Kevin Kovaleski); Transplant TA PM Group (Linda Cortese); Transport und Prozess Management (Andre Husse); Transport und Prozess Management (Anna-Lena Niederh√∂fer); Transportation Management (Gnter Vollmer); Transportation Management (G√ºnter Vollmer); Treasury Europe (Dieter Engstfeld); Trending (Marika Moore); Trending ¬ñ Sterility Assurance (Vijay Dundigalla); Trial & Quality Systems (Sean Storms); Tucson 071 (April Behnke); Tucson 071 (Moses Falaiye); Tucson 071 ACM Area 1 (Alma Y Olivera); Tucson 071 ACM Area 2 (Luz D Almeraz); Tucson 071 QA (Cori J Collins (Inherited)); Tucson 071 QA (Nicole A Downey); Tucson 111 (Alejandro Angulo); Tucson 111 ACM Area 1 (Alejandro Angulo (Inherited)); Tucson 111 ACM Area 1 (George Adams); Tucson 111 ACM Area 2 (Kendra N Belcher); Tucson 111 QA (Dulce A Jimenez); Tucson 111 QA (Eugene Y Shem); Tucson 624 (Jovanna R Ortega); Tucson 624 ACM Area 1 (Sara M Portugal); Tucson 624 ACM Area 2 (Adrian Soto); Tucson 624 QA (Bernadette B Woodson); Tulsa 014 (Heather Colbert); Tulsa 014 (Jerry Ewen); Tulsa 014 ACM Area 1 (Reggie De Quiroz); Tulsa 014 ACM Area 2 (Forrest Burns); Tulsa 014 ACM Area 2 (Heather Colbert); Tulsa 014 QA (Cooper Cruson); Tulsa 014 QA (Heather Colbert); Tulsa 417 (Jerry Ewen); Tulsa 417 (Troy Lee Wheeler); Tulsa 417 ACM Area 1 (Nina Linga); Tulsa 417 ACM Area 2 (Lindsay K Jameson); Tulsa 417 QA (Cooper Cruson); Tulsa 417 QA (Hannah E Todroff); Tulsa 417 QA (Julie L Potter); Tulsa 417 QA (Troy Lee Wheeler (Inherited)); Turkey Field Sales (Filinta Cakar); Tyler 520 (Stephanie D Shah); U of M 414 ACM Area 1 (Abubeker M Osman); U of M 414 ACM Area 2 (Ahmed N Ismail); U&S Process Engineering (Rodrigo Campos); UK accounting (Lorraine Lambert); US ComOps Immunology Sales (Joseph Guinan); US Credit and Treasury (Angela Caivano); US Distribution (Daniel Krysztofiak); US Distribution (Joseph Jefferson); US Federal Tax Compliance (Giovanni Siciliano); US Finance: Capital (Andrea Burch); US Healthcare Systems (Pete Dickson); US Healthcare Systems (Richard Dudek); US Lab Quality Assurance (Alecia C Harshaw); US Marketing (Bernadine Koziara); US Med Affairs-Coagulation-Field (Julie Farley); US Med Affairs-Coagulation-Field (Katheleen Pinto); US Med Affairs-Immunology-Field (Elyse Murphy); US Med Affairs-Specialty-Field (Ayman Kafal); US Med Affairs-Specialty-Field (Ben Boccuzzi); US PLC Quality Assurance (Brian H. Frye); US PLC Quality Assurance (Carol Kralicek); US PLC Quality Assurance (Jeff Dalton Jr); US PLC Quality Assurance (Keith Winiger); US PLC Quality Assurance II (Jeff Dalton Jr); US Plasma Marketing (Keith A Haywood); US Plasma Marketing (Scott Newkirk (Inherited)); US Plasma Operations (Daniel V Ferris); US Plasma Operations Division 1 (Scott Newkirk); US Plasma Operations Division 2 (Michelle A Meyer); US Plasma Operations Division 3 (Wlenyeno J Elliott-Browne); US Plasma Operations Region 11 (Joshua D Williamson); US Plasma Operations Region 11.1 (Holt A Peterson); US Plasma Operations Region 11.1 (Holt Peterson); US Plasma Operations Region 11.2 (Aaron C White); US Plasma Operations Region 11.3 (Brandon S Bridges); US Plasma Operations Region 11.4 (Christine Thomas); US Plasma Operations Region 11.5 (Philip Nixon); US Plasma Ops Region 1 (Dianne Sorenson); US Plasma Ops Region 1.1 (Paul Warden); US Plasma Ops Region 1.2 (David M Wilson); US Plasma Ops Region 1.2 (Marc D Fisher); US Plasma Ops Region 1.3 (Cori J Collins); US Plasma Ops Region 1.4 (Daniel I Villegas); US Plasma Ops Region 1.5 (Timothy Freeland Jr); US Plasma Ops Region 10 (Carmon Kieffer); US Plasma Ops Region 10 (Michelle A Meyer (Inherited)); US Plasma Ops Region 10 (Rebecca Swingle); US Plasma Ops Region 10.1 (Bonnie M Talbott (On Leave)); US Plasma Ops Region 10.1 (Bonnie M Talbott); US Plasma Ops Region 10.1 (Christopher Travalik); US Plasma Ops Region 10.1 (Derek Erhart); US Plasma Ops Region 10.2 (Mary A Paul); US Plasma Ops Region 10.2 (Michael W Solomon); US Plasma Ops Region 10.3 (Neville L Bain); US Plasma Ops Region 10.3 (Stephanie D Shah); US Plasma Ops Region 10.4 (Brendi L Cantrell); US Plasma Ops Region 10.4 (Brett A Wintheiser); US Plasma Ops Region 10.4 (Lori Carlson); US Plasma Ops Region 10.4 (Nicole M Loncon); US Plasma Ops Region 10.5 (Melodee C Ebel); US Plasma Ops Region 11 (Joshua D Williamson); US Plasma Ops Region 12 (Brandon L Voege); US Plasma Ops Region 12.1 (Melodee C Ebel); US Plasma Ops Region 12.2 (Kyle M Lehrke); US Plasma Ops Region 12.3 (Alan Maldonado); US Plasma Ops Region 12.4 (Kashaun Muhammad); US Plasma Ops Region 12.4 (Tiffany D Sherman); US Plasma Ops Region 12.5 (Lori Carlson); US Plasma Ops Region 2 (Michael S Beam); US Plasma Ops Region 2.1 (Jose L Dela Garza); US Plasma Ops Region 2.1 (Vida C Chapman); US Plasma Ops Region 2.2 (Daniel Venn); US Plasma Ops Region 2.2 (Sheri Mixon); US Plasma Ops Region 2.3 (Brenda C Greenfield); US Plasma Ops Region 2.3 (Vida C Chapman); US Plasma Ops Region 2.5 (Kandra K Blodgett); US Plasma Ops Region 2.5 (Patrick Garza); US Plasma Ops Region 3 (Angela S Drumright); US Plasma Ops Region 3.1 (Latosha Y Floyd); US Plasma Ops Region 3.2 (Angela S Drumright (Inherited)); US Plasma Ops Region 3.2 (Joshua D Williamson); US Plasma Ops Region 3.2 (Lauren Jenkins); US Plasma Ops Region 3.2 (Marc D Fisher); US Plasma Ops Region 3.3 (Drewleigha B Sarver); US Plasma Ops Region 3.3 (Keith Clemons); US Plasma Ops Region 3.4 (Ashley M Jamieson); US Plasma Ops Region 3.5 (Erik Plate); US Plasma Ops Region 4 (Brannon L Brittain); US Plasma Ops Region 4.1 (Cole D Kimple); US Plasma Ops Region 4.1 (Holt A Peterson); US Plasma Ops Region 4.1 (Tina Wagenknecht); US Plasma Ops Region 4.2 (Jamie E Lawrence); US Plasma Ops Region 4.2 (Troy Lee Wheeler); US Plasma Ops Region 4.3 (Cole D Kimple); US Plasma Ops Region 4.3 (Cory Toellner); US Plasma Ops Region 4.4 (Jesus A Castillo); US Plasma Ops Region 4.5 (Jamie E Lawrence); US Plasma Ops Region 5 (Rhonda C Harp); US Plasma Ops Region 5.1 (Aaron C White); US Plasma Ops Region 5.1 (Keith Clemons); US Plasma Ops Region 5.2 (Brandon S Bridges); US Plasma Ops Region 5.2 (Patti Bailey, Prim J Cunningham); US Plasma Ops Region 5.2 (Prim J Cunningham); US Plasma Ops Region 5.3 (Nicole M Adams); US Plasma Ops Region 5.3 (Patti Bailey); US Plasma Ops Region 5.3 (Rhonda C Harp (Inherited)); US Plasma Ops Region 5.4 (John L Thixton); US Plasma Ops Region 5.5 (Michele Purvines-Honzo); US Plasma Ops Region 6 (Darrel W Simon); US Plasma Ops Region 6.1 (John E Hunt); US Plasma Ops Region 6.1 (Tiffany D Sherman); US Plasma Ops Region 6.2 (Kyle M Lehrke); US Plasma Ops Region 6.2 (Sam Schultz); US Plasma Ops Region 6.3 (Alan Maldonado); US Plasma Ops Region 6.3 (Jose L Dela Garza); US Plasma Ops Region 6.4 (John E Hunt); US Plasma Ops Region 6.4 (Sheri Mixon); US Plasma Ops Region 6.5 (Victoria McIntyre); US Plasma Ops Region 7 (Brandon L Voege); US Plasma Ops Region 7 (Brendi L Cantrell (On Leave)); US Plasma Ops Region 7 (Brendi L Cantrell); US Plasma Ops Region 7.1 (Lori Carlson); US Plasma Ops Region 7.1 (Nicole M Loncon); US Plasma Ops Region 7.1 (Stephanie D Shah); US Plasma Ops Region 7.2 (Christine Thomas); US Plasma Ops Region 7.2 (Christopher Travalik); US Plasma Ops Region 7.3 (Ron Griffin); US Plasma Ops Region 7.4 (Andrew Brammer); US Plasma Ops Region 7.4 (Brendi L Cantrell (On Leave) (Inherited)); US Plasma Ops Region 7.4 (Brett A Wintheiser); US Plasma Ops Region 7.4 (Drewleigha B Sarver); US Plasma Ops Region 7.5 (Christopher Travalik); US Plasma Ops Region 7.5 (Mary A Paul); US Plasma Ops Region 8 (Tammy S Harrison); US Plasma Ops Region 8.1 (David Ensminger); US Plasma Ops Region 8.1 (Derek Erhart); US Plasma Ops Region 8.1 (Matthew Smith); US Plasma Ops Region 8.2 (Ben Samarripas); US Plasma Ops Region 8.2 (Stephanie D Shah); US Plasma Ops Region 8.3 (Andrew Brammer); US Plasma Ops Region 8.3 (Greg McClain); US Plasma Ops Region 8.3 (Neville L Bain); US Plasma Ops Region 8.4 (Derek Erhart); US Plasma Ops Region 8.4 (Michael W Solomon); US Plasma Ops Region 8.4 (Tammy S Harrison (Inherited)); US Plasma Ops Region 8.5 (Derek Erhart); US Plasma Ops Region 8.5 (Patrick Willingham); US Plasma Ops Region 9 (Amanda L Kitchen); US Plasma Region 2.4 (Michael S Beam (Inherited)); US Plasma Region 2.4 (Rosa E Mercado); US Regulatory Affairs (John Hill); US Regulatory Affairs (Kevin Darryl White); US Regulatory Affairs Critical Care/Cardiovascular (Angela D Azzara); US Regulatory Affairs II (Baldevsinh Rana); US Regulatory Affairs III (Paula Clark (On Leave)); US Regulatory Affairs III (Paula Clark); US Sales (Robert Murphy); US State Tax Compliance (Tulasi Veeramachaneni); US Tax Compliance (Eric Lorah); US Tax Compliance (Peter Larsen (Inherited)); USP Engineering (Patrick Rollier); USP Laboratories (Sandra Grunske); USP Manufacturing 1 (Marc Dick); USP Manufacturing 2 (Philipp Steiner); USP Process Technology (Niklas Zink); Umwelt, Plasmabetreuung und Fremdfirmenmanagement (Bjrn Wiesner); Umwelt, Plasmabetreuung und Fremdfirmenmanagement (Bj√∂rn Wiesner); University Relations (Jasmin Senior); Unpaid Diverse (Andreas Gehrich (Inherited)); Unpaid Diverse (Andreas Gehrich); Unpaid Diverse (Annette Pernitzsch (Inherited)); Unpaid Diverse (Annette Pernitzsch); Unpaid Diverse (Claudia Habenicht (Inherited)); Unpaid Diverse (Claudia Habenicht); Unpaid Diverse (Frank Bernert (Inherited)); Unpaid Diverse (Heike Borchert (Inherited)); Unpaid Diverse (Natascha Bock (Inherited)); Unpaid Diverse (Natascha Tappendorf); Unpaid Diverse (Stephani Keltsch); Unpaid Diverse (Sven Schuhmann (Inherited)); Unpaid Diverse (Sven Schuhmann); Upstream Days (Rebecca Briers); Upstream Development (Hans-Wilhelm Beltz); Upstream Development (Stefan Debus); Upstream Manufacturing (Vicky Reading); Upstream Manufacturing - Days (John Meaney); Upstream Shift A (Edward Goulding); Upstream Shift A (Mark Harrop); Upstream Shift B (Mark Harrop); Upstream Shift B (Raymond Brownless); Utilities & Services Engineering (Paul Russell); Utilities & Services Engineering (Peter White); Utilities & Services Engineering Manager (Peter White); Utilities (Kay von Burg); Utilities Critical Systems (Michael D Proctor); Utilities Lengnau (Nozar Basseri); Utilities-Critical Systems (Jeff J Parks); Utilities-Critical Systems (Jim Meils); Utilities-Motive Power (David G Mollema); VAL - Rekombinante Proteine (Kerstin Nau); VAL - Rekombinante Proteine (Verena Koch-Geller); VAL F VIII, IgG & Albumin & Inhibitors (Antje R√∂der); VAL F VIII, IgG & Albumin & Inhibitors (Marco Donges); VAL LIMS-Beauftragte (Eckhard Sch√ºler (Inherited)); VAL Lyophilisierung (Judith Mller); VAL Lyophilisierung (Judith M√ºller); VAL Media Fills & Mikrobiologie (Elke Zameitat); VAL Wissensch. Dokumentation (Eckhard Schler (Inherited)); VAL Wissensch. Dokumentation (Eckhard Sch√ºler (Inherited)); VAL Wundheilung & Intensive Care (Karlheinz Enssle); VAL Wundheilung & Intensive Care (Markus Hilberg); VP Operations 400 (Chris Larkins); VP, Com Operation (Lorna Meldrum); VV-Virus Validation (Wolfram Schfer); VV-Virus Validation (Wolfram Sch√§fer); VV-Virus Validation 1 (Tobias Schrder); VV-Virus Validation 1 (Tobias Schr√§der); VV-Virus Validation 2 (Michaela Gerlach); VV-Virus Validation 3 (Ramona Stau); VV-Virus Validation 3 (Ramona Stau√ü); Validation (Chad Kalia); Validation (Christian Nemeth); Validation (David Turner); Validation (Debra Fisher); Validation (Eckhard Schler); Validation (Eckhard Sch√ºler); Validation (Kah Wen Lee); Validation (Linda Garrett); Validation (Maria Arulruban); Validation (Michel Baur); Validation (Michelle Johnson); Validation (NICHOLA THOMSON); Validation (Ryan Dexter); Validation (Tiffany Korth); Validation I (Chantal Pfaffen); Validation I (Michel Baur (Inherited)); Validation I (Philipp Gersbach); Validation II (Ulrike Hartmann); Validation Process (Peter Tyler); Validation Process (Rebecca Gannon); Validation Process (Russell James Ciliento (Inherited)); Validation QA 1 (Tiffany Korth); Validation QA 2 (Debra Fisher); Validation QA 3 (Linda Garrett); Validation Qualification (Chad Kalia); Validation Specialist 3 Third Party Support (Bhavyesh Pandya); Validation and Stability (Jolyn Hu ?????); Validation and Stability (Xianfeng Guo ); Value Stream - Drug Substance (Barbara Beugger); Vancouver 102 (Larry A Barttelt); Vancouver 102 ACM Area 1 (David B Hammersley); Vancouver 102 ACM Area 2 (Clarissa Halsey); Vancouver 102 QA (Greg R Edge); Viral Vector Bioanalytics (Monica Terrao); Virology & Cell Line Up (Charles McGee); Virus Assay Development and Production (Ben Dickerman); Virus Seed MFG (Adam Kotsubka); Virus Validation (Randel Fang (Inherited)); Virus Validation (Tao Zheng); Virus Validation I (Thomas Nowak); Virus Validation II / Prion Eval. (Wolfram Sch√§fer); Virus Validation III (Bj√∂rn Keiner); Visual Control (Joanna Madafferi (Inherited)); Visual Control (Urs Pflugshaupt); Visual Inspection (Thomas Niedermann); Visual Inspection Precontrol 1 (Georges Schmid); Visual Inspection Precontrol 2 (Daniel Tobler); Visual Inspection and Packing (Claire Petitjean); Visual Inspection and Packing (Clare Schwarz); Visual Inspection semi final prod. 1 (Marlis Erb); Visual Inspection semi final prod.Team 2 (Yvonne Seiler); Visual Inspection semi final prod.Team 3 (Vesile Ciloglu); Visuelle Kontrolle 4 (Christina Vidal-Martinez); Visuelle Kontrolle 4 (Jrg Nickel (Inherited)); Visuelle Kontrolle 4 (J√∂rg Nickel (Inherited)); Vorbehandlung/Brdelung (Michael Welsch); Vorbehandlung/B√∂rdelung (Michael Welsch); WHO & EU Pandemic Vaccines (Ylenia Runci); Waco 084 (Katherine Blount); Waco 084 (Michael Pate Jr); Waco 084 ACM Area 1 (Janet E Jenkins); Waco 084 ACM Area 2 (Rachel I Ramirez); Waco 084 ACM Area 2 (Sharon A Smith); Waco 084 QA (Katherine Blount); Waco 084 QA (Vanessa E Tinsley (On Leave)); Waco 084 QA (Vanessa E Tinsley); Warehouse & Logistics Lengnau (Philipp Kaeser); Warehouse & Transportation (David Dunn); Warehouse & Transportation (Klaus M√ºller); Warehouse (Belinda Thomson); Warehouse (Sam Mekhael); Warehouse (Serge Marques); Warehouse (Uwe Klappstein); Warehouse II (Pavel Miller (On Leave)); Warehouse II (Pavel Miller); Warehouse Operations (Ritchii Lam (Inherited)); Warehouse Operations (Ritchii Lam); Warehouse Supervisor VIC 266 (John Turone (Inherited)); Warehouse Supervisor VIC 266 (Russell Monro); Warehousing (Brian Runner); Warehousing (Jesse Higgins); Warehousing (Noel Burash); Warehousing (Thomas Ryser); Warehousing (Union) (Brian Runner); Warehousing (Union) (Caitlyn Vidas); Warehousing (Union) (Jesse Higgins (Inherited)); Warehousing (Union) (Robin Anderson); Warehousing 1 (Brian Runner); Warehousing GBZ (Walter Kiener); Warehousing Non-Union (Brian Runner (Inherited)); Warehousing Non-Union (Robin Anderson); Warehousing U8 (Martin Hirschi); Warehousing U8 (Matthias Loosli); Warehousing U8 (Rafael Gasser); Warehousing U8 (Thomas Ryser (Inherited)); Warehousing W10 (Patrick Portmann); Warehousing W10 (Thomas Ryser); Warner Robins 509 (Denise Bloodsaw); Warner Robins 509 ACM Area 1 (Bernard Postell); Warner Robins 509 ACM Area 2 (Ta''Neshia Magby); Warner Robins 509 QA (Marilyn Walker); Warner Robins 509 QA (Mary A Paul (Inherited)); Warren 204 (Kimberly Schick); Warren 204 (Kimberly Wrenn); Warren 204 ACM Area 1 (Stephanie M Newland); Warren 204 ACM Area 2 (Daniel Rattay); Warren 204 QA (Jefferson Williams); Warren 204 QA (John Ziegler); Warren 204 QA (Samantha Rouzzo); Warwick 201 (Linda Monteiro); Warwick 201 (Matt Schramm); Warwick 201 ACM Area 1 (Mariela Myers); Warwick 201 ACM Area 2 (Husseim Gomez); Warwick 201 QA (Catherine Colucci); Warwick 201 QA (John L Thixton (Inherited)); Warwick 201 QA (Tessa Grassette); Water Testing (Heike Gocht); Water Testing (Partho Halder); Water Testing (Stefan Wilka); Waters-LAL (J Noel David); Waters/LAL (J Noel David); Weighmaster (Non-Union) (Jeff Keller); Weighmaster (Union) (Jeff Keller); Weighmaster (Union) (Jeffrey Keller); Weslaco 184 (Juan M Ramirez); Weslaco 184 ACM Area 1 (Antonio E Juarez); Weslaco 184 ACM Area 2 (Jesus R Hernandez II); Weslaco 184 QA (Ana Phlipot (On Leave)); Weslaco 184 QA (Ana Phlipot); West Lafayette 411 (Travis F Dill); West Lafayette 411 ACM Area 1 (Marc Baldwin); West Lafayette 411 ACM Area 2 (Alex Visser); West Lafayette 411 QA (Sheryl A Pope); West Specialty Regional Sales (STEPHANIE BESLER); West Specialty Regional Sales (Stephanie Besler); Westland 226 (Corey M Schimming); Westland 226 (Remie T Ray); Westland 226 ACM Area 1 (Kelsie Cremeans); Westland 226 ACM Area 2 (Kirk P Alford II); Westland 226 QA (David Zagorowski); Westwego 153 (Jacqulynn Shankle); Westwego 153 ACM Area 1 (Jacqulynn Shankle); Westwego 153 ACM Area 1 (Nadia Y Grisby); Westwego 153 ACM Area 2 (Regena D Young); Westwego 153 QA (Amanda N Webre); Westwego 153 QA (Brandi N Clark (On Leave)); Westwego 153 QA (Jacqulynn Shankle); Westwego 153 QA (Joshua D Harper); Wholesaler Management (Hideki Yanagihashi ??? ?? - ????? ????); Wichita 263 (Laurie E Boothe); Wichita 263 ACM Area 1 (Sierra Lashbrook); Wichita 263 ACM Area 2 (Mandi Harris); Wichita 263 QA (Cameo Donerson); Wichita 415 (Junior Navarro); Wichita 415 (Sam P Emrich); Wichita 415 ACM Area 1 (Michelle B Duong); Wichita 415 ACM Area 2 (Joel Sutherland); Wichita 415 QA (Erin Shaver); Wichita 415 QA (Laurie E Boothe); Wichita 415 QA (Troy Lee Wheeler (Inherited)); Wilkes Barre 286 (Joseph Frackowiak); Wilkes Barre 286 ACM Area 1 (Cathy Gurzynski); Wilkes Barre 286 ACM Area 2 (Joseph Frackowiak (Inherited)); Wilkes Barre 286 ACM Area 2 (Renee Collins); Wilkes Barre 286 QA (Robin Williams); Willoughby Hills 222 (Frances Campbell (On Leave)); Willoughby Hills 222 (Frances Campbell); Willoughby Hills 222 ACM Area 1 (Amanda Fitzpatrick); Willoughby Hills 222 ACM Area 2 (Breana Brown); Willoughby Hills QA 222 (Bailee E White); Wilmington 228 (Alex Liang); Wilmington 228 (Jack Ellison); Wilmington 228 (John E Hunt (Inherited)); Wilmington 228 ACM Area 1 (Kenneth A Keitt Jr); Wilmington 228 ACM Area 2 (Wendy Dettloff); Wilmington 228 QA (Ben Ward); Wilmington 228 QA (Sam Whitehead); Wilton Manors 073 (Alan Maldonado (Inherited)); Wilton Manors 073 (Benjamin J Morris); Wilton Manors 073 (Michelle S DeCambre); Wilton Manors 073 (Nakia J Harlan); Wilton Manors 073 ACM Area 1 (Darcia Culmer); Wilton Manors 073 ACM Area 2 (Kurt S Tuckett); Wilton Manors 073 ACM Area 2 (Soo-Lin Chang); Wilton Manors 073 ACM Area 3 (Benjamin J Morris); Wilton Manors 073 ACM Area 3 (Nakia J Harlan (Inherited)); Wilton Manors 073 QA (Glenny Arvelaez); Wilton Manors 073 QA (Ryann Chapman); Winston-Salem 124 (Daniel Miclausi); Winston-Salem 124 (Javier Castillo); Winston-Salem 124 ACM Area 1 (Malcolm Childress); Winston-Salem 124 ACM Area 2 (Amanda Jarvis); Winston-Salem 124 ACM Area 2 (Maria Lopez); Winston-Salem 124 QA (Amanda Jarvis); Winston-Salem 124 QA (Beth Majewski); Winston-Salem 124 QA (Mario E Montoya); Wintel (Jason Christides); Witchita 263 (Laurie E Boothe); Witchita 263 QA (Cameo Donerson); Woodend Senior Operator (Brett Walker); Woodend Senior Operator (Lauren Redman); Woonsocket 295 (Catherine Colucci); Woonsocket 295 ACM Area 1 (Jonathan Chenot); Woonsocket 295 ACM Area 2 (Ashley Brown); Woonsocket 295 QA (Michaela Perry); Works Council CSL Behring GmbH (Bernd R√∂√üer); Works Council CSL Behring GmbH (Michael Schrder (Inherited)); Works Council CSL Behring GmbH (Michael Schr√∂der); Works Council CSL Behring GmbH (Reiner D√∂nges); Works Council Chairman (Reiner D√∂nges); Works Councils (Reiner D√∂nges (Inherited)); Wuhan Accounting (Daisy Yang ); Wuhan Accounting (Daisy Yang ????); Wuhan Accounting Finance (Amy Jin ????); Wuhan Accounting Finance (Janet Jin ); Wuhan Accounting Finance (Janet Jin ????); Wuhan Administration Management (CW) (Cris Wang ????? (Inherited)); Wuhan Administration Management (Cris Wang ); Wuhan Administration Management (Cris Wang ?????); Wuhan Administrative Management and Facility Engineering (Fred Pang ?????); Wuhan Administrative Management and Facility Engineering (zcan Campinar); Wuhan Administrative Management and Facility Engineering (√ñzcan Campinar); Wuhan Admistration (Shuiping Zhang ); Wuhan Admistration (Shuiping Zhang ?????); Wuhan Bacteriological Inspection and Animal Trial (CiCi Cheng ); Wuhan Bacteriological Inspection and Animal Trial (CiCi Cheng ????); Wuhan Bioanalytical Sciences (Ming Zuo ); Wuhan Bioanalytical Sciences (Ming Zuo ????); Wuhan Bottle Washing (Weibing Chen ); Wuhan Bottle Washing (Weibing Chen ?????); Wuhan Costing and Financial Planning (Jessie Gao ); Wuhan Costing and Financial Planning (Jessie Gao ?????); Wuhan Environmental Health Safety (Ryan Mao ); Wuhan Environmental Health Safety (Ryan Mao ?????); Wuhan Equipment Maintenance (Jianming Liu ); Wuhan Equipment Maintenance (Jianming Liu ?????); Wuhan Equipment Management (Ming Cao ); Wuhan Equipment Management (Ming Cao ????); Wuhan Equipment Operations (Jun Yin ); Wuhan Equipment Operations (Jun Yin ????); Wuhan Equipment Operations and Maintenance (Rory Yang ); Wuhan Equipment Operations and Maintenance (Rory Yang ?????); Wuhan Finance (Dereck Jiang ); Wuhan Finance (Dereck Jiang ????); Wuhan Human Resources (Grace Yu ????); Wuhan Human Resources Management (Alex Yang ); Wuhan Human Resources Management (Alex Yang ?????); Wuhan Inspection (CW) (Yuemei Huang ????? (Inherited)); Wuhan Inspection (Yuemei Huang ); Wuhan Inspection (Yuemei Huang ?????); Wuhan Manufactuirng Subpackaging Line (Chenyi Guo ); Wuhan Manufactuirng Subpackaging Line (Chenyi Guo ?????); Wuhan Manufacturing Production Management (Liutao Yin ?????); Wuhan Operations (Andrew Tang); Wuhan Packaging (Min Lin ????); Wuhan Physical & Chemical Inspection (Linda Lin ); Wuhan Physical & Chemical Inspection (Linda Lin ????); Wuhan Plasma Inspection (Basin Zhao ); Wuhan Plasma Inspection (Basin Zhao ?????); Wuhan Plasma Sourcing (Lixia He (Inherited)); Wuhan Plasma Sourcing (Qin Chen ????); Wuhan Plasma Sourcing Management (CW) (Lixia He ????? (Inherited)); Wuhan Plasma Sourcing Management (Lixia He ); Wuhan Plasma Sourcing Management (Lixia He ?????); Wuhan Plasma Sourcing Management (Zhibao Qian ?????); Wuhan Plasma and Bacteriological Inspection (Haibo Cheng ); Wuhan Plasma and Bacteriological Inspection (Haibo Cheng ?????); Wuhan Procurement (Chan Liu ????); Wuhan Procurement (Steve Hu ); Wuhan Procurement (Steve Hu ?????); Wuhan Production (Vince Tian ?????); Wuhan Production Management (Zhi Luo ????); Wuhan Production Manufacturing (Elias Francis); Wuhan Production Manufacturing (Ye Xin ????); Wuhan Production Manufacturing (Zhen Wang ????); Wuhan Production Operations (Ye Xin ); Wuhan Production Operations (Ye Xin ????); Wuhan Protein Separation (Songping Xie ); Wuhan Protein Separation (Songping Xie ?????); Wuhan QA Deviation (Ning Ding ); Wuhan QA Deviation (Ning Ding ????); Wuhan QA System (Grace Zhao ); Wuhan QA System (Grace Zhao ????); Wuhan QA Validation (Daoxin Zhu ); Wuhan QA Validation (Daoxin Zhu ?????); Wuhan Quality (Dina El-Emary); Wuhan Quality (Xiaohong Wang ?????); Wuhan Quality Control Inspection (Caixiang Liu ?????); Wuhan Quality Control Ruide (Juergen Liedtke); Wuhan Quality Management (Juergen Liedtke); Wuhan Quality Management (Vivian Zhang ????); Wuhan Quality Systems and Standards (Xiangyang Xia ); Wuhan Quality Systems and Standards (Xiangyang Xia ?????); Wuhan Research and Development (Amory Wang ?????); Wuhan Ruide Compliance (Emma Ma ?????); Wuhan Ruide EIA (Shangqu Shi ?????); Wuhan Ruide Equipment (Zhenzhong Huang ?????); Wuhan Ruide Facilities (Didi Li ?????); Wuhan Ruide Facilities Maintenance (Dexue Hu ?????); Wuhan Ruide QA System & Compliance (Bismarck Huang ?????); Wuhan Ruide Wastewater Treatment (Yuanhui Wang ?????); Wuhan Sales (Jason Xu ????? (Inherited)); Wuhan Sales (Lei Huang ????); Wuhan Solution Preparation (Deqing Mei ); Wuhan Solution Preparation (Deqing Mei ?????); Wuhan Subpackaging Management (Jun Liu ); Wuhan Subpackaging Management (Jun Liu ????); Wuhan Subpackaging Operations (Xin Tian ); Wuhan Subpackaging Operations (Xin Tian ????); Wuhan Technlogy and Quality Study (Lisa Liang ); Wuhan Technlogy and Quality Study (Lisa Liang ????); Wuhan Technology Study (Shawelo Xiao ????); Wuhan Translation team (Mabel Xu ); Wuhan Translation team (Mabel Xu ????); Wuhan Ultrafiltration (Jindi Zhou ); Wuhan Ultrafiltration (Jindi Zhou ?????); Wuhan Water Preparation (Zongrong Liu ); Wuhan Water Preparation (Zongrong Liu ?????); Wyoming 173 (Joe Hicks Jr); Wyoming 173 (Stephanie Gower); Wyoming 173 ACM Area 1 (Jessica Hurlbert); Wyoming 173 ACM Area 2 (AMINA MCPHERSON); Wyoming 173 QA (Brent DeHaan); Wyoming 173 QA (Jared Kurtz); Yeadon 280 (Dominique Holland); Yeadon 280 ACM Area 1 (Therese Algeo); Yeadon 280 ACM Area 2 (TB Bailey); Yeadon 280 QA (Nikki Shaw); York 148 (Brandi Boyles); York 148 ACM Area 1 (Brandi Boyles (Inherited)); York 148 ACM Area 1 (Scottie Johnson Jr); York 148 ACM Area 2 (Stephanie Henry); York 148 QA (Caitie Golubski); York 148 QA (Greg Warren); ZPL Plasma Operations (Klaus Rolshausen (Inherited)); askHR Service ¬ñ TA Support (Anna Tassone); askHR Service ¬ñ TA Support (James Meyer); askHR Shared Services - Tier 1 APAC (Mina Kelepouris); eClinical Operations (Charles Johnson); eR&D Business Support (Simone Dierkes); eSystems (Christina Berninger); eSystems - LIMS Management (Christina Berninger (Inherited)); eSystems - LIMS Management (Stephan Degroth); nan; rzte Berlin (Dorothee Knop); rzte Bielefeld (Dorothee Knop); rzte Braunschweig (Dorothee Knop); rzte Bremen (Dorothee Knop); rzte Frankfurt (Dorothee Knop); rzte Gttingen (Dorothee Knop); rzte Kiel (Dorothee Knop); rzte Nrnberg (Dorothee Knop); support engineer (Deepak Cherian (Inherited)); support engineer (Jamshed Patuck); support engineer (Satya Dara (Inherited)); √Ñrzte (Andreas Gehrich (Inherited)); √Ñrzte (Annette Pernitzsch (Inherited)); √Ñrzte (Claudia Habenicht (Inherited)); √Ñrzte (Heike Borchert); √Ñrzte (Kirsten Scheibel (Inherited)); √Ñrzte (Natascha Bock (Inherited)); √Ñrzte (Stephani Keltsch); √Ñrzte (Sven Schuhmann (Inherited)); √Ñrzte Berlin (Dorothee Knop); √Ñrzte Bielefeld (Dorothee Knop); √Ñrzte Braunschweig (Dorothee Knop); √Ñrzte Bremen (Dorothee Knop); √Ñrzte Frankfurt (Dorothee Knop); √Ñrzte G√∂ttingen (Dorothee Knop); √Ñrzte Kiel (Dorothee Knop); √Ñrzte N√ºrnberg (Dorothee Knop)' pipeline_tag: text-classification inference: true base_model: sentence-transformers/all-mpnet-base-v2 --- # SetFit with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [SetFitHead](huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [SetFitHead](huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 15 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 2 | <ul><li>'FLSA STATUS DESCR: Exempt; Non-Exempt; nan'</li><li>'Pay Rate Type: Hourly; Hourly Salary; Hourly/Salary; Salaried; Salary; nan'</li><li>'Employee Level: Executive; Exempt professional; Non-exempt professional; Supervisor/Manager; nan'</li></ul> | | 3 | <ul><li>'Is Manager<HIDE>: N; Y; nan'</li><li>'Job Level Name: Architect and Lead/Principal Individual Contributor; Architect and¬†Lead/Principal¬†Individual Contributor; Associate/Intern; Chief Architect/Technical Fellow; Chief Operating Officer; Director; EVP; Fellow and Chief Architect; Group President/Sr EVP; Individual Contributor; People Manager/Sr Mgr; President and CEO; SVP; Senior Individual Contributor; Senior Lead/Principal Architect; Sr EVP Chief Financial Officer; Supervisor; Vice President/Counsel/Controller; nan'</li><li>'Is Manager: 0; 1; N; No; Y; Yes; nan'</li></ul> | | 5 | <ul><li>'Function/MAG: nan'</li><li>'Functional Pipeline: Communications; Corporate & Government Affairs; Corporate Services; Data Analytics; Design; Digital; Finance; General Management; Human Resources; Legal; Logistics & Services; Manufacturing & Sourcing; Marketing; Merchandising; Product Creation; Product Management; Program/Process Excellence; Retail; Sales; Sports Marketing; Strategic Planning; Technology; Unassigned; Unknown; nan'</li><li>'JobClassDescription: ACCOUNTANTS - DEGREED; ADMINISTRATIVE SUPPORT; AIDES/ORDERLIES; CLERICAL OFFICE SUPPORT; CLINICAL SUPPORT; DRIVERS; EMPLOYED PHYSICIANS; HOME HEALTH CARE - AIDES; HOME HEALTH CARE - LVN; HOME HEALTH CARE - RN; LICENSED REGISTERED NURSES; LICENSED VOCATIONAL/PRACTICAL NURSES; MANAGERS; NON-PHYSICIAN MEDICAL PRACTITIONERS; OTHER PHYSICIANS; PHYSICIAN PRACTICE - LVN; PHYSICIAN PRACTICE - RN; Physicians (With Benefits); SUPERVISORS; SUPPORT SERVICES PATIENT CARE; TECHNICAL SUPPORT; TECHNICIANS; TECHNOLOGISTS/THERAPISTS; nan'</li></ul> | | 10 | <ul><li>'Corp State: Alabama; Arizona; California; Colorado; Connecticut; Delaware; District of Columbia; Florida; Georgia; Idaho; Illinois; Indiana; Iowa; Is ; Kansas; Kentucky; Louisiana; Maine; Maryland; Massachusetts; Michigan; Milan; Minnesota; Mississippi; Missouri; Montana; Nebraska; Nevada; New Hampshire; New Jersey; New Mexico; New York; North Carolina; Ohio; Oklahoma; Oregon; Pennsylvania; Rhode Island; South Carolina; South Dakota; Tennessee; Texas; Turin; Utah; Virginia; Washington; West Virginia; Wisconsin; nan'</li><li>"location__stateprovince: ??mm?n; Aargau; Agrigento; Aguascalientes; Aichi; Alabama; Alaska; Alberta; Alessandria; Alexandria; Aosta; Arizona; Arkansas; Auckland; Baden-Wurttemberg; Bangkok; Bari; Bavaria; Beijing; Bergamo; Bologna; Brescia; British Columbia; Buenos Aires; Busan; Cagliari; California; Canterbury; Caserta; Catania; Cebu; Central Singapore; Changhua County; Chiayi City; Ciudad de Mexico; Coahuila; Colorado; Como; Connecticut; Cortes; Delaware; District of Columbia; Distrito Federal; Dubai; Estado de M√©xico; Ferrara; Firenze; Florida; Fujian; Fukuoka; Genova; Georgia; Gifu; Graubunden; Guanajuato; Guangdong; Guatemala; Haryana; Hawaii; Hawke's Bay; Hiroshima; Ho Chi Minh; Hokkaido; Hong Kong Island; Hsinchu City; Hubei; Ibaraki; Idaho; Ilan County; Illinois; Indiana; Iowa; Ishikawa; Jakarta Raya; Jalisco; Jiangsu; Johor; Kanagawa; Kansas; Kaohsiung City; Karnataka; Kentucky; Kowloon; Lecce; Liaoning; Livorno; Louisiana; Maharashtra; Maine; Managua; Manitoba; Maryland; Massachusetts; Melaka; Messina; Miaoli County; Michigan; Milano; Minnesota; Mississippi; Missouri; Montana; Monza e Brianza; Morelos; Nagano; Napoli; Nebraska; Nevada; New Hampshire; New Jersey; New Mexico; New South Wales; New Taipei City; New Territories; New York; Newfoundland and Labrador; North Carolina; North Dakota; North Rhine-Westphalia; Nova Scotia; Novara; Nuevo Le√≥n; Ohio; Oklahoma; Ontario; Oregon; Osaka; Otago; PG_Asia_CHN_01; PG_Asia_HKG_01; PI CHE - VF International; Padova; Pahang; Panam√°; Parma; Pennsylvania; Phnom Penh; Piacenza; Pingtung County; Puebla; Puerto Rico; Pulau Pinang (Penang); Quebec; Quer√©taro; Quintana Roo; Rhode Island; Roma; Saitama; Salary; San Jose; San Salvador; Santiago; Saskatchewan; Selangor; Seoul; Shandong; Shanghai; Shanxi; Shizuoka; Sichuan; South Carolina; S√£o Paulo; Tabasco; Taichung City; Tainan City; Taipei City; Tamil Nadu; Taoyuan City; Tennessee; Texas; Tianjin; Ticino; Tochigi; Tokyo; Torino; Toyama; Treviso; Trieste; Utah; Varese; Venezia; Veracruz; Vermont; Verona; Vicenza; Victoria; Virginia; Washington; Wellington; West Virginia; Wilayah Persekutuan Kuala Lumpur; Wilayah Persekutuan Putrajaya; Wisconsin; Wyoming; Yucat√°n; Zhejiang; nan"</li><li>'Home State | Province | Region: Alabama; Arkansas; Bogot√° D.C.; California; Colorado; Delaware; Distrito Federal; Eastern Cape; England; Florida; Gauteng; Georgia; Illinois; Indiana; Iowa; Kentucky; KwaZulu-Natal; Maine; Mexico State; Michigan; Minnesota; Missouri; Nevada; New Hampshire; New Jersey; New York; North Carolina; Ohio; Oregon; Pennsylvania; Puerto Rico; Santander; South Carolina; Tennessee; Texas; Valle del Cauca; Virginia; Washington; Western Cape; Wisconsin; nan'</li></ul> | | 12 | <ul><li>'Tenure Category: 0 - 3 Months; 10 - 12 Months; 10 - 15 Years; 13 - 28 Months; 15 - 20 Years; 19 - 24 Months; 2 - 3 Years; 20+ Years; 3 - 5 Years; 4 - 6 Months; 5 - 10 Years; 7 - 9 Months; nan'</li><li>'Tenure with the Company: 0-3 months; 1-2 years; 11-15 years; 16-20 years; 21-25 years; 26-30 years; 3-5 years; 31 years or more; 4-6 months; 6-10 years; 7-12 months; nan'</li><li>'TENURE - Hire: 1 - 2 years; 11 - 15 years; 16 - 20 years; 3 - 5 years; 6 - 10 years; Less than 1 year; More than 20 years; nan'</li></ul> | | 6 | <ul><li>'Location (Geographic): Argentina; Australia; Brazil; Canada; Canada - Living Sounds; China; China - Beijing; China - Suzhou; China, Beijing; China, Suzhou; Colombia; Dubai; France; Germany; Hungary; India; Israel; Italy; Japan; Kenya; Korea; Malaysia; Mexico, Matamoros; Mexico, Mexico City; New Zealand; Norway; Peru; Phillipines; Poland; Prefer not to answer; Romania; Singapore; United Kingdom; United States; nan'</li><li>'Country Name: Australia; Belgium; Brazil; Canada; Colombia; Costa Rica; France; India; Ireland; Italy; Luxembourg; Mexico; Netherlands; New Zealand; Philippines; Poland; Puerto Rico; Singapore; Spain; United Kingdom; United States of America; nan'</li><li>'Operating Company: MHPS-EDE; nan'</li></ul> | | 9 | <ul><li>"HR Site Group<HIDE>: 84 SOUTH HEALTH CENTER; ACL LABS; ADVOCATE MEDICAL GROUP; AMC Bay Area; AMC GRAFTON; AMC KENOSHA; AMC MANITOWOC COUNTY; AMC OSHKOSH; AMC SUMMIT; AMC WASHINGTON COUNTY; AMG CONTACT CENTER; APP; AURORA BAYCARE MEDICAL CENTER; AURORA CLINICAL CONTACT CENTER; AURORA LAKELAND MEDICAL CENTER; AURORA MEMORIAL HOSPITAL OF BURLINGTON; AURORA PSYCH/BEHAVIORAL HEALTH; Aurora Health Care Medical Group : GBMM; Aurora Health Care Medical Group : GMNSC; Aurora Health Care Medical Group : GMS; Aurora Health Care Medical Group : OFDL; Aurora Health Care Medical Group : OTHER; Aurora Health Care Medical Group : RKL; Aurora Health Care Medical Group : SCWNWC; Aurora Health Care Medical Group : WJ; BROMENN MEDICAL CENTER/EUREKA; CHRIST MEDICAL CENTER; CONDELL MEDICAL CENTER; CORPORATE; Children's Hospital; GOOD SAMARITAN HOSPITAL; GOOD SHEPHERD HOSPITAL; ILLINOIS MASONIC MEDICAL CENTER; LUTHERAN GENERAL HOSPITAL; POST ACUTE NETWORK; SHEBOYGAN MEMORIAL; SHERMAN HOSPITAL; SINAI MEDICAL CENTER; SOUTH SUBURBAN HOSPITAL; ST. LUKE'S SOUTH SHORE; SYSTEM ANCILLARY SERVICES; SYSTEM SUPPORT SERVICES; St. Luke's Medical Center; TRINITY HOSPITAL; WEST ALLIS MEDICAL CENTER; nan"</li><li>'Affiliation(Affiliate): 9010 Admin; BCHC; BVRMC; Cherokee Regional Medical Center; Dubuque VNA; FORT DODGE REGION; GREENE COUNTY MEDICAL CENTER; Grundy Cnty Hospital; HCF Inc; Hancock County Senior Services; IA Health Acc Care; MERITER; Memorial Hospital; Pocahontas Community Hospital; Stewart Memorial Community Hospital; Sumner Comm Hospital; UP Clinic Affiliate; UP at Home Affiliate; UPC Peoria; UPH Allen; UPH CR St Lukes; UPH Contract Svc LC; UPH Des Moines; UPH FD Trinity Hlth; UPH FD Trinity Regnl; UPH Finley; UPH Grinnell; UPH Jones Regional; UPH Marshalltown; UPH Methodist; UPH Methodist Colleg; UPH Musc Trinity; UPH Pekin; UPH Proctor; UPH QC Trinity; UPH SC St Lukes; UPH at Work; UPH at Work QC; UnityPlace; UnityPoint Health - Keokuk; Virginia Gay Hospital; nan'</li><li>'region: ACP; CALIFORNIA; CAROLINAS; CENTRAL ZONE RM; EAST PENN/DE; EAST ZONE RM; FLORIDA; GREAT PLAINS; HANGER CLINIC SHARED SVCS; HANGER RESOURCE CENTER; HEARTLAND; HOUSTON; KEYSTONE; MICHIGAN; MID ATLANTIC; MIDWEST; NATL LABS; NEW ENGLAND; NORTH ATLANTIC; NORTHWEST; NY METRO; NY/NJ; OHIO VALLEY; ROCKY MOUNTAIN; SOUTH CENTRAL; SOUTHEAST; SOUTHWEST; SPS; TEXAS; WEST ZONE RM; nan'</li></ul> | | 4 | <ul><li>'Union Status <HIDE>: I am a member of a union; I am not a member of union; nan'</li><li>"Union Code: 122; 17; 399; 420; 781; AFSCME; AFSCME Local 3279; AFSCME Local 9; ALT; ARSA; Appointed; BDN; BKV; BLE; BMWE; BOL; BRCA; BRD; BRS; BRW; CAF; CAP; CAT; CAW; CBI; CBT; CE1; CE2; CE3; CE4; CEC; CED; CEL; CEN; CEQ; CET; CFC; CFF; CFO; CLB; CMA; CMN; CNA; CNR; COL; CPA; CPE; CPL; CPO; CPT; CSC; CSE; CSU; CTM; CTS; CVT; CX1; CX2; CX3; CX4; DBS; DVR; FNK; FRE; FRS; G01; G02; G04; G05; G06; G07; G08; G09; G10; G11; G12; G13; G14; G15; G16; G17; GCH; GGI; GGO; GGR; GVL; HTC; IAM; IBBB; IBEW; IBFO; IBT Lab Asst; IBT Lab Couriers; IBT PMCA Childrens; IBW; IDA; IUOE; IW; JOA AFSCME/SEIU HCMI; KU1; KU2; Laundry Workers Loca; Local 320; Local 321; Local 363; Local 49ers; MDL; MDX; MNA; MOD; MOO; MUR; Muldraugh Compressor Station; N01; N02; N03; NON; NUHW; NUR; None ; OIL; Operating Engineers; PNT; Police; R01; R02; R04; R05; R06; R10; R11; R12; R13; R15; R16; R17; R18; R19; R20; R22; R23; R24; R25; R26; R27; R31; R32; R33; R35; R36; R37; R38; R39; R40; R41; R42; R45; R46; R47; R48; R49; R50; R52; R55; R56; R57; R58; R59; RFT; RPT; RSP; SCP; SEC; SEIU; SEIU - PCA's at RIM; SMW; SPNNUNAC; STF; Service Emp Intn'l U; TCU; TCUASR; TCU_ICTF; TEAM; TSV; Trades; U; U01; U02; U04; U05; U06; U07; U10; U14; U19; U21; U22; U23; U24; U25; U26; U32; U37; U43; U44; U52; U53; U56; U76; U78; U83; U84; U85; U91; UA3; UAW Local 889; UB7; UB9; UC3; UD7; UD8; UE3; UE5; UE9; UF1; UF2; UF3; UF4; UFCW; UG1; UG5; UN5; UN6; UN7; UN8; UN9; UNAC; UNKNOWN; UPUYC-SP; UPUYC-UP; UTUC; UTUE; UTUT; UTUY-A&S; W02; W03; W04; W05; WC5; YRK; nan"</li></ul> | | 16 | <ul><li>'Shift Question<HIDE>: Yes; nan'</li><li>'Work Schedule: 0-30 Hrs Wk; 0-38.5 Hrs Wk; 1 - 10%; 10 FR-MO; 10 M-TH; 10 M-TU TH-FR; 10 M-WE FR; 10 SU-WED; 10 TU-FR; 10 WE - SA; 11 - 20%; 1ST SHIFT; 21 - 30%; 2ND SHIFT; 31 - 40%; 37 Hrs Wk; 37.5 Hrs Wk; 38 Hrs Wk; 38.5 Hrs Wk; 39 Hrs Wk; 3RD SHIFT; 40 Hrs Wk; 41 - 50%; 41 Hrs Wk; 42.5 Hrs Wk; 44 Hrs Wk; 45 Hrs Wk; 48 Hrs Wk; 5/8 FR-SA Off; 5/8 MO-TU Off; 5/8 SU-MO Off; 5/8 TH-FR Off; 5/8 TU-WE Off; 5/8 WE-TH Off; 51 - 60%; 61 - 70%; 71 - 80%; 8 HRS 8am-5pm; 81 - 90%; 91 - 100%; KRONOS SHIFT 1; KRONOS SHIFT 2; KRONOS SHIFT 3; Mon-Fri 40 Hrs/Wk; OPS FLOOR Mo-Th-Fr; PART-TIME VERT. 60,5%; Part-time Oriz. 50%; Part-time Oriz. 60%; Part-time Oriz. 62%; Part-time Oriz. 62,5%; Part-time Oriz. 75%; Part-time Oriz. 87,5%; STANDARD 8-5; STANDARD 8:30am-5pm; STANDARD 8am - 5pm; STANDARD 9-5.30; STANDARD 9am - 6pm; STANDARD 9am-6pm; Service Tech. Field; TURNISTA; Turno PT Orizz. 75%; Turno PT Orizz. 87,5%; nan'</li><li>'Shift<HIDE>: 7; B; D; E; L; O; R; W; Z; nan'</li></ul> | | 14 | <ul><li>'What has been your COVID 19 work arrangement?<HIDE>: Furloughed/Closed Location; Office; Other; Reduced Work Schedule; Remote/Work from Home; nan'</li><li>'Ability to Work Remotely<HIDE>: My primary job role can be done remotely with little or no disruption.; My primary job role is a mix - some can be done from anywhere and some can only be done from the physical work location.; My primary job role requires me to be physically present in my workplace.; nan'</li></ul> | | 1 | <ul><li>'Race_Ethnicity: American Indian or Alaska Native (Not Hispanic or Latino) (United States of America); American Indian or Alaskan Native (United States of America); American¬†Indian¬†or¬†Alaskan¬†Native (United States of America); Asian (Not Hispanic or Latino) (United States of America); Asian (United States of America); Asian - Indian (United Kingdom); Asian - Other (United Kingdom); Asian - Pakistani (United Kingdom); Bai (China); Black - African (United Kingdom); Black - British (United Kingdom); Black - Caribbean (United Kingdom); Black or African American (Not Hispanic or Latino) (United States of America); Black or African American (United States of America); Black¬†or¬†African¬†American (United States of America); Buyei (China); Chinese (Singapore); Dai (China); Dong (China); Han (China); Hani (China); Hispanic or Latino (United States of America); Hispanic/Latino (United States of America); I do not wish to answer. (United States of America); Indian (Singapore); Li (China); Malay (Singapore); Native Hawaiian or Other Pacific Islander (Not Hispanic or Latino) (United States of America); Native Hawaiian or Other Pacific Islander (United States of America); Native¬†Hawaiian¬†or¬†Other¬†Pacific¬†Islander (United States of America); Not Declaring (United Kingdom); Not Reported; Other (Singapore); Other (United Kingdom); Tujia (China); Two or More Races (Not Hispanic or Latino) (United States of America); Two or More Races (United States of America); Two¬†or¬†More¬†Races (United States of America); White (Not Hispanic or Latino) (United States of America); White (United States of America); White - British (United Kingdom); White - Irish (United Kingdom); White - Other (United Kingdom); White - Other European (United Kingdom); Yi (China); Zhuang (China); nan'</li><li>'Which ethnicity/ethnicities do you most identify with?: Asian; Black; Hispanic or Latino; Other; Prefer not to respond; Two or More Races; White; nan'</li><li>'Ethnicity On File: 2 or more races, not Hispanc; American Indian/Alaska Nativ; Asian; Black/African American; Hispanic/Latino; Native Hawaiian/Oth Pacif Is; White; nan'</li></ul> | | 7 | <ul><li>'FM_Merger_Cd: N; Y; nan'</li><li>'Acquisition Hire<HIDE>: Acquisition Hire; Non-Acquisition Hire; nan'</li></ul> | | 8 | <ul><li>'Primary Termination Reason: Retained; Terminate Associate > Involuntary > Attendance; Terminate Associate > Involuntary > Death; Terminate Associate > Involuntary > Elimination of Position; Terminate Associate > Involuntary > Exhaustion of Leave; Terminate Associate > Involuntary > Falsification of Records; Terminate Associate > Involuntary > Gross Misconduct; Terminate Associate > Involuntary > Mutual Consent; Terminate Associate > Involuntary > Not re-new contract; Terminate Associate > Involuntary > Poor Job Performance; Terminate Associate > Involuntary > Severance; Terminate Associate > Involuntary > Tardiness; Terminate Associate > Involuntary > Violation of Rules; Terminate Associate > Involuntary > Workforce Reduction; Terminate Associate > Voluntary > Commute Time; Terminate Associate > Voluntary > Company Instability; Terminate Associate > Voluntary > Dissatisfied with Hours; Terminate Associate > Voluntary > Dissatisfied with Job; Terminate Associate > Voluntary > Dissatisfied with Management; Terminate Associate > Voluntary > Dissatisfied with Pay; Terminate Associate > Voluntary > Dissatisfied with Promotional Opportunities; Terminate Associate > Voluntary > Dissatisfied with Working Conditions; Terminate Associate > Voluntary > Failure to Return from Leave; Terminate Associate > Voluntary > Job Abandonment; Terminate Associate > Voluntary > Military Service; Terminate Associate > Voluntary > Moved; Terminate Associate > Voluntary > Other Employment; Terminate Associate > Voluntary > Personal; Terminate Associate > Voluntary > Retirement; Terminate Associate > Voluntary > Return to School; Terminate Associate > Voluntary > Severance; Terminate Associate > Voluntary > Unknown; Terminate Employee > Voluntary > Benefits; Terminate Employee > Voluntary > Career Change; Terminate Employee > Voluntary > Career Development or Advancement; Terminate Employee > Voluntary > Compensation; Terminate Employee > Voluntary > Continue Education; Terminate Employee > Voluntary > Contract End; Terminate Employee > Voluntary > Conversion; Terminate Employee > Voluntary > Dislike Company; Terminate Employee > Voluntary > Dislike Hours/Schedule; Terminate Employee > Voluntary > Dislike Supervisor; Terminate Employee > Voluntary > Dislike Work; Terminate Employee > Voluntary > Dissatisfied Career Advancement Opportunities; Terminate Employee > Voluntary > Dissatisfied with Benefits; Terminate Employee > Voluntary > Dissatisfied with Benefits Package (Health, Dental, Vision, Life, Retirement, Paid Leave, etc.); Terminate Employee > Voluntary > Dissatisfied with Career Opportunities; Terminate Employee > Voluntary > Dissatisfied with Company Policies; Terminate Employee > Voluntary > Dissatisfied with Compensation Package (Base Salary, Bonus, Commissions, etc.); Terminate Employee > Voluntary > Dissatisfied with Coworkers; Terminate Employee > Voluntary > Dissatisfied with Flexible Work Arrangements (remote work, flexible hours, etc.); Terminate Employee > Voluntary > Dissatisfied with Hours / Schedule; Terminate Employee > Voluntary > Dissatisfied with Industry; Terminate Employee > Voluntary > Dissatisfied with Job; Terminate Employee > Voluntary > Dissatisfied with Location; Terminate Employee > Voluntary > Dissatisfied with Location/Commute; Terminate Employee > Voluntary > Dissatisfied with Management; Terminate Employee > Voluntary > Dissatisfied with Manager Effectiveness; Terminate Employee > Voluntary > Dissatisfied with Organization Culture (Corporate Values, Behaviors, Norms that Guide How People Work); Terminate Employee > Voluntary > Dissatisfied with Pay; Terminate Employee > Voluntary > Dissatisfied with Travel; Terminate Employee > Voluntary > Dissatisfied with Type of Work; Terminate Employee > Voluntary > Dissatisfied with Work Conditions; Terminate Employee > Voluntary > Dissatisfied with Working Conditions; Terminate Employee > Voluntary > Dissatisfied with Worklife Balance; Terminate Employee > Voluntary > Exit Workforce; Terminate Employee > Voluntary > Failed to Return from Leave; Terminate Employee > Voluntary > Failure to Return from Leave; Terminate Employee > Voluntary > Family Obligations; Terminate Employee > Voluntary > Family Reasons; Terminate Employee > Voluntary > Health Concerns; Terminate Employee > Voluntary > Health Reasons; Terminate Employee > Voluntary > Job Abandonment; Terminate Employee > Voluntary > Job Security; Terminate Employee > Voluntary > Join Military; Terminate Employee > Voluntary > Location; Terminate Employee > Voluntary > Military Service; Terminate Employee > Voluntary > Moved; Terminate Employee > Voluntary > Mutual Agreement (inactive); Terminate Employee > Voluntary > Mutual Consent; Terminate Employee > Voluntary > Never Reported for Orientation; Terminate Employee > Voluntary > Other Employment; Terminate Employee > Voluntary > Personal - Furthering Education (inactive); Terminate Employee > Voluntary > Personal Reasons; Terminate Employee > Voluntary > Relocation; Terminate Employee > Voluntary > Resignation; Terminate Employee > Voluntary > Retirement; Terminate Employee > Voluntary > Return to School; Terminate Employee > Voluntary > Returned to School; Terminate Employee > Voluntary > Self Employment; Terminate Employee > Voluntary > Training; Terminate Employee > Voluntary > Transportation Problems; Terminate Employee > Voluntary > Unknown; Terminate Employee > Voluntary > Work Authorization Not Renewed; Terminate Employee > Voluntary > Workload; nan'</li><li>'Termed Reason: I; V; nan'</li><li>'Voluntary or Retirement<HIDE>: Retirement; Voluntary; nan'</li></ul> | | 11 | <ul><li>'Generation: 18-24 years of age; 25-34 years; 25-34 years of age; 26-35 Yrs; 26-35 years; 35-44 years; 35-44 years of age; 36-45 Yrs; 36-45 years; 45-54 years of age; 45-55 years; 46-55 Yrs; 46-55 years; 55-64 years of age; 65+ years of age; < 25 years; < 26 Yrs; < 26 years; <25 years; > 55 Yrs; > 55 years; >55 years; Baby Boomer; Baby Boomer (born 1946 ¬ñ 1964); Baby Boomers; Baby Boomers ¬ñ 1946 ¬ñ 1965; Gen X; Generation X; Generation X (born 1965 to 1980); Generation X ¬ñ 1965 ¬ñ 1980; Generation Y / Millennials ¬ñ 1981 ¬ñ 1996; Generation Z; Generation Z (born 2001 to 2015); Generation Z ¬ñ 1997 and onwards; Mature (born in 1945 or earlier); Millennial; Millennials; Millennials (born 1981 to 2000); Silent Generation; Silent Generation - 1928 ¬ñ 1945; Traditionalist; nan'</li><li>'Age Bracket: 119.4; 18-24; 18.5; 18.7; 19.1; 19.2; 19.3; 19.4; 19.5; 19.6; 19.7; 19.8; 19.83333333; 19.9; 20 - 29; 20-24; 20-30 Years; 20.3; 20.6; 20.66666667; 20.7; 20.83333333; 20.9; 21; 21.08333333; 21.1; 21.16666667; 21.2; 21.3; 21.4; 21.5; 21.6; 21.66666667; 21.7; 21.8; 21.83333333; 21.9; 22; 22.1; 22.2; 22.3; 22.33333333; 22.4; 22.41666667; 22.5; 22.58333333; 22.6; 22.66666667; 22.7; 22.75; 22.8; 22.9; 23; 23.08333333; 23.1; 23.16666667; 23.2; 23.25; 23.3; 23.33333333; 23.4; 23.41666667; 23.5; 23.58333333; 23.6; 23.7; 23.8; 23.83333333; 23.9; 23.91666667; 24; 24.1; 24.2; 24.3; 24.33333333; 24.4; 24.41666667; 24.5; 24.58333333; 24.6; 24.66666667; 24.7; 24.75; 24.8; 24.83333333; 24.9; 25; 25-30; 25-35; 25-35 ; 25.08333333; 25.1; 25.16666667; 25.2; 25.25; 25.3; 25.33333333; 25.4; 25.41666667; 25.5; 25.58333333; 25.6; 25.66666667; 25.7; 25.75; 25.8; 25.83333333; 25.9; 25.91666667; 26; 26-3; 26-35; 26.08333333; 26.1; 26.16666667; 26.2; 26.25; 26.3; 26.33333333; 26.4; 26.41666667; 26.5; 26.58333333; 26.6; 26.66666667; 26.7; 26.75; 26.8; 26.83333333; 26.9; 26.91666667; 27; 27.08333333; 27.1; 27.16666667; 27.2; 27.25; 27.3; 27.33333333; 27.4; 27.41666667; 27.5; 27.58333333; 27.6; 27.66666667; 27.7; 27.75; 27.8; 27.83333333; 27.9; 27.91666667; 28; 28.08333333; 28.1; 28.16666667; 28.2; 28.25; 28.3; 28.33333333; 28.4; 28.41666667; 28.5; 28.58333333; 28.6; 28.66666667; 28.7; 28.75; 28.8; 28.83333333; 28.9; 28.91666667; 29; 29.08333333; 29.1; 29.16666667; 29.2; 29.25; 29.3; 29.33333333; 29.4; 29.41666667; 29.5; 29.58333333; 29.6; 29.66666667; 29.7; 29.75; 29.8; 29.83333333; 29.9; 29.91666667; 30; 30 - 39; 30-40 Years; 30.08333333; 30.1; 30.16666667; 30.2; 30.25; 30.3; 30.33333333; 30.4; 30.41666667; 30.5; 30.58333333; 30.6; 30.66666667; 30.7; 30.75; 30.8; 30.83333333; 30.9; 30.91666667; 31; 31-40; 31.08333333; 31.1; 31.16666667; 31.2; 31.25; 31.3; 31.33333333; 31.4; 31.41666667; 31.5; 31.58333333; 31.6; 31.66666667; 31.7; 31.75; 31.8; 31.83333333; 31.9; 31.91666667; 32; 32.08333333; 32.1; 32.16666667; 32.2; 32.25; 32.3; 32.33333333; 32.4; 32.41666667; 32.5; 32.58333333; 32.6; 32.66666667; 32.7; 32.75; 32.8; 32.83333333; 32.9; 32.91666667; 33; 33.08333333; 33.1; 33.16666667; 33.2; 33.25; 33.3; 33.33333333; 33.4; 33.41666667; 33.5; 33.58333333; 33.6; 33.66666667; 33.7; 33.75; 33.8; 33.83333333; 33.9; 33.91666667; 34; 34.08333333; 34.1; 34.16666667; 34.2; 34.25; 34.3; 34.33333333; 34.4; 34.41666667; 34.5; 34.58333333; 34.6; 34.66666667; 34.7; 34.75; 34.8; 34.83333333; 34.9; 34.91666667; 35; 35.08333333; 35.1; 35.16666667; 35.2; 35.25; 35.3; 35.33333333; 35.4; 35.41666667; 35.5; 35.58333333; 35.6; 35.66666667; 35.7; 35.75; 35.8; 35.83333333; 35.9; 35.91666667; 36; 36-40; 36-41; 36-45; 36.08333333; 36.1; 36.16666667; 36.2; 36.25; 36.3; 36.33333333; 36.4; 36.41666667; 36.5; 36.58333333; 36.6; 36.66666667; 36.7; 36.75; 36.8; 36.83333333; 36.9; 36.91666667; 37; 37.08333333; 37.1; 37.16666667; 37.2; 37.25; 37.3; 37.33333333; 37.4; 37.41666667; 37.5; 37.58333333; 37.6; 37.66666667; 37.7; 37.75; 37.8; 37.83333333; 37.9; 37.91666667; 38; 38.08333333; 38.1; 38.16666667; 38.2; 38.25; 38.3; 38.33333333; 38.4; 38.41666667; 38.5; 38.58333333; 38.6; 38.66666667; 38.7; 38.75; 38.8; 38.83333333; 38.9; 38.91666667; 39; 39.08333333; 39.1; 39.16666667; 39.2; 39.25; 39.3; 39.33333333; 39.4; 39.41666667; 39.5; 39.58333333; 39.6; 39.66666667; 39.7; 39.75; 39.8; 39.83333333; 39.9; 39.91666667; 40; 40 - 49; 40-50 Years; 40.08333333; 40.1; 40.16666667; 40.2; 40.25; 40.3; 40.33333333; 40.4; 40.41666667; 40.5; 40.58333333; 40.6; 40.66666667; 40.7; 40.75; 40.8; 40.83333333; 40.9; 40.91666667; 41; 41-49; 41-50; 41.08333333; 41.1; 41.16666667; 41.2; 41.25; 41.3; 41.33333333; 41.4; 41.41666667; 41.5; 41.58333333; 41.6; 41.66666667; 41.7; 41.75; 41.8; 41.83333333; 41.9; 41.91666667; 42; 42.08333333; 42.1; 42.16666667; 42.2; 42.25; 42.3; 42.33333333; 42.4; 42.41666667; 42.5; 42.58333333; 42.6; 42.66666667; 42.7; 42.75; 42.8; 42.83333333; 42.9; 42.91666667; 43; 43.08333333; 43.1; 43.16666667; 43.2; 43.25; 43.3; 43.33333333; 43.4; 43.41666667; 43.5; 43.58333333; 43.6; 43.66666667; 43.7; 43.75; 43.8; 43.83333333; 43.9; 43.91666667; 44; 44.08333333; 44.1; 44.16666667; 44.2; 44.25; 44.3; 44.33333333; 44.4; 44.41666667; 44.5; 44.58333333; 44.6; 44.66666667; 44.7; 44.75; 44.8; 44.83333333; 44.9; 44.91666667; 45; 45.08333333; 45.1; 45.16666667; 45.2; 45.25; 45.3; 45.33333333; 45.4; 45.41666667; 45.5; 45.58333333; 45.6; 45.66666667; 45.7; 45.75; 45.8; 45.83333333; 45.9; 45.91666667; 46; 46-54; 46.08333333; 46.1; 46.16666667; 46.2; 46.25; 46.3; 46.33333333; 46.4; 46.41666667; 46.5; 46.58333333; 46.6; 46.66666667; 46.7; 46.75; 46.8; 46.83333333; 46.9; 46.91666667; 47; 47.08333333; 47.1; 47.16666667; 47.2; 47.25; 47.3; 47.33333333; 47.4; 47.41666667; 47.5; 47.58333333; 47.6; 47.66666667; 47.7; 47.75; 47.8; 47.83333333; 47.9; 47.91666667; 48; 48.08333333; 48.1; 48.16666667; 48.2; 48.25; 48.3; 48.33333333; 48.4; 48.41666667; 48.5; 48.58333333; 48.6; 48.66666667; 48.7; 48.75; 48.8; 48.83333333; 48.9; 48.91666667; 49; 49.08333333; 49.1; 49.16666667; 49.2; 49.25; 49.3; 49.33333333; 49.4; 49.41666667; 49.5; 49.58333333; 49.6; 49.66666667; 49.7; 49.75; 49.8; 49.83333333; 49.9; 49.91666667; 50; 50 - 59; 50-60 Years; 50-64; 50.1; 50.16666667; 50.2; 50.25; 50.3; 50.33333333; 50.4; 50.41666667; 50.5; 50.58333333; 50.6; 50.66666667; 50.7; 50.75; 50.8; 50.83333333; 50.9; 50.91666667; 51; 51+; 51.08333333; 51.1; 51.16666667; 51.2; 51.25; 51.3; 51.33333333; 51.4; 51.41666667; 51.5; 51.58333333; 51.6; 51.66666667; 51.7; 51.75; 51.8; 51.83333333; 51.9; 51.91666667; 52; 52.08333333; 52.1; 52.16666667; 52.2; 52.25; 52.3; 52.33333333; 52.4; 52.41666667; 52.5; 52.58333333; 52.6; 52.66666667; 52.7; 52.75; 52.8; 52.83333333; 52.9; 52.91666667; 53; 53.08333333; 53.1; 53.16666667; 53.2; 53.25; 53.3; 53.33333333; 53.4; 53.41666667; 53.5; 53.58333333; 53.6; 53.66666667; 53.7; 53.75; 53.8; 53.83333333; 53.9; 53.91666667; 54; 54.08333333; 54.1; 54.16666667; 54.2; 54.25; 54.3; 54.33333333; 54.4; 54.41666667; 54.5; 54.58333333; 54.6; 54.66666667; 54.7; 54.75; 54.8; 54.83333333; 54.9; 54.91666667; 55; 55+; 55.08333333; 55.1; 55.16666667; 55.2; 55.25; 55.3; 55.33333333; 55.4; 55.5; 55.58333333; 55.6; 55.66666667; 55.7; 55.75; 55.8; 55.83333333; 55.9; 55.91666667; 56; 56.08333333; 56.1; 56.16666667; 56.2; 56.25; 56.3; 56.33333333; 56.4; 56.41666667; 56.5; 56.58333333; 56.6; 56.66666667; 56.7; 56.75; 56.8; 56.83333333; 56.9; 56.91666667; 57; 57.08333333; 57.1; 57.16666667; 57.2; 57.25; 57.3; 57.4; 57.41666667; 57.5; 57.6; 57.66666667; 57.7; 57.75; 57.8; 57.83333333; 57.9; 57.91666667; 58; 58.08333333; 58.1; 58.16666667; 58.2; 58.25; 58.3; 58.33333333; 58.4; 58.5; 58.58333333; 58.6; 58.7; 58.75; 58.8; 58.83333333; 58.9; 58.91666667; 59; 59.08333333; 59.1; 59.16666667; 59.2; 59.25; 59.3; 59.33333333; 59.4; 59.41666667; 59.5; 59.58333333; 59.6; 59.7; 59.75; 59.8; 59.83333333; 59.9; 59.91666667; 6; 60; 60 and over; 60.08333333; 60.1; 60.16666667; 60.2; 60.3; 60.33333333; 60.4; 60.41666667; 60.5; 60.6; 60.7; 60.75; 60.8; 60.83333333; 60.9; 60.91666667; 61; 61.1; 61.16666667; 61.2; 61.25; 61.3; 61.4; 61.5; 61.58333333; 61.6; 61.66666667; 61.7; 61.75; 61.8; 61.83333333; 61.9; 61.91666667; 62; 62.1; 62.16666667; 62.2; 62.25; 62.3; 62.33333333; 62.4; 62.41666667; 62.5; 62.58333333; 62.6; 62.66666667; 62.7; 62.8; 62.9; 62.91666667; 63; 63.08333333; 63.1; 63.2; 63.25; 63.3; 63.4; 63.5; 63.58333333; 63.6; 63.66666667; 63.7; 63.75; 63.8; 63.9; 63.91666667; 64; 64.08333333; 64.1; 64.2; 64.33333333; 64.4; 64.5; 64.58333333; 64.6; 64.7; 64.75; 64.8; 64.9; 64.91666667; 65; 65+; 65.1; 65.16666667; 65.2; 65.3; 65.4; 65.41666667; 65.5; 65.6; 65.66666667; 65.7; 65.75; 65.8; 65.83333333; 65.9; 65.91666667; 66; 66.1; 66.2; 66.3; 66.33333333; 66.4; 66.6; 66.7; 66.75; 66.8; 67; 67.08333333; 67.1; 67.2; 67.3; 67.4; 67.5; 67.58333333; 67.6; 67.66666667; 67.7; 67.9; 68; 68.2; 68.3; 68.33333333; 68.4; 68.5; 68.66666667; 68.7; 68.91666667; 69; 69.08333333; 69.3; 69.4; 69.7; 69.8; 69.9; 70; 70.08333333; 70.1; 70.2; 70.25; 70.4; 70.6; 70.7; 71.1; 71.3; 71.4; 71.5; 71.6; 71.9; 72.16666667; 72.5; 72.6; 72.75; 72.8; 73; 73.3; 73.6; 74.16666667; 74.2; 74.4; 75.7; 77.6; 79.25; 935.7; <20; <20 Years; <=25; >=60 Years; AAB00366417; AAB10011157; Baby Boomer; F; Gen X; Gen Y / Millennial; Less than 18; M; Traditionalist; Under 20; nan'</li><li>'age_band: 18-25; 20 - 29; 26-35; 30 - 39; 36-45; 40 - 49; 46-55; 50 - 59; 56 and above; 60 and over; Under 20; nan'</li></ul> | | 0 | <ul><li>'Employee_Gender: F; M; nan'</li><li>'O_Gender: F; Female; M; Male; U; Unknown; nan'</li><li>'Gender: -; 0; 1 Individual Contributor; 10 Individual Contributor; 119; 17; 18; 19; 20; 20-29; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 30-39; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 40-49; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 50-59; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 8 Sr. Manager; 80; 81; 83; 88; 89; 9 Manager; 9 manager; 90; 91; 935; ?; Agender; Asian; Bay; Bayan; Choose not to respond; Contractor; D; DJO Export; Decline; Decline to State; Decline to answer; Decline to state; Director; Do Not Wish to Disclose; F; F ; FEMALE; FEMALE ; Female; Female ; Female ; Femenino; F√©minin; Gender; Gender Nonconforming; Gender non-conforming; Gender variant / non-conforming; I Choose Not to Self Disclose; I Choose not to Disclose; I DO NOT WISH TO SELF-IDENTIFY; I Prefer Not to Answer; I choose not to disclose; I choose not to reply; I decline to self-identify; I do not wish to Self-Identify; I prefer not to answer; I prefer not to say; I prefer to self-describe in another way:; Identity Not Listed; In Another Way; JOANNE STURGESS; JODIE FIDDES; M; M ; MALE; MASCOLINO; Make; Male; Male ; Male ; Masculin; Masculino; N; Non-Binary; Non-Binary/Third Gender; Non-binary; Non-binary / third gender; Non-binary/ third gender; Non-specific; Nonconform; None; Not Available; Not Declared; Not Declaring; Not Disclosed; Not SpeciFemaleied; Not Specifed; Not Specified; Not assigned; Not available; Not declared; Not known; Not_Declared; Not_declared; O; Other; Prefer Not To Answer; Prefer Not To Say; Prefer Not To Self-Identify; Prefer Not to Respond; Prefer Not to Say; Prefer not to answer; Prefer not to disclose; Prefer not to say; Prefer to self-describe; Reassigned; Sex; Transgender; Two or more races; U; UNKNOWN; Undeclared; Undisc; Undisclosed; Unknown; Unspecified; Unused: F; Unused: M; White; Withhold; [NONE]; f; female; m; nan; unknown; unknown '</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("svorwerk/setfit-fine-tuned-demo-class") # Run inference preds = model("Emp_FLSA: E; N; P; V; X; nan") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 2 | 135.7721 | 6076 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 5 | | 1 | 8 | | 2 | 24 | | 3 | 23 | | 4 | 2 | | 5 | 22 | | 6 | 12 | | 7 | 2 | | 8 | 6 | | 9 | 4 | | 10 | 8 | | 11 | 6 | | 12 | 8 | | 14 | 2 | | 16 | 4 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (4, 4) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 5e-06) - head_learning_rate: 0.002 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: True - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - max_length: 500 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:----------:|:-------:|:-------------:|:---------------:| | 0.0019 | 1 | 0.1915 | - | | 0.0973 | 50 | 0.0902 | - | | 0.1946 | 100 | 0.0364 | - | | 0.2918 | 150 | 0.0042 | - | | 0.3891 | 200 | 0.0012 | - | | 0.4864 | 250 | 0.0009 | - | | 0.5837 | 300 | 0.0006 | - | | 0.6809 | 350 | 0.0005 | - | | 0.7782 | 400 | 0.0011 | - | | 0.8755 | 450 | 0.0003 | - | | **0.9728** | **500** | **0.0005** | **0.0956** | | 1.0700 | 550 | 0.0002 | - | | 1.1673 | 600 | 0.0006 | - | | 1.2646 | 650 | 0.0003 | - | | 1.3619 | 700 | 0.0004 | - | | 1.4591 | 750 | 0.0002 | - | | 1.5564 | 800 | 0.0002 | - | | 1.6537 | 850 | 0.0001 | - | | 1.7510 | 900 | 0.0002 | - | | 1.8482 | 950 | 0.0002 | - | | 1.9455 | 1000 | 0.0002 | 0.0994 | | 2.0428 | 1050 | 0.0002 | - | | 2.1401 | 1100 | 0.0003 | - | | 2.2374 | 1150 | 0.0002 | - | | 2.3346 | 1200 | 0.0001 | - | | 2.4319 | 1250 | 0.0001 | - | | 2.5292 | 1300 | 0.0001 | - | | 2.6265 | 1350 | 0.0001 | - | | 2.7237 | 1400 | 0.0001 | - | | 2.8210 | 1450 | 0.0001 | - | | 2.9183 | 1500 | 0.0001 | 0.0975 | | 3.0156 | 1550 | 0.0001 | - | | 3.1128 | 1600 | 0.0001 | - | | 3.2101 | 1650 | 0.0004 | - | | 3.3074 | 1700 | 0.0002 | - | | 3.4047 | 1750 | 0.0001 | - | | 3.5019 | 1800 | 0.0002 | - | | 3.5992 | 1850 | 0.0001 | - | | 3.6965 | 1900 | 0.0001 | - | | 3.7938 | 1950 | 0.0001 | - | | 3.8911 | 2000 | 0.0001 | 0.0987 | | 3.9883 | 2050 | 0.0002 | - | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.2.2 - Transformers: 4.37.1 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
huggingtweets/trappychan_
huggingtweets
"2021-05-21T20:52:08Z"
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
--- language: en thumbnail: https://www.huggingtweets.com/trappychan_/1621630324306/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1360599607848423439/m43_441C_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">trappy-chan</div> <div style="text-align: center; font-size: 14px;">@trappychan_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from trappy-chan. | Data | trappy-chan | | --- | --- | | Tweets downloaded | 3237 | | Retweets | 784 | | Short tweets | 1006 | | Tweets kept | 1447 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2py33huo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @trappychan_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hjzlpxss) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hjzlpxss/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/trappychan_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
hivex-research/hivex-WFC-PPO-baseline-task-1-pattern-2
hivex-research
"2025-03-20T23:34:06Z"
0
0
hivex
[ "hivex", "tensorboard", "onnx", "hivex-wind-farm-control", "reinforcement-learning", "multi-agent-reinforcement-learning", "arxiv:2501.04180", "model-index", "region:us" ]
reinforcement-learning
"2024-08-29T21:29:00Z"
--- library_name: hivex original_train_name: WindFarmControl_pattern_2_task_1_run_id_0_train tags: - hivex - hivex-wind-farm-control - reinforcement-learning - multi-agent-reinforcement-learning model-index: - name: hivex-WFC-PPO-baseline-task-1-pattern-2 results: - task: type: sub-task name: avoid_damage task-id: 1 pattern-id: 2 dataset: name: hivex-wind-farm-control type: hivex-wind-farm-control metrics: - type: cumulative_reward value: 4817.209194335937 +/- 40.181277476160616 name: Cumulative Reward verified: true - type: avoid_damage_reward value: 4817.198291015625 +/- 42.24024614907985 name: Avoid Damage Reward verified: true - type: individual_performance value: 0.0 +/- 0.0 name: Individual Performance verified: true --- This model serves as the baseline for the **Wind Farm Control** environment, trained and tested on task <code>1</code> with pattern <code>2</code> using the Proximal Policy Optimization (PPO) algorithm.<br> <br> Environment: **Wind Farm Control**<br> Task: <code>1</code><br> Pattern: <code>2</code><br> Algorithm: <code>PPO</code><br> Episode Length: <code>5000</code><br> Training <code>max_steps</code>: <code>8000000</code><br> Testing <code>max_steps</code>: <code>8000000</code><br> <br> Train & Test [Scripts](https://github.com/hivex-research/hivex)<br> Download the [Environment](https://github.com/hivex-research/hivex-environments) [hivex-paper]: https://arxiv.org/abs/2501.04180
LHRuig/manuel3
LHRuig
"2025-02-02T19:54:21Z"
8
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-02-02T19:53:46Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: suit output: url: images/suit.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: manuel3 --- # manuel3 <Gallery /> ## Model description manuel3 lora ## Trigger words You should use `manuel3` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LHRuig/manuel3/tree/main) them in the Files & versions tab.
n0madic/elegantEntropy_v1.2
n0madic
"2023-07-13T09:29:46Z"
3
0
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-13T08:25:55Z"
--- license: creativeml-openrail-m ---
alirzb/S1_M1_R3_ViT_42618486
alirzb
"2024-01-06T14:32:46Z"
4
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-01-06T13:08:20Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: S1_M1_R3_ViT_42618486 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S1_M1_R3_ViT_42618486 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0015 - Accuracy: 0.9992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0039 | 1.0 | 379 | 0.0024 | 0.9992 | | 0.0041 | 2.0 | 759 | 0.0049 | 0.9984 | | 0.0001 | 3.0 | 1139 | 0.0029 | 0.9992 | | 0.0 | 4.0 | 1519 | 0.0014 | 0.9992 | | 0.0 | 4.99 | 1895 | 0.0015 | 0.9992 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
zuazo/whisper-tiny-es
zuazo
"2025-04-04T21:24:07Z"
321
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "es", "dataset:mozilla-foundation/common_voice_13_0", "arxiv:2503.23542", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-12-06T10:25:39Z"
--- language: - es license: apache-2.0 base_model: openai/whisper-tiny tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper Tiny Spanish results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_13_0 es type: mozilla-foundation/common_voice_13_0 config: es split: test args: es metrics: - name: Wer type: wer value: 19.59044631343944 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny Spanish This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_13_0 es dataset. It achieves the following results on the evaluation set: - Loss: 0.4218 - Wer: 19.5904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.75e-05 - train_batch_size: 256 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1801 | 8.0 | 1000 | 0.4318 | 22.1861 | | 0.1627 | 16.01 | 2000 | 0.4218 | 19.5904 | | 0.0691 | 24.01 | 3000 | 0.4374 | 20.7170 | | 0.0124 | 32.01 | 4000 | 0.4635 | 20.0459 | | 0.0129 | 40.02 | 5000 | 0.4568 | 20.4135 | ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3 ## Citation If you use these models in your research, please cite: ```bibtex @misc{dezuazo2025whisperlmimprovingasrmodels, title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages}, author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja}, year={2025}, eprint={2503.23542}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2503.23542}, } ``` Please, check the related paper preprint in [arXiv:2503.23542](https://arxiv.org/abs/2503.23542) for more details. ## Licensing This model is available under the [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0). You are free to use, modify, and distribute this model as long as you credit the original creators.
vierlinglukas/ppo_stickthing
vierlinglukas
"2024-01-27T09:25:50Z"
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
"2024-01-27T09:25:42Z"
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -216.04 +/- 165.88 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'vierlinglukas/ppo_stickthing' 'batch_size': 512 'minibatch_size': 128} ```
mlc-ai/DeepSeek-R1-Distill-Qwen-7B-q0f16-MLC
mlc-ai
"2025-01-21T07:59:28Z"
12
0
mlc-llm
[ "mlc-llm", "web-llm", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "region:us" ]
null
"2025-01-21T00:30:31Z"
--- library_name: mlc-llm base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B tags: - mlc-llm - web-llm --- # DeepSeek-R1-Distill-Qwen-7B-q0f16-MLC This is the [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) model in MLC format `q0f16`. The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm). ## Example Usage Here are some examples of using this model in MLC LLM. Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages). ### Chat In command line, run ```bash mlc_llm chat HF://mlc-ai/DeepSeek-R1-Distill-Qwen-7B-q0f16-MLC ``` ### REST Server In command line, run ```bash mlc_llm serve HF://mlc-ai/DeepSeek-R1-Distill-Qwen-7B-q0f16-MLC ``` ### Python API ```python from mlc_llm import MLCEngine # Create engine model = "HF://mlc-ai/DeepSeek-R1-Distill-Qwen-7B-q0f16-MLC" engine = MLCEngine(model) # Run chat completion in OpenAI API. for response in engine.chat.completions.create( messages=[{"role": "user", "content": "What is the meaning of life?"}], model=model, stream=True, ): for choice in response.choices: print(choice.delta.content, end="", flush=True) print("\n") engine.terminate() ``` ## Documentation For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
2084Collective/super-saiyan-checkpoint-250
2084Collective
"2025-03-15T00:35:38Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-15T00:35:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
devanshb26/llama2-qlora-finetunined-dolly_summarize
devanshb26
"2023-09-29T05:44:30Z"
0
0
peft
[ "peft", "region:us" ]
null
"2023-09-29T05:44:22Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
lesso18/aae92d4e-6661-41c1-860c-802b349e4355
lesso18
"2025-03-07T21:22:47Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B", "base_model:adapter:unsloth/Llama-3.2-1B", "license:llama3.2", "region:us" ]
null
"2025-03-07T20:22:54Z"
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: aae92d4e-6661-41c1-860c-802b349e4355 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-1B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 61a31c21848b7a10_train_data.json ds_type: json format: custom path: /workspace/input_data/61a31c21848b7a10_train_data.json type: field_instruction: text field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso18/aae92d4e-6661-41c1-860c-802b349e4355 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000218 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/61a31c21848b7a10_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 180 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c8b32f92-9529-4b3c-9ab3-0cb3f5be396d wandb_project: 18a wandb_run: your_name wandb_runid: c8b32f92-9529-4b3c-9ab3-0cb3f5be396d warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # aae92d4e-6661-41c1-860c-802b349e4355 This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2923 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000218 - train_batch_size: 4 - eval_batch_size: 4 - seed: 180 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 2.5581 | | 1.293 | 0.0480 | 500 | 1.2923 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
John6666/batman-anime-mix-v20-sdxl
John6666
"2024-12-23T06:55:56Z"
98
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "style", "versatility", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-12-09T04:27:45Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - style - versatility - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- Original model is [here](https://civitai.com/models/1019370?modelVersionId=1143048). This model created by [brucewayne123](https://civitai.com/user/brucewayne123).
bbytxt/869f96f8-e300-409f-b599-d02ef9e3311a
bbytxt
"2025-01-15T09:19:40Z"
12
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "region:us" ]
null
"2025-01-15T09:12:34Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: 869f96f8-e300-409f-b599-d02ef9e3311a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM2-360M bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 98491b62f8e94d4a_train_data.json ds_type: json format: custom path: /workspace/input_data/98491b62f8e94d4a_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: bbytxt/869f96f8-e300-409f-b599-d02ef9e3311a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/98491b62f8e94d4a_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f56f5ceb-4fe1-4cbf-9c82-817a7337c5a0 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f56f5ceb-4fe1-4cbf-9c82-817a7337c5a0 warmup_steps: 30 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 869f96f8-e300-409f-b599-d02ef9e3311a This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2009 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.4495 | 0.0034 | 1 | 1.6355 | | 1.3785 | 0.1715 | 50 | 1.3734 | | 1.3106 | 0.3431 | 100 | 1.2660 | | 1.3662 | 0.5146 | 150 | 1.2122 | | 1.2131 | 0.6861 | 200 | 1.2009 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
scvi-tools/tabula-sapiens-small_intestine-scanvi
scvi-tools
"2024-12-08T10:54:31Z"
0
0
scvi-tools
[ "scvi-tools", "tensorboard", "biology", "genomics", "single-cell", "model_cls_name:SCANVI", "scvi_version:1.2.0", "anndata_version:0.11.1", "modality:rna", "tissue:various", "annotated:True", "license:cc-by-4.0", "region:us" ]
null
"2023-03-15T21:38:31Z"
--- library_name: scvi-tools license: cc-by-4.0 tags: - biology - genomics - single-cell - model_cls_name:SCANVI - scvi_version:1.2.0 - anndata_version:0.11.1 - modality:rna - tissue:various - annotated:True --- ScANVI is a variational inference model for single-cell RNA-seq data that can learn an underlying latent space, integrate technical batches and impute dropouts. In addition, to scVI, ScANVI is a semi-supervised model that can leverage labeled data to learn a cell-type classifier in the latent space and afterward predict cell types of new data. The learned low-dimensional latent representation of the data can be used for visualization and clustering. scANVI takes as input a scRNA-seq gene expression matrix with cells and genes as well as a cell-type annotation for a subset of cells. We provide an extensive [user guide](https://docs.scvi-tools.org/en/1.2.0/user_guide/models/scanvi.html). - See our original manuscript for further details of the model: [scANVI manuscript](https://www.embopress.org/doi/full/10.15252/msb.20209620). - See our manuscript on [scvi-hub](https://www.biorxiv.org/content/10.1101/2024.03.01.582887v2) how to leverage pre-trained models. This model can be used for fine tuning on new data using our Arches framework: [Arches tutorial](https://docs.scvi-tools.org/en/1.0.0/tutorials/notebooks/scarches_scvi_tools.html). # Model Description Tabula Sapiens is a benchmark, first-draft human cell atlas of nearly 500,000 cells from 24 organs of 15 normal human subjects. # Metrics We provide here key performance metrics for the uploaded model, if provided by the data uploader. <details> <summary><strong>Coefficient of variation</strong></summary> The cell-wise coefficient of variation summarizes how well variation between different cells is preserved by the generated model expression. Below a squared Pearson correlation coefficient of 0.4 , we would recommend not to use generated data for downstream analysis, while the generated latent space might still be useful for analysis. **Cell-wise Coefficient of Variation**: | Metric | Training Value | Validation Value | |-------------------------|----------------|------------------| | Mean Absolute Error | 1.86 | 2.00 | | Pearson Correlation | 0.81 | 0.78 | | Spearman Correlation | 0.66 | 0.60 | | R² (R-Squared) | 0.50 | 0.43 | The gene-wise coefficient of variation summarizes how well variation between different genes is preserved by the generated model expression. This value is usually quite high. **Gene-wise Coefficient of Variation**: | Metric | Training Value | |-------------------------|----------------| | Mean Absolute Error | 3.56 | | Pearson Correlation | 0.87 | | Spearman Correlation | 0.96 | | R² (R-Squared) | 0.60 | </details> <details> <summary><strong>Differential expression metric</strong></summary> The differential expression metric provides a summary of the differential expression analysis between cell types or input clusters. We provide here the F1-score, Pearson Correlation Coefficient of Log-Foldchanges, Spearman Correlation Coefficient, and Area Under the Precision Recall Curve (AUPRC) for the differential expression analysis using Wilcoxon Rank Sum test for each cell-type. **Differential expression**: | Index | gene_f1 | lfc_mae | lfc_pearson | lfc_spearman | roc_auc | pr_auc | n_cells | | --- | --- | --- | --- | --- | --- | --- | --- | | CD4-positive, alpha-beta T cell | 0.92 | 1.78 | 0.61 | 0.87 | 0.14 | 0.83 | 5257.00 | | CD8-positive, alpha-beta T cell | 0.92 | 2.70 | 0.62 | 0.81 | 0.18 | 0.74 | 2231.00 | | enterocyte of epithelium of small intestine | 0.95 | 2.32 | 0.70 | 0.91 | 0.41 | 0.88 | 1100.00 | | B cell | 0.91 | 3.52 | 0.68 | 0.73 | 0.30 | 0.76 | 441.00 | | plasma cell | 0.87 | 3.22 | 0.69 | 0.78 | 0.23 | 0.86 | 303.00 | | small intestine goblet cell | 0.86 | 3.34 | 0.68 | 0.82 | 0.45 | 0.84 | 300.00 | | paneth cell of epithelium of small intestine | 0.89 | 3.54 | 0.71 | 0.80 | 0.44 | 0.84 | 177.00 | | intestinal tuft cell | 0.88 | 4.05 | 0.66 | 0.69 | 0.39 | 0.79 | 146.00 | | transit amplifying cell of small intestine | 0.82 | 3.67 | 0.69 | 0.77 | 0.41 | 0.82 | 127.00 | | fibroblast | 0.75 | 3.46 | 0.74 | 0.86 | 0.58 | 0.90 | 88.00 | | intestinal crypt stem cell of small intestine | 0.81 | 3.42 | 0.74 | 0.84 | 0.51 | 0.87 | 71.00 | | mast cell | 0.84 | 4.23 | 0.64 | 0.61 | 0.32 | 0.80 | 65.00 | | neutrophil | 0.75 | 3.99 | 0.71 | 0.73 | 0.43 | 0.81 | 64.00 | | monocyte | 0.78 | 4.24 | 0.71 | 0.77 | 0.47 | 0.86 | 61.00 | | gut endothelial cell | 0.61 | 5.45 | 0.63 | 0.64 | 0.39 | 0.79 | 15.00 | | intestinal enteroendocrine cell | 0.56 | 5.15 | 0.62 | 0.59 | 0.34 | 0.69 | 12.00 | </details> # Model Properties We provide here key parameters used to setup and train the model. <details> <summary><strong>Model Parameters</strong></summary> These provide the settings to setup the original model: ```json { "n_hidden": 128, "n_latent": 20, "n_layers": 3, "dropout_rate": 0.05, "dispersion": "gene", "gene_likelihood": "nb", "linear_classifier": false, "latent_distribution": "normal", "use_batch_norm": "none", "use_layer_norm": "both", "encode_covariates": true } ``` </details> <details> <summary><strong>Setup Data Arguments</strong></summary> Arguments passed to setup_anndata of the original model: ```json { "labels_key": "cell_ontology_class", "unlabeled_category": "unknown", "layer": null, "batch_key": "donor_assay", "size_factor_key": null, "categorical_covariate_keys": null, "continuous_covariate_keys": null, "use_minified": false } ``` </details> <details> <summary><strong>Data Registry</strong></summary> Registry elements for AnnData manager: | Registry Key | scvi-tools Location | |-------------------|--------------------------------------| | X | adata.X | | batch | adata.obs['_scvi_batch'] | | labels | adata.obs['_scvi_labels'] | | latent_qzm | adata.obsm['scanvi_latent_qzm'] | | latent_qzv | adata.obsm['scanvi_latent_qzv'] | | minify_type | adata.uns['_scvi_adata_minify_type'] | | observed_lib_size | adata.obs['observed_lib_size'] | - **Data is Minified**: False </details> <details> <summary><strong>Summary Statistics</strong></summary> | Summary Stat Key | Value | |--------------------------|-------| | n_batch | 2 | | n_cells | 10458 | | n_extra_categorical_covs | 0 | | n_extra_continuous_covs | 0 | | n_labels | 17 | | n_latent_qzm | 20 | | n_latent_qzv | 20 | | n_vars | 3000 | </details> <details> <summary><strong>Training</strong></summary> <!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make sure to provide this field if you want users to be able to access your training data. See the scvi-tools documentation for details. --> **Training data url**: Not provided by uploader If provided by the original uploader, for those interested in understanding or replicating the training process, the code is available at the link below. **Training Code URL**: https://github.com/YosefLab/scvi-hub-models/blob/main/src/scvi_hub_models/TS_train_all_tissues.ipynb </details> # References The Tabula Sapiens Consortium. The Tabula Sapiens: A multiple-organ, single-cell transcriptomic atlas of humans. Science, May 2022. doi:10.1126/science.abl4896
Finnish-NLP/t5-large-nl36-finnish
Finnish-NLP
"2022-10-02T15:59:52Z"
10
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "t5", "text2text-generation", "finnish", "t5x", "seq2seq", "fi", "dataset:Finnish-NLP/mc4_fi_cleaned", "dataset:wikipedia", "arxiv:1910.10683", "arxiv:2002.05202", "arxiv:2109.10686", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2022-04-19T06:06:16Z"
--- language: - fi license: apache-2.0 tags: - finnish - t5 - t5x - seq2seq datasets: - Finnish-NLP/mc4_fi_cleaned - wikipedia inference: false --- # T5-large-nl36 for Finnish Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in [this paper](https://arxiv.org/abs/1910.10683) and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer). **Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text. ## Model description T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format. Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts. More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language. This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining: - GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202) - Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning - Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks - No parameter sharing between embedding and classifier layer This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially. This model uses the [t5-efficient-large-nl36](https://huggingface.co/google/t5-efficient-large-nl36) architecture's layer depth which means both the encoder and the decoder have 36 transformer layers compared to the original T5 "large" model's architecture of 24 transformer layers. In total, this model has 1425 million parameters. ## Intended uses & limitations This model was only pretrained in a self-supervised way excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike the Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision so fine-tune them with full fp32 precision. You can also find more fine-tuning tips from [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example. ### How to use Here is how to use this model in PyTorch: ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish") model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-large-nl36-finnish") ``` and in TensorFlow: ```python from transformers import T5Tokenizer, TFT5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish") model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-large-nl36-finnish", from_pt=True) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data This Finnish T5 model was pretrained on the combination of six datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text. ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish. ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1.87M steps with a batch size of 32 (in total 31B tokens). The optimizer used was a AdaFactor with learning rate warmup for 10K steps with a constant learning rate of 1e-3, and then an inverse square root decay (exponential decay) of the learning rate after. Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere). ## Evaluation results Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. When fine-tuned on those datasets, this model (the seventh row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts: | | Model parameters | Yle News accuracy | Eduskunta accuracy | |-------------------------------------------------------|------------------|---------------------|----------------------| |Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 | |Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 | |Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 | |Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 | |Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 | |Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** | |Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 | Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification: | | Model parameters | Yle News accuracy | Eduskunta accuracy | |-------------------------------------------------------|------------------|---------------------|----------------------| |google/mt5-small | 301 million |91.51 |64.10 | |google/mt5-base | 583 million |92.71 |68.40 | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
ArchitaRay/my_awesome_opus_books_model
ArchitaRay
"2022-12-23T08:47:05Z"
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:opus_books", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-12-23T07:16:05Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - opus_books model-index: - name: my_awesome_opus_books_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset. It achieves the following results on the evaluation set: - Loss: 1.5494 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.7524 | 1.0 | 6355 | 1.5629 | | 1.7382 | 2.0 | 12710 | 1.5494 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
sappho192/jesc-ja-en-translator
sappho192
"2024-06-13T15:02:33Z"
267
1
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "encoder-decoder", "text2text-generation", "translation", "ja", "en", "arxiv:1710.10639", "license:mit", "autotrain_compatible", "region:us" ]
translation
"2024-02-12T05:50:05Z"
--- license: mit language: - ja - en pipeline_tag: translation inference: false --- # Japanese to English translator Japanese to English translator model based on [EncoderDecoderModel](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)([bert-japanese](https://huggingface.co/cl-tohoku/bert-base-japanese)+[GPT2](https://huggingface.co/openai-community/gpt2)) # Usage ## Demo Please visit https://huggingface.co/spaces/sappho192/jesc-ja-en-translator-demo ## Dependencies (PyPI) - torch - transformers - fugashi - unidic-lite ## Inference ```Python import transformers import torch encoder_model_name = "cl-tohoku/bert-base-japanese-v2" decoder_model_name = "openai-community/gpt2" src_tokenizer = transformers.BertJapaneseTokenizer.from_pretrained(encoder_model_name) trg_tokenizer = transformers.PreTrainedTokenizerFast.from_pretrained(decoder_model_name) model = transformers.EncoderDecoderModel.from_pretrained("sappho192/jesc-ja-en-translator") def translate(text_src): embeddings = src_tokenizer(text_src, return_attention_mask=False, return_token_type_ids=False, return_tensors='pt') embeddings = {k: v for k, v in embeddings.items()} output = model.generate(**embeddings, max_length=512)[0, 1:-1] text_trg = trg_tokenizer.decode(output.cpu()) return text_trg texts = [ "逃げろ!", # Should be "run!" "初めまして.", # "nice to meet you." "よろしくお願いします.", # "thank you." "夜になりました", # "and then it got dark." "ご飯を食べましょう." # "let's eat." ] for text in texts: print(translate(text)) print() ``` # Dataset The dataset used to train the model is JESC(Japanese-English Subtitle Corpus). Its license is [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/). All data information can be accessed through following links: - Dataset link: https://nlp.stanford.edu/projects/jesc/ - Paper link: https://arxiv.org/abs/1710.10639 - Github link: https://github.com/rpryzant/JESC - Bibtex: ```bibtex @ARTICLE{pryzant_jesc_2017, author = {{Pryzant}, R. and {Chung}, Y. and {Jurafsky}, D. and {Britz}, D.}, title = "{JESC: Japanese-English Subtitle Corpus}", journal = {ArXiv e-prints}, archivePrefix = "arXiv", eprint = {1710.10639}, keywords = {Computer Science - Computation and Language}, year = 2017, month = oct, } ```
robiual-awal/50b250e1-f266-48db-a73b-76db8b1173b1
robiual-awal
"2025-02-22T23:24:01Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-3-8b-Instruct", "base_model:adapter:unsloth/llama-3-8b-Instruct", "license:llama3", "region:us" ]
null
"2025-02-22T20:10:40Z"
--- library_name: peft license: llama3 base_model: unsloth/llama-3-8b-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 50b250e1-f266-48db-a73b-76db8b1173b1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 50b250e1-f266-48db-a73b-76db8b1173b1 This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6172 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Hossein-Bodaghi/CA_Market
Hossein-Bodaghi
"2022-12-11T18:05:14Z"
0
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2022-12-11T18:05:12Z"
--- license: cc-by-nc-sa-4.0 ---
RichardErkhov/openai-community_-_gpt2-8bits
RichardErkhov
"2024-05-01T22:06:24Z"
4
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-04-17T09:18:47Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2 - bnb 8bits - Model creator: https://huggingface.co/openai-community/ - Original model: https://huggingface.co/openai-community/gpt2/ Original model description: --- language: en tags: - exbert license: mit --- # GPT-2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. This is the **smallest** version of GPT-2, with 124M parameters. **Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl) ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."}, {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"}, {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"}, {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"}, {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = TFGPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("The White man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The White man worked as a mannequin for'}, {'generated_text': 'The White man worked as a maniser of the'}, {'generated_text': 'The White man worked as a bus conductor by day'}, {'generated_text': 'The White man worked as a plumber at the'}, {'generated_text': 'The White man worked as a journalist. He had'}] >>> set_seed(42) >>> generator("The Black man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The Black man worked as a man at a restaurant'}, {'generated_text': 'The Black man worked as a car salesman in a'}, {'generated_text': 'The Black man worked as a police sergeant at the'}, {'generated_text': 'The Black man worked as a man-eating monster'}, {'generated_text': 'The Black man worked as a slave, and was'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. ## Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1,17 | 37.50 | 75.20 | ### BibTeX entry and citation info ```bibtex @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` <a href="https://huggingface.co/exbert/?model=gpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
TheBloke/LoKuS-13B-GGML
TheBloke
"2023-09-27T13:02:22Z"
5
1
transformers
[ "transformers", "llama", "base_model:JoSw-14/LoKuS-13B", "base_model:finetune:JoSw-14/LoKuS-13B", "license:llama2", "region:us" ]
null
"2023-08-31T16:50:46Z"
--- license: llama2 model_name: LoKuS 13B inference: false model_creator: Jonathan Swarp model_link: https://huggingface.co/JoSw-14/LoKuS-13B model_type: llama quantized_by: TheBloke base_model: JoSw-14/LoKuS-13B --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # LoKuS 13B - GGML - Model creator: [Jonathan Swarp](https://huggingface.co/JoSw-14) - Original model: [LoKuS 13B](https://huggingface.co/JoSw-14/LoKuS-13B) ## Description This repo contains GGML format model files for [Jonathan Swarp's LoKuS 13B](https://huggingface.co/JoSw-14/LoKuS-13B). ### Important note regarding GGML files. The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support. Please use the GGUF models instead. ### About GGML GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling. * [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LoKuS-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LoKuS-13B-GGUF) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/LoKuS-13B-GGML) * [Jonathan Swarp's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/JoSw-14/LoKuS-13B) ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- compatibility_ggml start --> ## Compatibility These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023. For support with latest llama.cpp, please use GGUF files instead. The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future. ## Explanation of the new k-quant methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [lokus-13b.ggmlv3.Q2_K.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q2_K.bin) | Q2_K | 2 | 5.74 GB| 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | [lokus-13b.ggmlv3.Q3_K_S.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q3_K_S.bin) | Q3_K_S | 3 | 5.87 GB| 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | [lokus-13b.ggmlv3.Q3_K_M.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q3_K_M.bin) | Q3_K_M | 3 | 6.53 GB| 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | [lokus-13b.ggmlv3.Q3_K_L.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q3_K_L.bin) | Q3_K_L | 3 | 7.14 GB| 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | [lokus-13b.ggmlv3.Q4_0.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q4_0.bin) | Q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. | | [lokus-13b.ggmlv3.Q4_K_S.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q4_K_S.bin) | Q4_K_S | 4 | 7.56 GB| 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | [lokus-13b.ggmlv3.Q4_K_M.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q4_K_M.bin) | Q4_K_M | 4 | 8.06 GB| 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | [lokus-13b.ggmlv3.Q4_1.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q4_1.bin) | Q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | [lokus-13b.ggmlv3.Q5_0.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q5_0.bin) | Q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | [lokus-13b.ggmlv3.Q5_K_S.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q5_K_S.bin) | Q5_K_S | 5 | 9.14 GB| 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | [lokus-13b.ggmlv3.Q5_K_M.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q5_K_M.bin) | Q5_K_M | 5 | 9.40 GB| 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | [lokus-13b.ggmlv3.Q5_1.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q5_1.bin) | Q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | [lokus-13b.ggmlv3.Q6_K.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q6_K.bin) | Q6_K | 6 | 10.83 GB| 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization | | [lokus-13b.ggmlv3.Q8_0.bin](https://huggingface.co/TheBloke/LoKuS-13B-GGML/blob/main/lokus-13b.ggmlv3.Q8_0.bin) | Q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `llama.cpp` Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier. For compatibility with latest llama.cpp, please use GGUF files instead. ``` ./main -t 10 -ngl 32 -m lokus-13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas\n\n### Response:" ``` Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Jonathan Swarp's LoKuS 13B
mirella-guenther/openai-whisper-large-torgo
mirella-guenther
"2024-05-24T11:18:27Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-05-24T11:18:22Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ensec/Llama3-70B-EnSecAI-Ru-Chat
ensec
"2024-08-28T13:07:42Z"
15
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ru", "en", "base_model:meta-llama/Llama-3.1-70B", "base_model:finetune:meta-llama/Llama-3.1-70B", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-08-05T12:04:05Z"
--- license: llama3 language: - ru - en base_model: meta-llama/Meta-Llama-3.1-70B library_name: transformers --- ## Model Information This model was fine-tuned in Russian using publicly available SFT and DPO dataset and achieves superior performance in understanding, generating, and interacting in Russian compared to original Llama 3 70B Instruct. ## Benchmarks Llama3-70B-EnSecAI-Ru-Chat outperforms the original Llama-3-70B-Instruct on the MERA benchmark by 2.4% (57.0 vs 54.6). Moreover, it outperforms all other Russian-language models present in this benchmark. <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> <td><strong>Metric</strong> </td> <td><strong>MTS AI Chat Medium</strong> </td> <td><strong>GigaChat-Pro</strong> </td> <td><strong>Llama 3 70B Instruct</strong> </td> <td><strong>Llama 3 70B EnSecAI</strong> </tr> <tr> <td rowspan="3" >World Knowledge </td> <td>CheGeKa <td>F1 / EM </td> <td>0.05 / 0.022 </td> <td>0.104 / 0 </td> <td>0.071 / 0 </td> <td> <strong>0.305 / 0.231</strong> </tr> <tr> <td>ruOpenBookQA <td>Avg. F1 / acc </td> <td>0.813 / 0.813 </td> <td>0.873 / 0.872 </td> <td>0.939 / 0.94 </td> <td>0.932 / 0.932 </tr> <tr> <td>ruWorldTree <td>Avg. F1 / acc </td> <td>0.872 / 0.872 </td> <td>0.939 / 0.939 </td> <td>0.973 / 0.973 </td> <td>0.986 / 0.986 </tr> <tr> <td rowspan="4" >Reasoning </td> <td>MultiQ <td>F1-score/EM </td> <td>0.247 / 0.171 </td> <td>0.369 / 0.247 </td> <td>0.566 / 0.414 </td> <td>0.541 / 0.421 </tr> <tr> <td>ruMMLU <td>acc </td> <td>0.704 </td> <td>0.816 </td> <td>0.856 </td> <td>0.847 </tr> <tr> <td>ruTiE <td>acc </td> <td>0.674 </td> <td>0.791 </td> <td>0.8 </td> <td>0.827 </tr> <tr> <td>RWSD <td>acc </td> <td>0.665 </td> <td>0.585 </td> <td>0.623 </td> <td>0.676 </tr> <tr> <td>Common Sense </td> <td>PARus <td>acc </td> <td>0.884 </td> <td>0.884 </td> <td>0.918 </td> <td>0.926 </td> </tr> <tr> <td rowspan="4" >Math </td> <td>MathLogicQA <td>acc </td> <td>0.589 </td> <td>0.467 </td> <td>0.581 </td> <td>0.571 </tr> <tr> <td>ruModAr <td>acc </td> <td>0.949 </td> <td>0.866 </td> <td>0.736 </td> <td>0.708 </tr> <tr> <td>ruMultiAr <td>acc </td> <td>0.337 </td> <td>0.273 </td> <td>0.349 </td> <td>0.357 </td> </tr> <tr> <td>SimpleAr <td>acc </td> <td>0.986 </td> <td>0.971 </td> <td>0.999 </td> <td>0.997 </td> </tr> <tr> <td>Code </td> <td>ruHumanEval <td>pass@1 / pass@5 / pass@10 </td> <td>0.023 / 0.113 / 0.226 </td> <td>0.013 / 0.064 / 0.128 </td> <td>0.039 / 0.198 / 0.396 </td> <td><strong>0.048 / 0.243 / 0.487</strong> </td> </tr> <tr> <td>Average </td> <td>MERA <td> </td> <td>0.536 </td> <td>0.537 </td> <td>0.546 </td> <td><strong>0.57</strong> </td> </tr> </table> It is particularly worth noting the significant progress on the CheGeKa dataset. Although the data we used for further training did not include examples from this or similar games, we observed more than a 4x increase in the F1 score (0.071 -> 0.305). Additionally, the EM (Exact Match) metric improved from 0 to 0.231. On this dataset, our model significantly outperforms other Russian-language models as well. We have also significantly reduced the number of switches from Russian to English. In our internal benchmark, the original model switched 32% of the time, while the current model does so only 0.7% of the time. ## Example Usage ```python import os import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_id = 'ensec/Llama3-70B-EnSecAI-Ru-Chat' tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() instruction = "Каковы основные причины начала Второй мировой войны?" messages = [ {"role": "user", "content": f"{instruction}"} ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=2048, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9 ) print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)) ``` ``` Вторая мировая война началась из-за сложного взаимодействия политических, экономических, идеологических и военных факторов. Некоторые из основных причин включают: 1. Агрессивная политика нацистской Германии и фашистской Италии: Обе страны, возглавляемые Адольфом Гитлером и Бенито Муссолини соответственно, хотели расширить свои территории и установить идеологическое господство. Они начали вторгаться в соседние страны и нарушать международные соглашения. 2. Версальский мирный договор: Версальский мирный договор, подписанный после Первой мировой войны, был чрезвычайно строгим по отношению к Германии. Это привело к экономическим трудностям, росту национализма и антипатию к международному сообществу, что в конечном итоге способствовало подъему нацистов. 3. Экономический кризис: Глобальный экономический кризис, известный как Великая депрессия, начавшийся в 1929 году, привел к массовой безработице и экономическим трудностям во многих странах. Это создало благоприятную среду для экстремистских движений и авторитарных режимов. 4. Агрессия Японии: Япония начала расширяться на Дальний Восток, вторгаясь в Маньчжурию, Корею и Китай. Это привело к конфликтам с западными странами, которые пытались поддержать политику не вмешательства. 5. Ослабление Лиги Наций: Лига Наций была международной организацией, созданной после Первой мировой войны для поддержания мира и предотвращения войны. Однако из-за отсутствия военной власти и неэффективности в принятии решений Лига Наций не смогла предотвратить агрессию Германии, Италии и Японии. 6. Политика умиротворения: Многие страны, включая Великобританию и Францию, пытались умиротворить нацистскую Германию, надеясь на то, что Гитлер будет удовлетворен и прекратит агрессию. Однако это только поощряло его к дальнейшим захватам. 7. Идеологические различия: Нацизм, фашизм и коммунизм были идеологиями, которые противостояли либеральной демократии и капитализму. Это привело к идеологическим конфликтам и политическим разногласиям, которые в конечном итоге привели к войне. Все эти факторы взаимодействовали и усугублялись, что в конечном итоге привело к началу Второй мировой войны. ``` ### Citation If you use this model in your work, please cite it as follows: ``` @misc {kostyumov_llama70b_ensec_2024, author = { {Vasily Kostyumov, Bulat Nutfullin, Oleg Pilipenko} }, title = { Llama3-70B-EnSecAI-Ru-Chat }, year = 2024, url = { https://huggingface.co/ensec/Llama3-70B-EnSecAI-Ru-Chat }, publisher = { Hugging Face } } ``` ### Contact For further questions or issues, please reach out at [email protected] or open an issue on the Hugging Face model page.
jichuanh/Taxi-V3
jichuanh
"2025-03-25T23:26:39Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2025-03-25T23:26:37Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-V3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.78 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="jichuanh/Taxi-V3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
krytonguard/bert-base-cased-10M
krytonguard
"2024-11-08T06:47:42Z"
9
0
transformers
[ "transformers", "safetensors", "bert", "text-generation", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-08T05:16:39Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-10M results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-10M This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.0074 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.9986 | 1.0 | 3062 | 7.0212 | | 6.9955 | 2.0 | 6124 | 7.0113 | | 6.9883 | 3.0 | 9186 | 7.0074 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.3
AmineAmira/mistralai-7B-v01-based-finetuned-using-ludwig-with-samsum-T4-sharded-4bit-notmerged
AmineAmira
"2023-11-13T09:28:37Z"
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:alexsherstinsky/Mistral-7B-v0.1-sharded", "base_model:adapter:alexsherstinsky/Mistral-7B-v0.1-sharded", "region:us" ]
null
"2023-11-13T09:28:35Z"
--- library_name: peft base_model: alexsherstinsky/Mistral-7B-v0.1-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2.dev0 ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2.dev0
RayneAmes/blastoise_v3
RayneAmes
"2025-02-12T19:52:41Z"
0
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-02-12T19:49:53Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tyfeng1997/Llama3.2-1B-Open-R1-Distill
tyfeng1997
"2025-02-09T23:17:36Z"
61
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:HuggingFaceH4/Bespoke-Stratos-17k", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-09T02:56:32Z"
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: HuggingFaceH4/Bespoke-Stratos-17k library_name: transformers model_name: Llama3.2-1B-Open-R1-Distill tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Llama3.2-1B-Open-R1-Distill This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [HuggingFaceH4/Bespoke-Stratos-17k](https://huggingface.co/datasets/HuggingFaceH4/Bespoke-Stratos-17k) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "tyfeng1997/Llama3.2-1B-Open-R1-Distill" model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda") tokenizer = AutoTokenizer.from_pretrained(model_id) # Prepare the messages messages = [ { "role": "system", "content": """Your role as an assistant involves thoroughly exploring questions through a systematic long thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution. In the Thought section, detail your reasoning process using the specified format: <|begin_of_thought|> {thought with steps separated with '\n\n'} <|end_of_thought|> Each step should include detailed considerations such as analisying questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The solution should remain a logical, accurate, concise expression style and detail necessary step needed to reach the conclusion, formatted as follows: <|begin_of_solution|> {final formatted, precise, and clear solution} <|end_of_solution|> Now, try to solve the following question through the above guidelines:""" }, { "role": "user", "content": """A regular hexagon can be divided into six equilateral triangles. If the perimeter of one of the triangles is 21 inches, what is the perimeter, in inches, of the regular hexagon?""" } ] # Apply chat template prompt = tokenizer.apply_chat_template(messages, tokenize=False) # Tokenize the prompt inputs = tokenizer(prompt, return_tensors="pt").to(model.device) # Generate outputs = model.generate( inputs.input_ids, max_new_tokens=10000, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode and print the response response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ```text <|begin_of_thought|> Okay, so I need to figure out the perimeter of a regular hexagon that can be divided into six equilateral triangles. The perimeter of one of these triangles is given as 21 inches. Hmm, let me think. First, I remember that a regular hexagon can be divided into six equilateral triangles. That's a good starting point. If I can find the perimeter of one of these triangles, I can multiply that by six to get the perimeter of the hexagon. But how do I find the perimeter of an equilateral triangle? I know that the perimeter of an equilateral triangle is three times the length of one of its sides. Since the perimeter of the triangle is 21 inches, that means each side is 21 divided by three, which is 7 inches. So, each side of the equilateral triangle is 7 inches. Now, since the hexagon is made up of six such equilateral triangles, if each triangle has a side of 7 inches, then the perimeter of the hexagon would be six times that. So, 6 * 7 = 42 inches. That seems straightforward. But let me double-check my reasoning to make sure I didn't make any mistakes. If I divide the hexagon into six equilateral triangles, each triangle has a side of 7 inches. The perimeter of the triangle is 21 inches. To find the side length, I divided 21 by 3, which gives 7. Then, the perimeter of the hexagon is 6 times that, which is 42. Yep, that seems correct. Wait, but just to make sure I didn't miss anything. Let me visualize the hexagon and the six equilateral triangles. If each triangle has a side of 7 inches, then the perimeter of the hexagon would be the sum of the perimeters of all six triangles. Each triangle is an equilateral triangle, so each has three sides. So, the perimeter of one triangle is 21 inches, so the perimeter of one side of the triangle is 21 / 3 = 7 inches. Then, the perimeter of the hexagon is 6 times that, which is 6 * 7 = 42 inches. That seems correct. I think that's solid. No mistakes here. So, the perimeter of the regular hexagon is 42 inches. That makes sense because if you divide the hexagon into six equilateral triangles, each triangle's perimeter is 21 inches, so the total perimeter of the hexagon is 6 times that, which is 42 inches. <|end_of_thought|> <|begin_of_solution|> The perimeter of a regular hexagon that can be divided into six equilateral triangles is calculated by multiplying the perimeter of one equilateral triangle by six. 1. **Perimeter of an equilateral triangle**: The perimeter of an equilateral triangle is three times the length of one side. Given the perimeter of the triangle is 21 inches, each side is \(\frac{21}{3} = 7\) inches. 2. **Perimeter of the hexagon**: Since the hexagon is made up of six equilateral triangles, the perimeter of the hexagon is \(6 \times 7 = 42\) inches. Thus, the perimeter of the regular hexagon is \(\boxed{42}\) inches. <|end_of_solution|> ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bofeng1997-ty/huggingface/runs/4gwo6izi) This model was trained with SFT. ### Framework versions - TRL: 0.15.0.dev0 - Transformers: 4.49.0.dev0 - Pytorch: 2.5.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ### Note ***This model was only trained for 1 epoch, so the MATH results are not good.*** ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ThePianist/poca-SoccerTwos
ThePianist
"2023-03-24T04:08:23Z"
6
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
"2023-03-24T04:08:16Z"
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: ThePianist/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
mradermacher/MobileLLM-1.5B-GGUF
mradermacher
"2025-01-14T09:03:03Z"
260
0
transformers
[ "transformers", "gguf", "en", "base_model:facebook/MobileLLM-1.5B", "base_model:quantized:facebook/MobileLLM-1.5B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2025-01-12T03:03:49Z"
--- base_model: facebook/MobileLLM-1.5B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/facebook/MobileLLM-1.5B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MobileLLM-1.5B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q2_K.gguf) | Q2_K | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q6_K.gguf) | Q6_K | 1.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.Q8_0.gguf) | Q8_0 | 1.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/MobileLLM-1.5B-GGUF/resolve/main/MobileLLM-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-GGUF_4bit
Agnuxo
"2024-08-27T10:27:02Z"
16
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit", "base_model:quantized:Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-08-26T21:17:34Z"
--- base_model: Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf --- # Uploaded model - **Developed by:** Agnuxo - **License:** apache-2.0 - **Finetuned from model :** Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## How the MOE System Works This model is a core component of a larger Multi-Expert Question Answering System. Here's a breakdown of the system's functionality: 1. **Model Loading:** The system loads the "director" LLM and keeps other expert LLMs (e.g., for programming, biology, mathematics) ready for use. 2. **Expert Routing:** When a user asks a question, the system either: - Uses keyword matching to identify the relevant domain. - Consults the director LLM to classify the question's category. 3. **Dynamic Expert Loading:** The system loads the chosen expert LLM into memory, optimizing resource usage by releasing any previously active expert. 4. **Response Generation:** The selected expert LLM receives the question and generates a tailored answer. 5. **Chat Interface:** A user-friendly chat interface facilitates interaction with the MOE system. This MOE approach enhances efficiency and accuracy compared to relying on a single, general-purpose LLM. Repository and Additional Information Full Code: https://huggingface.co/Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit/resolve/main/MOE-LLMs3.py GitHub Repository: https://github.com/Agnuxo1/NEBULA ## Code Example The following code demonstrates the implementation of the Multi-Expert Question Answering System: ```python import os import torch from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline MODEL_CONFIG = { "director": { "name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit", "task": "text-generation", }, "programming": { "name": "Qwen/Qwen2-1.5B-Instruct", "task": "text-generation", }, "biology": { "name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant_16bit", "task": "text-generation", }, "mathematics": { "name": "Qwen/Qwen2-Math-1.5B-Instruct", "task": "text-generation", } } KEYWORDS = { "biology": ["cell", "DNA", "protein", "evolution", "genetics", "ecosystem", "organism", "metabolism", "photosynthesis", "microbiology", "célula", "ADN", "proteína", "evolución", "genética", "ecosistema", "organismo", "metabolismo", "fotosíntesis", "microbiología"], "mathematics": ["Math" "mathematics", "equation", "integral", "derivative", "function", "geometry", "algebra", "statistics", "probability", "ecuación", "integral", "derivada", "función", "geometría", "álgebra", "estadística", "probabilidad"], "programming": ["python", "java", "C++", "HTML", "scrip", "code", "Dataset", "API", "framework", "debugging", "algorithm", "compiler", "database", "CSS", "JSON", "XML", "encryption", "IDE", "repository", "Git", "version control", "front-end", "back-end", "API", "stack trace", "REST", "machine learning"] } class MOELLM: def __init__(self): self.current_expert = None self.current_model = None self.current_tokenizer = None self.device = "cuda" if torch.cuda.is_available() else "cpu" print(f"Using device: {self.device}") self.load_director_model() def load_director_model(self): """Loads the director model.""" print("Loading director model...") model_name = MODEL_CONFIG["director"]["name"] self.director_tokenizer = AutoTokenizer.from_pretrained(model_name) self.director_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(self.device) self.director_pipeline = pipeline( MODEL_CONFIG["director"]["task"], model=self.director_model, tokenizer=self.director_tokenizer, device=self.device ) print("Director model loaded.") def load_expert_model(self, expert): """Dynamically loads an expert model, releasing memory from the previous model.""" if expert not in MODEL_CONFIG: raise ValueError(f"Unknown expert: {expert}") if self.current_expert != expert: print(f"Loading expert model: {expert}...") # Free memory from the current model if it exists if self.current_model: del self.current_model del self.current_tokenizer torch.cuda.empty_cache() model_config = MODEL_CONFIG[expert] self.current_tokenizer = AutoTokenizer.from_pretrained(model_config["name"]) self.current_model = AutoModelForCausalLM.from_pretrained(model_config["name"], torch_dtype=torch.float16).to(self.device) self.current_expert = expert print(f"{expert.capitalize()} model loaded.") return pipeline( MODEL_CONFIG[expert]["task"], model=self.current_model, tokenizer=self.current_tokenizer, device=self.device ) def determine_expert_by_keywords(self, question): """Determines the expert based on keywords in the question.""" question_lower = question.lower() for expert, keywords in KEYWORDS.items(): if any(keyword in question_lower for keyword in keywords): return expert return None def determine_expert(self, question): """Determines which expert should answer the question.""" expert = self.determine_expert_by_keywords(question) if expert: print(f"Expert determined by keyword: {expert}") return expert prompt = f"Classify the following question into one of these categories: programming, biology, mathematics. Question: {question}\nCategory:" response = self.director_pipeline(prompt, max_length=100, num_return_sequences=1)[0]['generated_text'] expert = response.split(":")[-1].strip().lower() if expert not in MODEL_CONFIG: expert = "director" print(f"Redirecting question to: {expert}") return expert def generate_response(self, question, expert): """Generates a response using the appropriate model.""" try: model = self.load_expert_model(expert) prompt = f"Answer the following question as an expert in {expert}: {question}\nAnswer:" response = model(prompt, max_length=200, num_return_sequences=1)[0]['generated_text'] return response.split("Answer:")[-1].strip() except Exception as e: print(f"Error generating response: {str(e)}") return "Sorry, there was an error processing your request. Please try again." def chat_interface(self): """Simple chat interface.""" print("Welcome to the MOE-LLM chat. Type 'exit' to quit.") while True: question = input("\nYou: ") if question.lower() in ['exit', 'quit']: break try: expert = self.determine_expert(question) response = self.generate_response(question, expert) print(f"\n{expert.capitalize()}: {response}") except Exception as e: print(f"Error in chat: {str(e)}") print("Please try asking another question.") if __name__ == "__main__": moe_llm = MOELLM() moe_llm.chat_interface()
nadejdatarabukina/6c4d1ddb-9157-4ff9-8aaa-81af4247c16b
nadejdatarabukina
"2025-01-15T05:01:26Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "region:us" ]
null
"2025-01-15T03:03:25Z"
--- library_name: peft license: other base_model: deepseek-ai/deepseek-coder-6.7b-instruct tags: - axolotl - generated_from_trainer model-index: - name: 6c4d1ddb-9157-4ff9-8aaa-81af4247c16b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: deepseek-ai/deepseek-coder-6.7b-instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - cf42227dca647ec1_train_data.json ds_type: json format: custom path: /workspace/input_data/cf42227dca647ec1_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device: cuda early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: nadejdatarabukina/6c4d1ddb-9157-4ff9-8aaa-81af4247c16b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 75GiB max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/cf42227dca647ec1_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: true trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 106b5945-9545-4fb6-abeb-880146cca473 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 106b5945-9545-4fb6-abeb-880146cca473 warmup_steps: 10 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 6c4d1ddb-9157-4ff9-8aaa-81af4247c16b This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | nan | | 0.0 | 0.0002 | 7 | nan | | 0.0 | 0.0004 | 14 | nan | | 0.0 | 0.0006 | 21 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF
bartowski
"2024-06-24T09:47:10Z"
1,147
23
null
[ "gguf", "generated_from_trainer", "axolotl", "text-generation", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
"2024-06-24T07:03:50Z"
--- license: apache-2.0 base_model: 01-ai/Yi-1.5-34B-32k tags: - generated_from_trainer - axolotl datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of dolphin-2.9.3-Yi-1.5-34B-32k Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3197">b3197</a> for quantization. Original model: https://huggingface.co/cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <|im_start|> system {system_prompt}<|im_end|> <|im_start|> user {prompt}<|im_end|> <|im_start|> assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q8_1.gguf) | Q8_0_L | 37.40GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0.gguf) | Q8_0 | 36.54GB | Extremely high quality, generally unneeded but max available quant. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q6_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q6_K_L.gguf) | Q6_K_L | 29.29GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q6_K.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q6_K.gguf) | Q6_K | 28.21GB | Very high quality, near perfect, *recommended*. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_L.gguf) | Q5_K_L | 25.46GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_M.gguf) | Q5_K_M | 24.32GB | High quality, *recommended*. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_S.gguf) | Q5_K_S | 23.70GB | High quality, *recommended*. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_L.gguf) | Q4_K_L | 21.85GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_M.gguf) | Q4_K_M | 20.65GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_S.gguf) | Q4_K_S | 19.59GB | Slightly lower quality with more space savings, *recommended*. | | [dolphin-2.9.3-Yi-1.5-34B-32k-IQ4_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ4_XS.gguf) | IQ4_XS | 18.47GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_XL.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF//main/dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_XL.gguf) | Q3_K_XL | | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Lower quality but usable, good for low RAM availability. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_L.gguf) | Q3_K_L | 18.13GB | Lower quality but usable, good for low RAM availability. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_M.gguf) | Q3_K_M | 16.65GB | Even lower quality. | | [dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_M.gguf) | IQ3_M | 15.56GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_S.gguf) | Q3_K_S | 14.96GB | Low quality, not recommended. | | [dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_XS.gguf) | IQ3_XS | 14.23GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_XXS.gguf) | IQ3_XXS | 13.33GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [dolphin-2.9.3-Yi-1.5-34B-32k-Q2_K.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q2_K.gguf) | Q2_K | 12.82GB | Very low quality but surprisingly usable. | | [dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_M.gguf) | IQ2_M | 11.79GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_S.gguf) | IQ2_S | 10.89GB | Very low quality, uses SOTA techniques to be usable. | | [dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_XS.gguf) | IQ2_XS | 10.30GB | Very low quality, uses SOTA techniques to be usable. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF --include "dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF --include "dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0.gguf/*" --local-dir dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0 ``` You can either specify a new local-dir (dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0) or download them all in place (./) ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
sail-rvc/ViplajoRVC
sail-rvc
"2023-07-14T07:33:52Z"
1
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
"2023-07-14T07:33:37Z"
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # ViplajoRVC ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:33:52 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
jq/whisper-large-v2-salt-plus-xog-myx-kin-swa
jq
"2024-11-29T20:49:17Z"
9
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-11-28T16:48:59Z"
--- library_name: transformers tags: [] --- # Whisper large for Ugandan languages Available languages: | ISO 639-3 | Language | | --- | --- | | eng | English (Ugandan accent) | lug | Luganda | | ach | Acholi | | lgg | Lugbara | | teo | Ateso | | nyn | Runyankole | | xog | Lusoga | | myx | Lumasaba | | swa | Swahili | | kin | Kinyarwanda | The model is used in a similar way to the base Whisper model. The model will attempt to auto-detect the language and provide a transcription. However, note that language detection is not always accurate and results may be improved by specifying it instead. The languages in this model are not supported by the base Whisper model, so the format is slightly different: ```python !git clone https://github.com/jqug/salt.git import salt.constants import transformers import datasets import torch processor = transformers.WhisperProcessor.from_pretrained( "jq/whisper-large-v2-salt-plus-xog-myx-kin-swa") model = transformers.WhisperForConditionalGeneration.from_pretrained( "jq/whisper-large-v2-salt-plus-xog-myx-kin-swa") # Get some test audio ds = datasets.load_dataset('Sunbird/salt', 'multispeaker-lug', split='test') audio = ds[0]['audio'] sample_rate = ds[0]['sample_rate'] # Specify a language from: eng, lug, ach, teo, lgg, nyn, myx, xog, swa, kin. lang = 'lug' # Apply the model device = torch.device("cuda" if torch.cuda.is_available() else "cpu") input_features = processor( audio, sampling_rate=sample_rate, return_tensors="pt").input_features input_features = input_features.to(device) predicted_ids = model.to(device).generate( input_features, # Optionally set language=None here instead to auto-detect. language=processor.tokenizer.decode( salt.constants.SALT_LANGUAGE_TOKENS_WHISPER[lang]), forced_decoder_ids=None) transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) print(transcription) # Ekikoola kya kasooli kya kyenvu wabula langi yaakyo etera okuba eya kitaka wansi. ```
saharad/Texi-v3
saharad
"2023-07-25T10:07:07Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-07-25T10:07:05Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Texi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.38 +/- 2.79 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="saharad/Texi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
PrunaAI/resnext101_32x8d.tv2_in1k-turbo-green-smashed
PrunaAI
"2024-11-13T13:20:49Z"
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
"2024-03-14T09:40:27Z"
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir resnext101_32x8d.tv2_in1k-turbo-green-smashed huggingface-cli download PrunaAI/resnext101_32x8d.tv2_in1k-turbo-green-smashed --local-dir resnext101_32x8d.tv2_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "resnext101_32x8d.tv2_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "resnext101_32x8d.tv2_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model resnext101_32x8d.tv2_in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
datek/Qwen-Qwen1.5-0.5B-1718151017
datek
"2024-06-12T00:10:20Z"
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-06-12T00:10:18Z"
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Designathan/llama
Designathan
"2024-06-11T20:26:54Z"
4
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "unsloth", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
"2024-06-11T19:59:48Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stanford-oval/Llama-2-7b-WikiChat
stanford-oval
"2024-08-23T22:20:27Z"
1,446
25
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2305.14292", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-09T00:01:01Z"
--- license: llama2 language: - en --- This model is a fine-tuned LLaMA-2 (7B) model. Please accept the [LLaMA-2 license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) before downloading this model. This model works with WikiChat v1.0. Refer to the following for more information: GitHub repository: https://github.com/stanford-oval/WikiChat Paper: [WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia](https://aclanthology.org/2023.findings-emnlp.157/) <p align="center"> <img src="https://github.com/stanford-oval/WikiChat/blob/main/public/logo_light.png?raw=true" width="100px" alt="WikiChat Logo" /> <h1 align="center"> <b>WikiChat</b> <br> <a href="https://arxiv.org/abs/2305.14292"> <img src="https://img.shields.io/badge/cs.CL-2305.14292-b31b1b" alt="arXiv"> </a> <a href="https://github.com/stanford-oval/WikiChat/stargazers"> <img src="https://img.shields.io/github/stars/stanford-oval/WikiChat?style=social" alt="Github Stars"> </a> </h1> </p> <p align="center"> Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia </p> <p align="center"> Online demo: <a href="https://wikichat.genie.stanford.edu" target="_blank"> https://wikichat.genie.stanford.edu </a> <br> </p> <p align="center"> <img src="https://raw.githubusercontent.com/stanford-oval/WikiChat/ee25ff7d355c8fbb1321489e1e955be8ae068367/public/pipeline.svg" width="700px" alt="WikiChat Pipeline" /> </p>
Fredbeijixiong/q-FrozenLake-v1-4x4-noSlippery
Fredbeijixiong
"2024-04-09T05:23:32Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-04-08T05:26:12Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Fredbeijixiong/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Realgon/bert_sst2_padding0model
Realgon
"2023-11-28T16:53:43Z"
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-11-28T16:41:42Z"
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert_sst2_padding0model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_sst2_padding0model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7626 - Accuracy: 0.9160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 433 | 0.2204 | 0.9116 | | 0.3073 | 2.0 | 866 | 0.2837 | 0.9105 | | 0.1486 | 3.0 | 1299 | 0.3761 | 0.9165 | | 0.0614 | 4.0 | 1732 | 0.4230 | 0.9259 | | 0.0267 | 5.0 | 2165 | 0.7280 | 0.8962 | | 0.0123 | 6.0 | 2598 | 0.5046 | 0.9215 | | 0.0107 | 7.0 | 3031 | 0.6166 | 0.9182 | | 0.0107 | 8.0 | 3464 | 0.6515 | 0.9149 | | 0.0126 | 9.0 | 3897 | 0.5875 | 0.9182 | | 0.0092 | 10.0 | 4330 | 0.7368 | 0.9110 | | 0.009 | 11.0 | 4763 | 0.6714 | 0.9072 | | 0.0104 | 12.0 | 5196 | 0.5756 | 0.9193 | | 0.0076 | 13.0 | 5629 | 0.6955 | 0.9099 | | 0.0034 | 14.0 | 6062 | 0.6715 | 0.9182 | | 0.0034 | 15.0 | 6495 | 0.7194 | 0.9209 | | 0.0005 | 16.0 | 6928 | 0.7723 | 0.9154 | | 0.0031 | 17.0 | 7361 | 0.7843 | 0.9121 | | 0.0016 | 18.0 | 7794 | 0.7563 | 0.9176 | | 0.0 | 19.0 | 8227 | 0.7569 | 0.9198 | | 0.0016 | 20.0 | 8660 | 0.7626 | 0.9160 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
shrenikb/fullfedtest
shrenikb
"2023-11-20T08:04:24Z"
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:chavinlo/alpaca-native", "base_model:adapter:chavinlo/alpaca-native", "region:us" ]
null
"2023-11-20T08:04:23Z"
--- library_name: peft base_model: chavinlo/alpaca-native --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.2
dilovancelik/snowflake-arctic-embed-l-v2.0_qunatized
dilovancelik
"2025-02-07T13:25:16Z"
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
feature-extraction
"2025-02-07T13:24:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
corenet-community/coreml-OpenELM-1_1B
corenet-community
"2024-04-30T10:36:24Z"
2
1
null
[ "coreml", "license:other", "region:us" ]
null
"2024-04-30T09:48:18Z"
--- license: other license_name: apple-sample-code-license license_link: LICENSE --- # OpenELM – Core ML This repository contains a Core ML conversion of [apple/OpenELM-1_1B](https://hf.co/apple/OpenELM-1_1B) with the following characteristics: - Sequence length: 128, fixed. - Precision: float32. Please, check the [original model card](https://hf.co/apple/OpenELM-1_1B) for additional details on the model.
alperiox/smolvlm-instruct-trl-sft-latexocr
alperiox
"2025-04-01T03:23:22Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:HuggingFaceTB/SmolVLM-256M-Instruct", "base_model:finetune:HuggingFaceTB/SmolVLM-256M-Instruct", "endpoints_compatible", "region:us" ]
null
"2025-04-01T02:44:55Z"
--- base_model: HuggingFaceTB/SmolVLM-256M-Instruct library_name: transformers model_name: smolvlm-instruct-trl-sft-latexocr tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for smolvlm-instruct-trl-sft-latexocr This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="alperiox/smolvlm-instruct-trl-sft-latexocr", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.16.0 - Transformers: 4.50.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/Zion_Alpha-i1-GGUF
mradermacher
"2025-03-10T06:07:31Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:SicariusSicariiStuff/Zion_Alpha", "base_model:quantized:SicariusSicariiStuff/Zion_Alpha", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix" ]
null
"2025-03-10T05:01:59Z"
--- base_model: SicariusSicariiStuff/Zion_Alpha language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/SicariusSicariiStuff/Zion_Alpha <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Zion_Alpha-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha-i1-GGUF/resolve/main/Zion_Alpha.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
trieudemo11/llama_7b_attrb_cate_4m_19
trieudemo11
"2023-09-27T00:02:04Z"
1
0
peft
[ "peft", "region:us" ]
null
"2023-09-27T00:01:47Z"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0 - PEFT 0.6.0.dev0
AhmedTaha012/finetuned-ner-hadith-fistsubisa1
AhmedTaha012
"2023-02-18T11:40:29Z"
4
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-02-18T03:54:03Z"
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: finetuned-ner-hadith-fistsubisa1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-ner-hadith-fistsubisa1 This model is a fine-tuned version of [hatmimoha/arabic-ner](https://huggingface.co/hatmimoha/arabic-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4707 - Precision: 0.3641 - Recall: 0.7 - F1: 0.4790 - Accuracy: 0.8111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 3 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 312 | 0.4445 | 0.3683 | 0.6555 | 0.4716 | 0.8105 | | 0.5169 | 2.0 | 624 | 0.4690 | 0.3528 | 0.6956 | 0.4682 | 0.8076 | | 0.5169 | 3.0 | 936 | 0.4314 | 0.3405 | 0.6781 | 0.4534 | 0.8099 | | 0.3852 | 4.0 | 1248 | 0.4360 | 0.3509 | 0.6774 | 0.4623 | 0.8132 | | 0.3269 | 5.0 | 1560 | 0.4707 | 0.3641 | 0.7 | 0.4790 | 0.8111 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.1.0 - Tokenizers 0.12.1
lokaspire/mistralai-Finetune-bf16
lokaspire
"2024-02-21T06:10:58Z"
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:Mihir1108/json_data", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-20T06:18:56Z"
--- library_name: transformers datasets: - Mihir1108/json_data --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ZzRanger/bert-base-uncased-finetuned-swag
ZzRanger
"2023-12-01T00:36:21Z"
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
"2023-11-30T23:48:14Z"
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0576 - Accuracy: 0.9800 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 110 | 0.0911 | 0.9733 | | No log | 2.0 | 220 | 0.0552 | 0.9840 | | No log | 3.0 | 330 | 0.0576 | 0.9800 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
djovak/embedic-small
djovak
"2024-09-09T11:34:40Z"
41
2
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "mteb", "feature-extraction", "sentence-similarity", "multilingual", "en", "sr", "base_model:intfloat/multilingual-e5-small", "base_model:finetune:intfloat/multilingual-e5-small", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-07-06T17:47:10Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - mteb - sentence-transformers - feature-extraction - sentence-similarity license: mit language: - multilingual - en - sr base_model: intfloat/multilingual-e5-small --- # djovak/embedic-small Say hello to **Embedić**, a group of new text embedding models finetuned for the Serbian language! These models are particularly useful in Information Retrieval and RAG purposes. Check out images showcasing benchmark performance, you can beat previous SOTA with 5x fewer parameters! Although specialized for Serbian(Cyrillic and Latin scripts), Embedić is Cross-lingual(it understands English too). So you can embed English docs, Serbian docs, or a combination of the two :) This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["ko je Nikola Tesla?", "Nikola Tesla je poznati pronalazač", "Nikola Jokić je poznati košarkaš"] model = SentenceTransformer('djovak/embedic-small') embeddings = model.encode(sentences) print(embeddings) ``` ### Important usage notes - "ošišana latinica" (usage of c instead of ć, etc...) significantly deacreases search quality - The usage of uppercase letters for named entities can significantly improve search quality ## Training - Embedić models are fine-tuned from multilingual-e5 models and they come in 3 sizes (small, base, large). - Training is done on a single 4070ti super GPU - 3-step training: distillation, training on (query, text) pairs and finally fine-tuning with triplets. ## Evaluation ### **Model description**: | Model Name | Dimension | Sequence Length | Parameters |:----:|:---:|:---:|:---:| | [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 512 | 117M | [djovak/embedic-small](https://huggingface.co/djovak/embedic-small) | 384 | 512 | 117M ||||||||| | [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 512 | 278M | [djovak/embedic-base](https://huggingface.co/djovak/embedic-base) | 768 | 512 | 278M ||||||||| | [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 512 | 560M | [djovak/embedic-large](https://huggingface.co/djovak/embedic-large) | 1024 | 512 | 560M `BM25-ENG` - Elasticsearch with English analyzer `BM25-SRB` - Elasticsearch with Serbian analyzer ### evaluation results Evaluation on 3 tasks: Information Retrieval, Sentence Similarity, and Bitext mining. I personally translated the STS17 cross-lingual evaluation dataset and Spent 6,000$ on Google translate API, translating 4 IR evaluation datasets into Serbian language. Evaluation datasets will be published as Part of [MTEB benchmark](https://huggingface.co/spaces/mteb/leaderboard) in the near future. ![information retrieval results](image-2.png) ![sentence similarity results](image-1.png) ## Contact If you have any question or sugestion related to this project, you can open an issue or pull request. You can also email me at [email protected] ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## License Embedić models are licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
Sophie-Rain-Spiderman-Scandal-Leaks/Sophie.Rain.Spider-Man.Video.Tutorial.Official
Sophie-Rain-Spiderman-Scandal-Leaks
"2025-03-05T18:16:45Z"
0
0
null
[ "region:us" ]
null
"2025-03-05T18:16:35Z"
<!-- HTML_TAG_END --><div> <p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p> <p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p> <p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p> <!-- HTML_TAG_END --></div>
hubii-world/rpeaks-to-hrv-pipeline
hubii-world
"2025-04-08T07:41:24Z"
42
0
transformers
[ "transformers", "safetensors", "hubii-dummy", "feature-extraction", "custom_code", "region:us" ]
feature-extraction
"2025-03-17T15:12:40Z"
<!DOCTYPE html> <html class="" lang="en"> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no" /> <meta name="description" content="We're on a journey to advance and democratize artificial intelligence through open source and open science." /> <meta property="fb:app_id" content="1321688464574422" /> <meta name="twitter:card" content="summary_large_image" /> <meta name="twitter:site" content="@huggingface" /> <meta property="og:title" content="Hugging Face - The AI community building the future." /> <meta property="og:type" content="website" /> <title>Hugging Face - The AI community building the future.</title> <style> body { margin: 0; } main { background-color: white; min-height: 100vh; padding: 7rem 1rem 8rem 1rem; text-align: center; font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol, Noto Color Emoji; } img { width: 6rem; height: 6rem; margin: 0 auto 1rem; } h1 { font-size: 3.75rem; line-height: 1; color: rgba(31, 41, 55, 1); font-weight: 700; box-sizing: border-box; margin: 0 auto; } p, a { color: rgba(107, 114, 128, 1); font-size: 1.125rem; line-height: 1.75rem; max-width: 28rem; box-sizing: border-box; margin: 0 auto; } .dark main { background-color: rgb(11, 15, 25); } .dark h1 { color: rgb(209, 213, 219); } .dark p, .dark a { color: rgb(156, 163, 175); } </style> <script> // On page load or when changing themes, best to add inline in `head` to avoid FOUC const key = "_tb_global_settings"; let theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "dark" : "light"; try { const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme; if (storageTheme) { theme = storageTheme === "dark" ? "dark" : "light"; } } catch (e) {} if (theme === "dark") { document.documentElement.classList.add("dark"); } else { document.documentElement.classList.remove("dark"); } </script> </head> <body> <main> <img src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg" alt="" /> <div> <h1>429</h1> <p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p> </div> </main> </body> </html>
Sapna-Shah-Video-Updates/Sapna.Shah.Leaked.Video.Tutorial
Sapna-Shah-Video-Updates
"2025-03-20T22:30:04Z"
0
0
null
[ "region:us" ]
null
"2025-03-20T22:28:22Z"
15 seconds ago <p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Video/?v="><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p> <a href="https://tv2online.com/Video/?v=" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a></p> <a href="https://tv2online.com/Video/?v=" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a></p> 07 seconds ago - New Sapna Shah kumari Original Viral video L.ea𝚔ed Original Video Viral On Social Media the internet by storm and amazed viewers on various social media platforms. Sapna Choudhary Welcomes Second Baby Boy, Son Named ‘Shahveer’ in Grand Ceremony author-profile
cheenmaya/gt8
cheenmaya
"2024-07-14T09:16:16Z"
21
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-07-14T09:15:36Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/KunoichiLake-2x7b-GGUF
mradermacher
"2024-11-05T13:38:10Z"
7
1
transformers
[ "transformers", "gguf", "en", "base_model:macadeliccc/KunoichiLake-2x7b", "base_model:quantized:macadeliccc/KunoichiLake-2x7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-11-04T16:21:02Z"
--- base_model: macadeliccc/KunoichiLake-2x7b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/macadeliccc/KunoichiLake-2x7b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/KunoichiLake-2x7b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.4 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q5_K_S.gguf) | Q5_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q5_K_M.gguf) | Q5_K_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q6_K.gguf) | Q6_K | 10.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/KunoichiLake-2x7b-GGUF/resolve/main/KunoichiLake-2x7b.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
CMU-AIR2/math-phi-1-5-FULL-Arithmetic-4k-MWP-4k
CMU-AIR2
"2024-05-24T23:59:45Z"
150
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-24T23:56:54Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]