Dataset schema:

| Column | Type | Range / Cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-09 12:33:01 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 550 values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-09 12:32:40 |
| card | string | length 11 to 1.01M |
aroot/eng-ind-sent_delfy
aroot
2023-07-31T13:33:28Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-31T13:02:39Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-ind-sent_delfy
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eng-ind-sent_delfy

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.8282
- Bleu: 20.3139
- Chrf: 50.1488

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
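The card omits a usage example. A minimal inference sketch follows; it assumes the fine-tune keeps the mBART-50 tokenizer and language codes, and that `en_XX`/`id_ID` are the right source/target codes for English to Indonesian (assumptions, not stated in the card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: the fine-tune keeps the mBART-50 tokenizer and language codes.
tokenizer = AutoTokenizer.from_pretrained("aroot/eng-ind-sent_delfy", src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained("aroot/eng-ind-sent_delfy")

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
# Force the decoder to start with the (assumed) Indonesian language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["id_ID"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```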
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e1_s6789_v3_l54_v5
KingKazma
2023-07-31T13:31:19Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T13:31:16Z
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.5.0.dev0
OpenBuddy/openbuddy-30b-ggml
OpenBuddy
2023-07-31T13:27:23Z
0
4
transformers
[ "transformers", "text-generation", "zh", "en", "region:us" ]
text-generation
2023-05-25T17:20:11Z
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
library_name: transformers
---

# Recommended variations

We recommend **openbuddy-llama-30b-v7.1-q3_K.bin.enc** for general tasks.

## Disclaimer

OpenBuddy is provided as-is without any warranty of any kind, either express or implied. The authors and contributors shall not be held liable for any damages resulting from the use or inability to use this software. By using OpenBuddy, you agree to these terms and conditions.

## License Restrictions

OpenBuddy is intended for non-commercial research purposes only, following the same restrictions as the LLAMA model. Any use outside of this scope is strictly prohibited. For more information, please refer to the LLAMA license.
szymonrucinski/krakowiak-7b
szymonrucinski
2023-07-31T13:07:22Z
9
8
peft
[ "peft", "region:us" ]
null
2023-07-20T00:12:49Z
---
library_name: peft
---

<a target="_blank" href="https://colab.research.google.com/github/szymonrucinski/finetune-llm/blob/main/pollama_Xb_inference.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

## Introduction

Krakowiak-7B is a finetuned version of Meta's [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf). It was trained on a modified and updated version of the dataset originally created by [Chris Ociepa](https://huggingface.co/datasets/szymonindy/ociepa-raw-self-generated-instructions-pl), containing ~50K instructions, making it one of the best and biggest Polish LLMs available. The name [krakowiak](https://www.youtube.com/watch?v=OeQ6jYzt6cM) refers to one of the most popular and characteristic Polish folk dances, with its very lively, even wild, tempo and long, easy strides, demonstrating spirited abandon and elegance at the same time.

## How to test it?

The model can be run using the Hugging Face library or in the browser using this [Google Colab](https://colab.research.google.com/drive/1IM7j57g9ZHj-Pw2EXGyacNuKHjvK3pIc?usp=sharing).

## Training procedure

The model was trained for 3 epochs; feel free [to read the report](https://api.wandb.ai/links/szymonindy/tkr343ad).

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.4.0
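The card never shows how to attach the adapter. A minimal loading sketch, assuming the base checkpoint is the linked meta-llama/Llama-2-7b-chat-hf and mirroring the 4-bit settings listed above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes settings listed in the card (nf4, bfloat16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumption: the adapter targets the chat variant linked in the card.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Attach the Krakowiak PEFT adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base, "szymonrucinski/krakowiak-7b")
```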
taehoon1lee/Taxi-v3
taehoon1lee
2023-07-31T13:00:11Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T13:00:09Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.50 +/- 2.61
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="taehoon1lee/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
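Beyond constructing the environment, the card does not show how to act with the loaded policy. A greedy rollout sketch, assuming the pickled dict stores the Q-table under a `"qtable"` key as in the Deep RL course notebooks (an assumption) and the classic gym 4-tuple step API:

```python
import numpy as np

# Continues from the Usage block above; `model` and `env` are already defined.
state = env.reset()
done = False
total_reward = 0
while not done:
    action = np.argmax(model["qtable"][state])  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print("Episode return:", total_reward)
```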
Ashish9947/llama-2-COT-finetuned
Ashish9947
2023-07-31T12:48:28Z
0
1
peft
[ "peft", "region:us" ]
null
2023-07-27T11:47:39Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

### Framework versions

- PEFT 0.5.0.dev0
Sachinb4usueben/my-pet-dog
Sachinb4usueben
2023-07-31T12:40:56Z
0
0
null
[ "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-31T12:40:09Z
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---

### My-Pet-Dog Dreambooth model trained by Sachinb4usueben following the "Build your own Gen AI model" session by NxtWave.

Project Submission Code: HICET284

Sample pictures of this concept:

![0](https://huggingface.co/Sachinb4usueben/my-pet-dog/resolve/main/sample_images/00.jpg)
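No usage code is given. A minimal sketch, assuming the repo contains a full diffusers-format pipeline (typical for these webinar DreamBooth submissions) and guessing an instance prompt, which the card does not state:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the repo holds a complete diffusers pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "Sachinb4usueben/my-pet-dog", torch_dtype=torch.float16
).to("cuda")

# The exact instance token is undocumented; "my-pet-dog" is a guess.
image = pipe("a photo of my-pet-dog playing in a park").images[0]
image.save("my_pet_dog.png")
```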
najonajo/lora-trained-xl
najonajo
2023-07-31T12:34:12Z
2
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-07-31T10:54:58Z
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - najonajo/lora-trained-xl

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

LoRA for the text encoder was enabled: False.

Special VAE used for training: None.
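The card lists the instance prompt but no loading code. A minimal sketch, assuming a diffusers version with SDXL LoRA support (`load_lora_weights` on the base pipeline):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA on top of the SDXL base model.
pipe.load_lora_weights("najonajo/lora-trained-xl")

# The instance prompt from the card's metadata.
image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
```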
ailabturkiye/damlaozudurubabacan
ailabturkiye
2023-07-31T12:10:56Z
0
0
null
[ "region:us" ]
null
2023-07-31T12:05:46Z
[![Discord Sunucumuz](https://img.shields.io/badge/Discord.gg%2F-AiLab-ailab )](discord.gg/ailab) ![Static Badge](https://img.shields.io/badge/AI%20LAB%20Hugging%20Face%20Organization-sa?style=plastic&labelColor=blue&color=blue) ![Static Badge](https://img.shields.io/badge/Yap%C4%B1mc%C4%B1%20Bilgisi%20Verilmeden%20Payla%C5%9F%C4%B1lmas%C4%B1%20Yasakt%C4%B1r!-s?style=plastic&labelColor=orange&color=red)

# Damla Özüduru Babacan (Voice Artist) - RVC V2, 250 Epochs

**This is the voice model of Damla Özüduru Babacan.**

**Trained as RVC V2 | 2-minute dataset | 250 epochs.**

_The dataset and training were done by me._

__Sharing this model without permission outside the [Ai Lab Discord](discord.gg/ailab) server is strictly forbidden; the model is under the openrail license.__

## Credits

**When sharing a cover made with this model on any platform, you are kindly asked to give credit.**

- Discord: hydragee
- YouTube: CoverLai (https://www.youtube.com/@coverlai)
rdmpage/autotrain-bwpages-78824141089
rdmpage
2023-07-31T12:10:18Z
183
0
transformers
[ "transformers", "pytorch", "safetensors", "swin", "image-classification", "autotrain", "vision", "dataset:rdmpage/autotrain-data-bwpages", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-31T12:09:14Z
---
tags:
- autotrain
- vision
- image-classification
datasets:
- rdmpage/autotrain-data-bwpages
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
co2_eq_emissions:
  emissions: 0.4838235493718847
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 78824141089
- CO2 Emissions (in grams): 0.4838

## Validation Metrics

- Loss: 0.136
- Accuracy: 0.925
- Macro F1: 0.930
- Micro F1: 0.925
- Weighted F1: 0.924
- Macro Precision: 0.948
- Micro Precision: 0.925
- Weighted Precision: 0.928
- Macro Recall: 0.918
- Micro Recall: 0.925
- Weighted Recall: 0.925
Yntec/OrangeRemix
Yntec
2023-07-31T12:09:14Z
481
5
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "WarriorMama777", "NovelAI", "AnythingV3.0", "hesw23168", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-31T09:45:25Z
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- WarriorMama777
- NovelAI
- AnythingV3.0
- hesw23168
---

# Orange Remix

I did a blind test of all the models at https://huggingface.co/WarriorMama777/OrangeMixs/tree/main/Models and found that the most adorable one was BloodOrangeMix, the best picture composer was ElyOrangeMix, and the most coherent model was EerieOrangeMix_night. I then made a 50/50 blend of the first two, creating OrangeRemixBeta, the most adorable and creative model but bad at coherence, and finally I added 15% of EerieOrangeMix_night to that, creating my favorite Orange model!

It was a surprise that none of the Abyss models made it into the blend, but here's the full recipe to recreate Orange Remix (a sketch of the merge arithmetic follows after the recipe):

**STEP 1:** Add Difference @ 0.3
- Primary Model: EerieOrangeMix_base
- Secondary Model: NovelAI animefull
- Tertiary Model: NovelAI sfw
- Merge Name: EerieOrangeMix_Night

**STEP 2:** Add Difference @ 0.3
- Primary Model: Elysium_Anime_V2
- Secondary Model: NovelAI animefull
- Tertiary Model: NovelAI sfw
- Merge Name: tempmix

**STEP 3:** Add Difference @ 1.0
- Primary Model: tempmix
- Secondary Model: Gape60
- Tertiary Model: NovelAI animefull
- Merge Name: ElyOrangeMix

**STEP 4:** Add Difference @ 0.3
- Primary Model: AnythingV3.0
- Secondary Model: NovelAI animefull
- Tertiary Model: NovelAI sfw
- Merge Name: tempmix-2

**STEP 5:** Add Difference @ 1.0
- Primary Model: tempmix-2
- Secondary Model: Gape60
- Tertiary Model: NovelAI animefull
- Merge Name: BloodOrangeMix

**STEP 6:** Weighted Sum @ 0.5
- Primary Model: BloodOrangeMix
- Secondary Model: ElyOrangeMix
- Merge Name: OrangeRemixBeta

**STEP 7:** Weighted Sum @ 0.85
- Primary Model: EerieOrangeMix_Night
- Secondary Model: OrangeRemixBeta
- Merge Name: OrangeRemix

**STEP 8:** No Interpolation
- Primary Model: OrangeRemix
- Bake in VAE: color101VAE_v1.pt
- Merge Name: OrangeRemixCVAE

Or something like that, I didn't keep notes!
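The recipe uses the two merge modes found in common checkpoint-merger tools. A minimal sketch of what each computes per tensor, assuming aligned state dicts and the usual A1111 semantics (the helper names and file paths are placeholders):

```python
import torch

def weighted_sum(a, b, w):
    # Weighted Sum @ w: out = (1 - w) * A + w * B, tensor by tensor.
    return {k: (1.0 - w) * a[k] + w * b[k] for k in a}

def add_difference(a, b, c, m):
    # Add Difference @ m: out = A + m * (B - C), tensor by tensor.
    return {k: a[k] + m * (b[k] - c[k]) for k in a}

# Example (STEP 6): a 50/50 blend of BloodOrangeMix and ElyOrangeMix.
# blood = torch.load("BloodOrangeMix.ckpt")["state_dict"]
# ely = torch.load("ElyOrangeMix.ckpt")["state_dict"]
# beta = weighted_sum(blood, ely, 0.5)
```

Note that with this convention, STEP 7 at 0.85 keeps only 15% of the primary EerieOrangeMix_Night, matching the "added 15%" description above.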
segmind/tiny_lora_mxtun3_style
segmind
2023-07-31T12:05:29Z
8
2
diffusers
[ "diffusers", "bksdm", "bksdm-ttiny", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:recastai/LAION-art-EN-improved-captions", "arxiv:2305.15798", "base_model:SG161222/Realistic_Vision_V4.0_noVAE", "base_model:finetune:SG161222/Realistic_Vision_V4.0_noVAE", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-31T11:41:37Z
---
license: creativeml-openrail-m
base_model: SG161222/Realistic_Vision_V4.0
datasets:
- recastai/LAION-art-EN-improved-captions
tags:
- bksdm
- bksdm-ttiny
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

# Text-to-image Distillation

This pipeline was distilled from **SG161222/Realistic_Vision_V4.0** on a subset of the **recastai/LAION-art-EN-improved-captions** dataset. Below are some example images generated with the finetuned pipeline using the prompt 'Portrait of a pretty girl':

![val_imgs_grid](./val_imgs_grid.png)

This pipeline is based upon [the paper](https://arxiv.org/pdf/2305.15798.pdf). Training code can be found [here](https://github.com/segmind/BKSDM).

## Pipeline usage

You can use the pipeline like so:

```python
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("segmind/tiny-sd", torch_dtype=torch.float16)
pipeline.to("cuda")  # fp16 weights generally need a GPU for inference

# Load LoRA finetune
pipeline.load_lora_weights("segmind/tiny_lora_mxtun3_style", weight_name="sd15_tiny_mxtun3_style_lora.safetensors")

prompt = "Portrait of a pretty girl"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```

## Training info

These are the key hyperparameters used during training:

* Steps: 125000
* Learning rate: 1e-4
* Batch size: 32
* Gradient accumulation steps: 4
* Image resolution: 512
* Mixed-precision: fp16
PratikSahu/q-FrozenLake-v1-4x4-noSlippery
PratikSahu
2023-07-31T12:02:07Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T12:02:04Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="PratikSahu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Ronit28G/LLAMA2
Ronit28G
2023-07-31T12:01:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T11:55:10Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.5.0.dev0
suhoRang/textual_inversion_cat
suhoRang
2023-07-31T11:59:39Z
3
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-31T10:25:31Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---

# Textual inversion text2image fine-tuning - suhoRang/textual_inversion_cat

These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
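The card does not name the learned placeholder token. A minimal sketch using diffusers' `load_textual_inversion`, guessing the `<cat-toy>` token from the diffusers textual-inversion example defaults (an assumption):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo.
pipe.load_textual_inversion("suhoRang/textual_inversion_cat")

# "<cat-toy>" is a guess at the placeholder token; check the repo's
# learned_embeds file for the actual token name.
image = pipe("a photo of <cat-toy> on a beach").images[0]
image.save("cat.png")
```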
Kamelowy/Nous-Hermes-Llama2-13b-Kimiko-GPTQ
Kamelowy
2023-07-31T11:54:37Z
7
2
transformers
[ "transformers", "llama", "text-generation", "llama-2", "en", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-07-31T11:41:01Z
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: other
---

# Model Card: Nous-Hermes-Llama-2-13b-Kimiko-GPTQ

This is a GPTQ 4-bit quantization of [Nous-Hermes-Llama2-13b-Kimiko-Lora-Merged](https://huggingface.co/Doctor-Shotgun/Nous-Hermes-Llama2-13b-Kimiko-Lora-Merged).

## Usage:

Intended to be prompted either with the Alpaca instruction format of the NH-L2 base model:

```
### Instruction:
<prompt>

### Response:
<leave a newline blank for model to respond>
```

Or the Kimiko lora instruction format:

```
<<SYSTEM>>
<character card and system prompt>

<<USER>>
<prompt>

<<AIBOT>>
<leave a newline blank for model to respond>
```

## Bias, Risks, and Limitations

The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model. It is not intended for supplying factual information or advice in any form.

## Training Details

This model is a merge. Please refer to the linked repositories of the base model and lora for details.
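The two prompt formats above are plain text templates. A small helper sketch that assembles them (hypothetical helper names, not part of the repo):

```python
def alpaca_prompt(instruction: str) -> str:
    # Assemble the Alpaca-style prompt format described in the card.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def kimiko_prompt(system: str, user: str) -> str:
    # Assemble the Kimiko LoRA prompt format described in the card.
    return f"<<SYSTEM>>\n{system}\n\n<<USER>>\n{user}\n\n<<AIBOT>>\n"

print(alpaca_prompt("Summarize the plot of Hamlet in two sentences."))
```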
Sathyapriyaa/chainprompting-ner-finetuned
Sathyapriyaa
2023-07-31T11:45:56Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T11:45:54Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.5.0.dev0
mrmrob003/ppo-Pyramids
mrmrob003
2023-07-31T11:40:32Z
14
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-31T11:40:25Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mrmrob003/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
marko-vasic/distilbert-base-uncased-finetuned-emotion
marko-vasic
2023-07-31T11:25:32Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-31T11:08:16Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.924
    - name: F1
      type: f1
      value: 0.924042449190291
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2163
- Accuracy: 0.924
- F1: 0.9240

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8335 | 1.0 | 250 | 0.3082 | 0.903 | 0.8997 |
| 0.2503 | 2.0 | 500 | 0.2163 | 0.924 | 0.9240 |

### Framework versions

- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
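No usage snippet is included in the card. A minimal inference sketch using the standard `transformers` pipeline API, with the checkpoint id taken from this record:

```python
from transformers import pipeline

# Minimal inference sketch for the fine-tuned emotion classifier.
classifier = pipeline(
    "text-classification",
    model="marko-vasic/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results of this experiment!"))
```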
sashakttripathi/transformers-question-answer
sashakttripathi
2023-07-31T11:21:55Z
63
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-cased", "base_model:finetune:distilbert/distilbert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-31T11:16:35Z
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: transformers-question-answer
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# transformers-question-answer

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.4951
- Validation Loss: 1.1651
- Epoch: 0

## Model description

This is a sample transformer trained for a question-answering use case. I have used a pre-trained BERT model and then finetuned it using the hugging-face transformer library.

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.4951 | 1.1651 | 0 |

### Framework versions

- Transformers 4.31.0
- TensorFlow 2.11.0
- Datasets 2.14.2
- Tokenizers 0.13.3
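The card omits an inference example. A minimal sketch using the `transformers` pipeline; `framework="tf"` is passed because the record lists TensorFlow weights only (the question/context strings are illustrative):

```python
from transformers import pipeline

# Request the TensorFlow backend, since the repo ships TF weights.
qa = pipeline(
    "question-answering",
    model="sashakttripathi/transformers-question-answer",
    framework="tf",
)
result = qa(
    question="What library was used for fine-tuning?",
    context="The model was fine-tuned with the Hugging Face Transformers library.",
)
print(result["answer"], result["score"])
```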
nokotin/Huggy
nokotin
2023-07-31T11:13:50Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-31T11:13:44Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nokotin/Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
mok33/q-taxi-v1-500x6
mok33
2023-07-31T11:12:52Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T11:12:50Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v1-500x6
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="mok33/q-taxi-v1-500x6", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
TencentARC/t2iadapter_seg_sd14v1
TencentARC
2023-07-31T11:12:44Z
13
0
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
2023-07-14T19:01:31Z
---
license: apache-2.0
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- t2i-adapter
- controlnet
- stable-diffusion
- image-to-image
---

# T2I Adapter - Segment

T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint.

This checkpoint provides conditioning on semantic segmentation for the stable diffusion 1.4 checkpoint.

## Model Details

- **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** Apache 2.0
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453).
- **Cite as:**

```
@misc{
  title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
  author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie},
  year={2023},
  eprint={2302.08453},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

### Checkpoints

| Model Name | Control Image Overview | Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | An image with an 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>|
|[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>|
|[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>|
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>|
|[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a>|
|[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)||
|[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)||
|[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)||
|[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)||

## Example

1. Dependencies

```sh
pip install diffusers transformers
```

2. Run code:
```python
import torch
import numpy as np
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline

# ADE20K color palette used to render the segmentation map.
ada_palette = np.asarray([
    [0, 0, 0], [120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230],
    [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], [204, 255, 4], [255, 51, 7],
    [204, 70, 3], [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92],
    [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], [102, 8, 255], [255, 61, 6],
    [255, 194, 7], [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140],
    [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], [0, 235, 255], [0, 173, 255],
    [31, 0, 255], [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], [255, 102, 0],
    [194, 255, 0], [0, 143, 255], [51, 255, 0], [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], [173, 255, 0], [0, 255, 153], [255, 92, 0],
    [255, 0, 255], [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], [255, 0, 204],
    [0, 255, 194], [0, 255, 82], [0, 10, 255], [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], [0, 255, 163], [255, 153, 0], [0, 255, 10],
    [255, 112, 0], [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], [255, 0, 31],
    [0, 184, 255], [0, 214, 255], [255, 0, 112], [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], [163, 0, 255], [153, 0, 255], [71, 255, 0],
    [255, 0, 163], [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], [10, 190, 212],
    [214, 255, 0], [0, 204, 255], [20, 0, 255], [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], [41, 0, 255], [41, 255, 0], [173, 0, 255],
    [0, 245, 255], [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], [102, 255, 0],
    [92, 0, 255],
])

image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
image_segmentor = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")

image = Image.open('./images/seg_input.jpeg')
pixel_values = image_processor(image, return_tensors="pt").pixel_values

with torch.no_grad():
    outputs = image_segmentor(pixel_values)
seg = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]

# Render the class-id map as a color image using the palette.
color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8)  # height, width, 3
for label, color in enumerate(ada_palette):
    color_seg[seg == label, :] = color
color_seg = color_seg.astype(np.uint8)

control_image = Image.fromarray(color_seg)
control_image.save("./images/segment_image.png")

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_seg_sd14v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    adapter=adapter,
    safety_checker=None,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to('cuda')

generator = torch.Generator().manual_seed(0)

seg_image_out = pipe(prompt="motorcycles driving", image=control_image, generator=generator).images[0]
seg_image_out.save('./images/seg_image_out.png')
```

![seg_input](./images/seg_input.jpeg)
![segment_image](./images/segment_image.png)
![seg_image_out](./images/seg_image_out.png)
TencentARC/t2iadapter_openpose_sd14v1
TencentARC
2023-07-31T11:12:29Z
536
1
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
2023-07-14T19:01:22Z
---
license: apache-2.0
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- t2i-adapter
- controlnet
- stable-diffusion
- image-to-image
---

# T2I Adapter - Openpose

T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint.

This checkpoint provides conditioning on openpose for the stable diffusion 1.4 checkpoint.

## Model Details

- **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** Apache 2.0
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453).
- **Cite as:**

```
@misc{
  title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
  author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie},
  year={2023},
  eprint={2302.08453},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

### Checkpoints

| Model Name | Control Image Overview | Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | An image with an 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>|
|[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>|
|[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>|
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>|
|[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a>|
|[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)||
|[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)||
|[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)||
|[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)||

## Example

1. Dependencies

```sh
pip install diffusers transformers controlnet_aux
```

2. Run code:
```python
from PIL import Image
import torch
from controlnet_aux import OpenposeDetector
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline

# Extract the OpenPose bone image from the input photo.
openpose = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')

image = Image.open('./images/openpose_input.png')
image = openpose(image)
image.save('./images/openpose.png')

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_openpose_sd14v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    adapter=adapter,
    safety_checker=None,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to('cuda')

generator = torch.Generator().manual_seed(1)

openpose_out = pipe(prompt="iron man flying", image=image, generator=generator).images[0]
openpose_out.save('./images/openpose_out.png')
```

![openpose_input](./images/openpose_input.png)
![openpose](./images/openpose.png)
![openpose_out](./images/openpose_out.png)
Jimmevdl/llama2-qlora-finetunined-french
Jimmevdl
2023-07-31T11:11:14Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-31T11:10:57Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.5.0.dev0
heyygarvit/contentmodel
heyygarvit
2023-07-31T11:10:58Z
0
0
peft
[ "peft", "license:openrail", "region:us" ]
null
2023-07-29T12:47:12Z
---
license: openrail
library_name: peft
---
TencentARC/t2iadapter_depth_sd15v2
TencentARC
2023-07-31T11:10:44Z
2,355
2
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
2023-07-14T19:01:05Z
---
license: apache-2.0
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- t2i-adapter
- controlnet
- stable-diffusion
- image-to-image
---

# T2I Adapter - Depth

T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint.

This checkpoint provides conditioning on depth for the stable diffusion 1.5 checkpoint.

## Model Details

- **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** Apache 2.0
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453).
- **Cite as:**

```
@misc{
  title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
  author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie},
  year={2023},
  eprint={2302.08453},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

### Checkpoints

| Model Name | Control Image Overview | Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | An image with an 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>|
|[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>|
|[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>|
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>|
|[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a>|
|[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)||
|[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)||
|[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)||
|[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)||

## Example

1. Dependencies

```sh
pip install diffusers transformers controlnet_aux
```

2. Run code:
```python
from controlnet_aux import MidasDetector
from PIL import Image
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline
import torch

# Estimate a depth map from the input image with MiDaS.
midas = MidasDetector.from_pretrained("lllyasviel/Annotators")

image = Image.open('./images/depth_input.png')
image = midas(image)
image.save('./images/depth.png')

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    adapter=adapter,
    safety_checker=None,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to('cuda')

generator = torch.Generator().manual_seed(1)

depth_out = pipe(prompt="storm trooper giving a speech", image=image, generator=generator).images[0]
depth_out.save('./images/depth_output.png')
```

![depth_input](./images/depth_input.png)
![depth](./images/depth.png)
![depth_output](./images/depth_output.png)
TencentARC/t2iadapter_depth_sd14v1
TencentARC
2023-07-31T11:10:31Z
18
0
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
2023-07-14T19:00:56Z
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Depth T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on depth for the stable diffusion 1.4 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | A image with 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| |[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| 
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers controlnet_aux ``` 2. 
Run code: ```python from controlnet_aux import MidasDetector from PIL import Image from diffusers import T2IAdapter, StableDiffusionAdapterPipeline import torch midas = MidasDetector.from_pretrained("lllyasviel/Annotators") image = Image.open('./images/depth_input.png') image = midas(image) image.save('./images/depth.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16" ) pipe.to('cuda') generator = torch.Generator().manual_seed(1) depth_out = pipe(prompt="storm trooper giving a speech", image=image, generator=generator).images[0] depth_out.save('./images/depth_output.png') ``` ![depth_input](./images/depth_input.png) ![depth](./images/depth.png) ![depth_output](./images/depth_output.png)
TencentARC/t2iadapter_color_sd14v1
TencentARC
2023-07-31T11:09:11Z
49
8
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
2023-07-14T19:00:48Z
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Color T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on color palettes for the stable diffusion 1.4 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | An image with an 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| |[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| 
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers ``` 2. 
Run code: ```python from PIL import Image import torch from diffusers import StableDiffusionAdapterPipeline, T2IAdapter image = Image.open('./images/color_ref.png') color_palette = image.resize((8, 8)) color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) color_palette.save('./images/color_palette.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16, ) pipe.to("cuda") generator = torch.manual_seed(0) out_image = pipe( "At night, glowing cubes in front of the beach", image=color_palette, generator=generator, ).images[0] out_image.save('./images/color_out_image.png') ``` ![color_ref](./images/color_ref.png) ![color_palette](./images/color_palette.png) ![color_out_image](./images/color_out_image.png)
TencentARC/t2iadapter_canny_sd14v1
TencentARC
2023-07-31T11:08:33Z
29
1
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
2023-07-14T18:59:27Z
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Canny T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on canny edges for the stable diffusion 1.4 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | An image with an 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| |[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| 
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers opencv-contrib-python ``` 2. 
Run code: ```python import cv2 from PIL import Image import torch import numpy as np from diffusers import T2IAdapter, StableDiffusionAdapterPipeline image = Image.open('./images/canny_input.png') image = np.array(image) low_threshold = 100 high_threshold = 200 image = cv2.Canny(image, low_threshold, high_threshold) image = Image.fromarray(image) image.save('./images/canny.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_canny_sd14v1", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16, ) pipe.to("cuda") generator = torch.manual_seed(0) out_image = pipe( "a rabbit wearing glasses", image=image, generator=generator, ).images[0] out_image.save('./images/canny_out.png') ``` ![canny_input](./images/canny_input.png) ![canny](./images/canny.png) ![canny_out](./images/canny_out.png)
imvladikon/het5-large
imvladikon
2023-07-31T11:05:05Z
116
1
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "he", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-20T01:58:08Z
--- language: - he --- An experimental model for Hebrew with pruned embeddings of the mT5-large model.
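A minimal usage sketch with the standard `transformers` Seq2Seq API. The repository name comes from this card; the Hebrew prompt, the `<extra_id_0>` sentinel (standard T5 span-corruption usage), and the generation settings are illustrative assumptions, not something the card documents.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the pruned-embedding Hebrew T5 checkpoint.
tokenizer = AutoTokenizer.from_pretrained("imvladikon/het5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("imvladikon/het5-large")

# Illustrative span-corruption style input; the sentinel token is an
# assumption carried over from vanilla T5, since the model is experimental.
inputs = tokenizer("שלום <extra_id_0> עולם", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```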
lvignaliAWS/sagemaker-distilbert-emotion
lvignaliAWS
2023-07-31T11:02:48Z
103
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-31T11:00:43Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: sagemaker-distilbert-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: test args: split metrics: - name: Accuracy type: accuracy value: 0.9155 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sagemaker-distilbert-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2592 - Accuracy: 0.9155 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9591 | 1.0 | 500 | 0.2592 | 0.9155 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
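For quick inference, a hedged sketch with the `transformers` pipeline; the input sentence is invented, and the label names come from the `emotion` dataset rather than from this card.

```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="lvignaliAWS/sagemaker-distilbert-emotion",
)

# Illustrative input; expect one of the six emotion-dataset labels
# (sadness, joy, love, anger, fear, surprise).
print(classifier("I can't wait to see the results of this training run!"))
```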
KokSoon/llama2-qlora-finetunined-french
KokSoon
2023-07-31T11:01:14Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T11:01:09Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
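A sketch of how the quantization settings listed above map onto `transformers` and `peft` objects when reloading the adapter. The base model identifier is an assumption inferred from the repository name (the card does not state it); the `BitsAndBytesConfig` flags mirror the config above.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Reconstruct the 4-bit config listed above: nf4 quantization,
# float16 compute dtype, no double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Hypothetical base model; the card only implies Llama 2 via the repo name.
base_id = "meta-llama/Llama-2-7b-hf"
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "KokSoon/llama2-qlora-finetunined-french")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```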
mrmrob003/ppo-SnowballTarget
mrmrob003
2023-07-31T11:00:39Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-31T11:00:33Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: mrmrob003/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e5_s6789_v3_l6_r4
KingKazma
2023-07-31T10:59:48Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T10:59:47Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
minatosnow/swinv2-base-patch4-window8-256-mineral
minatosnow
2023-07-31T10:58:53Z
79
0
transformers
[ "transformers", "pytorch", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-base-patch4-window8-256", "base_model:finetune:microsoft/swinv2-base-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-30T14:25:17Z
--- license: apache-2.0 base_model: microsoft/swinv2-base-patch4-window8-256 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swinv2-base-patch4-window8-256-mineral results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.245 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-base-patch4-window8-256-mineral This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 5.1457 - Accuracy: 0.245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 5.6939 | 0.96 | 18 | 5.6774 | 0.0067 | | 5.6713 | 1.97 | 37 | 5.6701 | 0.005 | | 5.665 | 2.99 | 56 | 5.6592 | 0.0033 | | 5.6497 | 4.0 | 75 | 5.6487 | 0.005 | | 5.6375 | 4.96 | 93 | 5.6375 | 0.0083 | | 5.6141 | 5.97 | 112 | 5.6255 | 0.0133 | | 5.5981 | 6.99 | 131 | 5.6123 | 0.0167 | | 5.5718 | 8.0 | 150 | 5.5961 | 0.0117 | | 5.5526 | 8.96 | 168 | 5.5777 | 0.0117 | | 5.5 | 9.97 | 187 | 5.5554 | 0.015 | | 5.4391 | 10.99 | 206 | 5.5290 | 0.0167 | | 5.3112 | 12.0 | 225 | 5.4914 | 0.0217 | | 5.25 | 12.96 | 243 | 5.4305 | 0.0283 | | 5.1361 | 13.97 | 262 | 5.3729 | 0.045 | | 4.985 | 14.99 | 281 | 5.2795 | 0.0483 | | 4.8031 | 16.0 | 300 | 5.1991 | 0.0633 | | 4.6843 | 16.96 | 318 | 5.0976 | 0.0683 | | 4.3959 | 17.97 | 337 | 4.9828 | 0.0783 | | 4.2277 | 18.99 | 356 | 4.8522 | 0.0883 | | 3.9594 | 20.0 | 375 | 4.7467 | 0.1017 | | 3.7637 | 20.96 | 393 | 4.6450 | 0.1183 | | 3.4748 | 21.97 | 412 | 4.5736 | 0.1317 | | 3.2937 | 22.99 | 431 | 4.4707 | 0.13 | | 3.1059 | 24.0 | 450 | 4.4103 | 0.1567 | | 2.8278 | 24.96 | 468 | 4.3476 | 0.1417 | | 2.7093 | 25.97 | 487 | 4.2624 | 0.1517 | | 2.4771 | 26.99 | 506 | 4.2065 | 0.1667 | | 2.3164 | 28.0 | 525 | 4.1656 | 0.17 | | 2.1912 | 28.96 | 543 | 4.1606 | 0.1617 | | 1.9385 | 29.97 | 562 | 4.1411 | 0.1717 | | 1.7912 | 30.99 | 581 | 4.1030 | 0.1667 | | 1.6852 | 32.0 | 600 | 4.0695 | 0.1833 | | 1.478 | 32.96 | 618 | 4.0546 | 0.1717 | | 1.3769 | 33.97 | 637 | 4.0407 | 0.1917 | | 1.2175 | 34.99 | 656 | 4.0224 | 0.19 | | 1.1616 | 36.0 | 675 | 4.0421 | 0.1867 | | 1.0305 | 36.96 | 693 | 4.0470 | 0.1933 | | 0.9513 | 37.97 | 712 | 4.0161 | 0.195 | | 0.8428 | 38.99 | 731 | 4.0153 | 0.2033 | | 0.7385 | 40.0 | 750 | 4.0600 | 0.1867 | | 0.667 | 40.96 | 768 | 4.0568 | 0.215 | | 0.6017 | 41.97 | 787 | 4.0459 | 0.2033 | | 0.6252 | 42.99 | 806 | 4.0209 | 0.2183 | | 0.5604 | 44.0 | 825 | 4.1177 | 0.2133 | | 0.4691 | 44.96 | 843 | 4.0966 | 0.1967 | | 0.4445 | 45.97 | 862 | 4.1254 | 0.215 | | 0.4151 | 
46.99 | 881 | 4.1253 | 0.2133 | | 0.3903 | 48.0 | 900 | 4.1026 | 0.195 | | 0.3551 | 48.96 | 918 | 4.1544 | 0.2033 | | 0.3656 | 49.97 | 937 | 4.2111 | 0.2117 | | 0.3651 | 50.99 | 956 | 4.2143 | 0.1983 | | 0.3282 | 52.0 | 975 | 4.1838 | 0.2217 | | 0.2737 | 52.96 | 993 | 4.1866 | 0.2083 | | 0.2867 | 53.97 | 1012 | 4.2628 | 0.1983 | | 0.3202 | 54.99 | 1031 | 4.2542 | 0.2117 | | 0.2761 | 56.0 | 1050 | 4.2476 | 0.2183 | | 0.262 | 56.96 | 1068 | 4.2466 | 0.21 | | 0.2779 | 57.97 | 1087 | 4.2886 | 0.215 | | 0.2278 | 58.99 | 1106 | 4.2835 | 0.22 | | 0.2597 | 60.0 | 1125 | 4.2966 | 0.2183 | | 0.2364 | 60.96 | 1143 | 4.3238 | 0.2167 | | 0.2523 | 61.97 | 1162 | 4.2324 | 0.2083 | | 0.2199 | 62.99 | 1181 | 4.3234 | 0.2033 | | 0.2264 | 64.0 | 1200 | 4.3092 | 0.2167 | | 0.2004 | 64.96 | 1218 | 4.3842 | 0.2017 | | 0.1958 | 65.97 | 1237 | 4.3319 | 0.2033 | | 0.1942 | 66.99 | 1256 | 4.3161 | 0.225 | | 0.2094 | 68.0 | 1275 | 4.3120 | 0.2267 | | 0.1826 | 68.96 | 1293 | 4.3541 | 0.2267 | | 0.1984 | 69.97 | 1312 | 4.3720 | 0.2317 | | 0.1863 | 70.99 | 1331 | 4.3854 | 0.225 | | 0.2019 | 72.0 | 1350 | 4.3635 | 0.215 | | 0.1679 | 72.96 | 1368 | 4.3958 | 0.2183 | | 0.1765 | 73.97 | 1387 | 4.4110 | 0.2133 | | 0.1647 | 74.99 | 1406 | 4.4675 | 0.2167 | | 0.151 | 76.0 | 1425 | 4.4919 | 0.2183 | | 0.1621 | 76.96 | 1443 | 4.4519 | 0.2183 | | 0.1417 | 77.97 | 1462 | 4.4599 | 0.2017 | | 0.1685 | 78.99 | 1481 | 4.5555 | 0.2133 | | 0.1637 | 80.0 | 1500 | 4.4596 | 0.2117 | | 0.1692 | 80.96 | 1518 | 4.4194 | 0.2233 | | 0.1771 | 81.97 | 1537 | 4.4831 | 0.2217 | | 0.1433 | 82.99 | 1556 | 4.5271 | 0.1983 | | 0.1615 | 84.0 | 1575 | 4.4425 | 0.21 | | 0.1605 | 84.96 | 1593 | 4.4559 | 0.2183 | | 0.1402 | 85.97 | 1612 | 4.5123 | 0.21 | | 0.1639 | 86.99 | 1631 | 4.5368 | 0.2117 | | 0.1333 | 88.0 | 1650 | 4.5109 | 0.2217 | | 0.1405 | 88.96 | 1668 | 4.5220 | 0.2217 | | 0.1427 | 89.97 | 1687 | 4.5414 | 0.225 | | 0.1346 | 90.99 | 1706 | 4.4954 | 0.2317 | | 0.1494 | 92.0 | 1725 | 4.5996 | 0.2217 | | 0.1432 | 92.96 | 1743 | 4.5152 | 0.2167 | | 0.1276 | 93.97 | 1762 | 4.5623 | 0.2217 | | 0.1344 | 94.99 | 1781 | 4.5049 | 0.2233 | | 0.1157 | 96.0 | 1800 | 4.5413 | 0.22 | | 0.1324 | 96.96 | 1818 | 4.5660 | 0.225 | | 0.1262 | 97.97 | 1837 | 4.5358 | 0.2317 | | 0.1299 | 98.99 | 1856 | 4.6085 | 0.225 | | 0.1077 | 100.0 | 1875 | 4.5516 | 0.2267 | | 0.1386 | 100.96 | 1893 | 4.5751 | 0.23 | | 0.1463 | 101.97 | 1912 | 4.6149 | 0.2117 | | 0.0921 | 102.99 | 1931 | 4.5543 | 0.2383 | | 0.1094 | 104.0 | 1950 | 4.6144 | 0.2283 | | 0.1327 | 104.96 | 1968 | 4.5966 | 0.2283 | | 0.101 | 105.97 | 1987 | 4.5869 | 0.22 | | 0.1275 | 106.99 | 2006 | 4.5588 | 0.2367 | | 0.1175 | 108.0 | 2025 | 4.6091 | 0.2133 | | 0.1031 | 108.96 | 2043 | 4.6347 | 0.22 | | 0.1086 | 109.97 | 2062 | 4.5894 | 0.215 | | 0.1158 | 110.99 | 2081 | 4.6496 | 0.2267 | | 0.1091 | 112.0 | 2100 | 4.6218 | 0.22 | | 0.0885 | 112.96 | 2118 | 4.6319 | 0.2133 | | 0.0732 | 113.97 | 2137 | 4.7100 | 0.21 | | 0.113 | 114.99 | 2156 | 4.6596 | 0.21 | | 0.0872 | 116.0 | 2175 | 4.6277 | 0.2217 | | 0.0862 | 116.96 | 2193 | 4.6262 | 0.2333 | | 0.0762 | 117.97 | 2212 | 4.6078 | 0.2267 | | 0.0916 | 118.99 | 2231 | 4.6198 | 0.215 | | 0.1158 | 120.0 | 2250 | 4.6560 | 0.225 | | 0.083 | 120.96 | 2268 | 4.5878 | 0.2267 | | 0.0855 | 121.97 | 2287 | 4.6666 | 0.24 | | 0.0963 | 122.99 | 2306 | 4.6186 | 0.2233 | | 0.088 | 124.0 | 2325 | 4.6250 | 0.2433 | | 0.0867 | 124.96 | 2343 | 4.6591 | 0.24 | | 0.0858 | 125.97 | 2362 | 4.6467 | 0.2267 | | 0.0928 | 126.99 | 2381 | 4.6457 | 0.2233 | | 0.1189 | 128.0 | 2400 | 4.7195 | 0.2183 | 
| 0.0816 | 128.96 | 2418 | 4.6935 | 0.2233 | | 0.09 | 129.97 | 2437 | 4.7062 | 0.2183 | | 0.0958 | 130.99 | 2456 | 4.6826 | 0.2267 | | 0.0583 | 132.0 | 2475 | 4.6903 | 0.23 | | 0.0878 | 132.96 | 2493 | 4.7640 | 0.2217 | | 0.0967 | 133.97 | 2512 | 4.7564 | 0.2333 | | 0.0902 | 134.99 | 2531 | 4.7551 | 0.225 | | 0.0709 | 136.0 | 2550 | 4.7972 | 0.23 | | 0.0724 | 136.96 | 2568 | 4.8073 | 0.22 | | 0.0901 | 137.97 | 2587 | 4.7568 | 0.2283 | | 0.0968 | 138.99 | 2606 | 4.7389 | 0.2383 | | 0.075 | 140.0 | 2625 | 4.7521 | 0.2283 | | 0.0955 | 140.96 | 2643 | 4.7091 | 0.2217 | | 0.0648 | 141.97 | 2662 | 4.7522 | 0.2283 | | 0.0792 | 142.99 | 2681 | 4.7606 | 0.2233 | | 0.0828 | 144.0 | 2700 | 4.7383 | 0.23 | | 0.0722 | 144.96 | 2718 | 4.7851 | 0.2383 | | 0.0958 | 145.97 | 2737 | 4.7559 | 0.235 | | 0.0761 | 146.99 | 2756 | 4.8086 | 0.23 | | 0.0926 | 148.0 | 2775 | 4.7643 | 0.2317 | | 0.0735 | 148.96 | 2793 | 4.7936 | 0.23 | | 0.0957 | 149.97 | 2812 | 4.7229 | 0.23 | | 0.0804 | 150.99 | 2831 | 4.7744 | 0.2133 | | 0.0807 | 152.0 | 2850 | 4.8072 | 0.2117 | | 0.0761 | 152.96 | 2868 | 4.7777 | 0.2283 | | 0.0752 | 153.97 | 2887 | 4.8695 | 0.2183 | | 0.0588 | 154.99 | 2906 | 4.8292 | 0.2283 | | 0.0735 | 156.0 | 2925 | 4.7753 | 0.2233 | | 0.0867 | 156.96 | 2943 | 4.7585 | 0.225 | | 0.094 | 157.97 | 2962 | 4.8235 | 0.2317 | | 0.0908 | 158.99 | 2981 | 4.8720 | 0.23 | | 0.0697 | 160.0 | 3000 | 4.8698 | 0.22 | | 0.0701 | 160.96 | 3018 | 4.8097 | 0.2283 | | 0.0804 | 161.97 | 3037 | 4.7381 | 0.2217 | | 0.0874 | 162.99 | 3056 | 4.7458 | 0.23 | | 0.0661 | 164.0 | 3075 | 4.7571 | 0.225 | | 0.0715 | 164.96 | 3093 | 4.8221 | 0.2367 | | 0.0599 | 165.97 | 3112 | 4.6964 | 0.2467 | | 0.0708 | 166.99 | 3131 | 4.8396 | 0.2317 | | 0.0633 | 168.0 | 3150 | 4.8040 | 0.22 | | 0.0531 | 168.96 | 3168 | 4.8200 | 0.2217 | | 0.0689 | 169.97 | 3187 | 4.8330 | 0.2333 | | 0.0502 | 170.99 | 3206 | 4.8688 | 0.2283 | | 0.0607 | 172.0 | 3225 | 4.8696 | 0.2333 | | 0.0705 | 172.96 | 3243 | 4.8514 | 0.225 | | 0.0719 | 173.97 | 3262 | 4.8854 | 0.2267 | | 0.0852 | 174.99 | 3281 | 4.8463 | 0.2233 | | 0.0525 | 176.0 | 3300 | 4.8801 | 0.2383 | | 0.069 | 176.96 | 3318 | 4.9139 | 0.2383 | | 0.0716 | 177.97 | 3337 | 4.9129 | 0.225 | | 0.0681 | 178.99 | 3356 | 4.8785 | 0.2283 | | 0.0656 | 180.0 | 3375 | 4.8776 | 0.24 | | 0.0866 | 180.96 | 3393 | 4.8923 | 0.24 | | 0.0588 | 181.97 | 3412 | 4.8773 | 0.2317 | | 0.0542 | 182.99 | 3431 | 4.8637 | 0.2417 | | 0.0508 | 184.0 | 3450 | 4.8914 | 0.23 | | 0.0591 | 184.96 | 3468 | 4.8835 | 0.225 | | 0.0626 | 185.97 | 3487 | 4.9119 | 0.2317 | | 0.0708 | 186.99 | 3506 | 4.8882 | 0.2217 | | 0.0651 | 188.0 | 3525 | 4.8978 | 0.22 | | 0.0609 | 188.96 | 3543 | 4.8657 | 0.2267 | | 0.0519 | 189.97 | 3562 | 4.9336 | 0.21 | | 0.0687 | 190.99 | 3581 | 4.8825 | 0.2217 | | 0.0401 | 192.0 | 3600 | 4.9984 | 0.2133 | | 0.0532 | 192.96 | 3618 | 4.8797 | 0.24 | | 0.0515 | 193.97 | 3637 | 4.8716 | 0.2317 | | 0.0735 | 194.99 | 3656 | 4.9249 | 0.2217 | | 0.0708 | 196.0 | 3675 | 4.9456 | 0.2083 | | 0.0612 | 196.96 | 3693 | 4.9355 | 0.23 | | 0.0511 | 197.97 | 3712 | 4.9061 | 0.215 | | 0.0573 | 198.99 | 3731 | 4.9662 | 0.2117 | | 0.0482 | 200.0 | 3750 | 4.9992 | 0.2133 | | 0.0454 | 200.96 | 3768 | 5.0024 | 0.2217 | | 0.072 | 201.97 | 3787 | 4.9026 | 0.2283 | | 0.0583 | 202.99 | 3806 | 4.9121 | 0.2283 | | 0.0655 | 204.0 | 3825 | 4.9426 | 0.23 | | 0.0375 | 204.96 | 3843 | 4.9971 | 0.2267 | | 0.0652 | 205.97 | 3862 | 4.9414 | 0.2283 | | 0.0514 | 206.99 | 3881 | 5.0227 | 0.2333 | | 0.0677 | 208.0 | 3900 | 5.0196 | 0.2217 | | 0.0805 | 208.96 | 3918 | 
4.9773 | 0.2233 | | 0.0556 | 209.97 | 3937 | 4.9303 | 0.235 | | 0.0508 | 210.99 | 3956 | 4.8852 | 0.2333 | | 0.0471 | 212.0 | 3975 | 4.9395 | 0.24 | | 0.047 | 212.96 | 3993 | 4.9143 | 0.23 | | 0.0438 | 213.97 | 4012 | 4.8727 | 0.2333 | | 0.0769 | 214.99 | 4031 | 4.9319 | 0.235 | | 0.0516 | 216.0 | 4050 | 4.9909 | 0.225 | | 0.0413 | 216.96 | 4068 | 4.9651 | 0.2167 | | 0.0402 | 217.97 | 4087 | 4.9235 | 0.2233 | | 0.036 | 218.99 | 4106 | 4.9646 | 0.2267 | | 0.0375 | 220.0 | 4125 | 4.9655 | 0.2267 | | 0.0566 | 220.96 | 4143 | 4.9532 | 0.23 | | 0.0591 | 221.97 | 4162 | 4.9400 | 0.2317 | | 0.0602 | 222.99 | 4181 | 4.9306 | 0.2417 | | 0.0339 | 224.0 | 4200 | 4.9925 | 0.2383 | | 0.0628 | 224.96 | 4218 | 4.9416 | 0.24 | | 0.0495 | 225.97 | 4237 | 4.9398 | 0.25 | | 0.0545 | 226.99 | 4256 | 4.9522 | 0.2367 | | 0.045 | 228.0 | 4275 | 4.8830 | 0.2367 | | 0.036 | 228.96 | 4293 | 4.9543 | 0.235 | | 0.0477 | 229.97 | 4312 | 4.9769 | 0.2267 | | 0.0427 | 230.99 | 4331 | 4.9233 | 0.23 | | 0.0419 | 232.0 | 4350 | 4.9587 | 0.24 | | 0.0402 | 232.96 | 4368 | 4.9977 | 0.2233 | | 0.0468 | 233.97 | 4387 | 5.0033 | 0.24 | | 0.0554 | 234.99 | 4406 | 4.9908 | 0.24 | | 0.0503 | 236.0 | 4425 | 4.9143 | 0.2433 | | 0.0462 | 236.96 | 4443 | 4.8740 | 0.245 | | 0.0354 | 237.97 | 4462 | 4.9383 | 0.2333 | | 0.0381 | 238.99 | 4481 | 5.0228 | 0.2467 | | 0.0358 | 240.0 | 4500 | 5.0114 | 0.2417 | | 0.0264 | 240.96 | 4518 | 5.0244 | 0.23 | | 0.0512 | 241.97 | 4537 | 5.0987 | 0.22 | | 0.0555 | 242.99 | 4556 | 4.9946 | 0.2233 | | 0.0545 | 244.0 | 4575 | 5.0295 | 0.23 | | 0.0395 | 244.96 | 4593 | 5.0320 | 0.225 | | 0.0512 | 245.97 | 4612 | 4.9736 | 0.2317 | | 0.0343 | 246.99 | 4631 | 4.9499 | 0.2383 | | 0.0506 | 248.0 | 4650 | 4.9478 | 0.24 | | 0.0642 | 248.96 | 4668 | 4.9233 | 0.2417 | | 0.035 | 249.97 | 4687 | 4.9348 | 0.2317 | | 0.0408 | 250.99 | 4706 | 4.9228 | 0.2483 | | 0.0431 | 252.0 | 4725 | 5.0074 | 0.24 | | 0.0273 | 252.96 | 4743 | 4.9480 | 0.235 | | 0.0495 | 253.97 | 4762 | 4.9539 | 0.2333 | | 0.0431 | 254.99 | 4781 | 5.0146 | 0.245 | | 0.0339 | 256.0 | 4800 | 5.0659 | 0.225 | | 0.0544 | 256.96 | 4818 | 5.0378 | 0.2417 | | 0.0365 | 257.97 | 4837 | 5.0133 | 0.2433 | | 0.047 | 258.99 | 4856 | 4.9964 | 0.2367 | | 0.048 | 260.0 | 4875 | 5.0102 | 0.2383 | | 0.0469 | 260.96 | 4893 | 4.9275 | 0.245 | | 0.0399 | 261.97 | 4912 | 4.9794 | 0.2483 | | 0.0372 | 262.99 | 4931 | 4.9892 | 0.2383 | | 0.0538 | 264.0 | 4950 | 4.9841 | 0.2333 | | 0.0402 | 264.96 | 4968 | 5.0038 | 0.2383 | | 0.0507 | 265.97 | 4987 | 5.0184 | 0.2333 | | 0.0472 | 266.99 | 5006 | 4.9750 | 0.245 | | 0.047 | 268.0 | 5025 | 5.0005 | 0.2533 | | 0.0436 | 268.96 | 5043 | 4.9527 | 0.25 | | 0.0323 | 269.97 | 5062 | 4.9495 | 0.2517 | | 0.0454 | 270.99 | 5081 | 4.9883 | 0.2433 | | 0.0477 | 272.0 | 5100 | 4.9656 | 0.25 | | 0.053 | 272.96 | 5118 | 4.9082 | 0.245 | | 0.0398 | 273.97 | 5137 | 4.9618 | 0.245 | | 0.0347 | 274.99 | 5156 | 4.9594 | 0.2467 | | 0.0447 | 276.0 | 5175 | 4.9556 | 0.2417 | | 0.0312 | 276.96 | 5193 | 4.9943 | 0.2283 | | 0.0521 | 277.97 | 5212 | 5.0603 | 0.2367 | | 0.0386 | 278.99 | 5231 | 5.0410 | 0.2183 | | 0.0276 | 280.0 | 5250 | 4.9705 | 0.2383 | | 0.0532 | 280.96 | 5268 | 4.9813 | 0.235 | | 0.0627 | 281.97 | 5287 | 4.9953 | 0.2317 | | 0.0455 | 282.99 | 5306 | 4.9607 | 0.2433 | | 0.0418 | 284.0 | 5325 | 4.9816 | 0.2417 | | 0.0355 | 284.96 | 5343 | 4.9670 | 0.245 | | 0.0307 | 285.97 | 5362 | 4.9994 | 0.2367 | | 0.0504 | 286.99 | 5381 | 5.0265 | 0.24 | | 0.0368 | 288.0 | 5400 | 5.0336 | 0.235 | | 0.035 | 288.96 | 5418 | 5.0561 | 0.2417 | | 0.0518 | 
289.97 | 5437 | 5.0769 | 0.2433 | | 0.0383 | 290.99 | 5456 | 5.0629 | 0.235 | | 0.0381 | 292.0 | 5475 | 4.9694 | 0.2367 | | 0.0307 | 292.96 | 5493 | 4.9370 | 0.2417 | | 0.0447 | 293.97 | 5512 | 4.9549 | 0.2333 | | 0.0499 | 294.99 | 5531 | 4.9883 | 0.2333 | | 0.045 | 296.0 | 5550 | 4.9923 | 0.24 | | 0.0579 | 296.96 | 5568 | 4.9793 | 0.24 | | 0.0527 | 297.97 | 5587 | 5.0027 | 0.2333 | | 0.0353 | 298.99 | 5606 | 5.0196 | 0.2283 | | 0.0247 | 300.0 | 5625 | 5.0492 | 0.24 | | 0.0339 | 300.96 | 5643 | 5.0539 | 0.2317 | | 0.0525 | 301.97 | 5662 | 5.1091 | 0.2233 | | 0.0339 | 302.99 | 5681 | 5.0659 | 0.2333 | | 0.0406 | 304.0 | 5700 | 5.0559 | 0.2233 | | 0.0252 | 304.96 | 5718 | 5.0228 | 0.2233 | | 0.0358 | 305.97 | 5737 | 5.0661 | 0.2333 | | 0.031 | 306.99 | 5756 | 5.0571 | 0.2383 | | 0.0334 | 308.0 | 5775 | 5.0452 | 0.2417 | | 0.0368 | 308.96 | 5793 | 5.0873 | 0.225 | | 0.042 | 309.97 | 5812 | 5.1043 | 0.24 | | 0.0375 | 310.99 | 5831 | 5.0588 | 0.2383 | | 0.026 | 312.0 | 5850 | 5.0150 | 0.2417 | | 0.0326 | 312.96 | 5868 | 5.0101 | 0.2433 | | 0.0286 | 313.97 | 5887 | 5.0106 | 0.245 | | 0.0164 | 314.99 | 5906 | 5.0428 | 0.24 | | 0.0274 | 316.0 | 5925 | 5.0479 | 0.2317 | | 0.0263 | 316.96 | 5943 | 4.9870 | 0.24 | | 0.0371 | 317.97 | 5962 | 4.9846 | 0.2417 | | 0.0355 | 318.99 | 5981 | 5.0133 | 0.2367 | | 0.0342 | 320.0 | 6000 | 5.0306 | 0.2467 | | 0.0377 | 320.96 | 6018 | 5.0073 | 0.235 | | 0.0369 | 321.97 | 6037 | 4.9949 | 0.2517 | | 0.0375 | 322.99 | 6056 | 5.0409 | 0.2483 | | 0.0287 | 324.0 | 6075 | 5.0325 | 0.24 | | 0.0319 | 324.96 | 6093 | 5.0241 | 0.245 | | 0.0315 | 325.97 | 6112 | 5.0320 | 0.235 | | 0.0404 | 326.99 | 6131 | 5.0403 | 0.2383 | | 0.0287 | 328.0 | 6150 | 5.0792 | 0.2383 | | 0.0611 | 328.96 | 6168 | 5.0378 | 0.2367 | | 0.0296 | 329.97 | 6187 | 5.0192 | 0.2333 | | 0.0259 | 330.99 | 6206 | 5.0362 | 0.2317 | | 0.0257 | 332.0 | 6225 | 5.0494 | 0.2433 | | 0.0294 | 332.96 | 6243 | 5.0462 | 0.2383 | | 0.0483 | 333.97 | 6262 | 5.0474 | 0.225 | | 0.0332 | 334.99 | 6281 | 5.0521 | 0.2233 | | 0.0278 | 336.0 | 6300 | 5.0483 | 0.2367 | | 0.0241 | 336.96 | 6318 | 5.0617 | 0.2367 | | 0.0356 | 337.97 | 6337 | 5.0549 | 0.2367 | | 0.0326 | 338.99 | 6356 | 5.0920 | 0.225 | | 0.0255 | 340.0 | 6375 | 5.1311 | 0.2283 | | 0.0415 | 340.96 | 6393 | 5.1072 | 0.2267 | | 0.0241 | 341.97 | 6412 | 5.0731 | 0.2217 | | 0.0343 | 342.99 | 6431 | 5.0496 | 0.2283 | | 0.0196 | 344.0 | 6450 | 5.0186 | 0.2383 | | 0.0451 | 344.96 | 6468 | 5.0546 | 0.235 | | 0.0291 | 345.97 | 6487 | 5.0705 | 0.2367 | | 0.0244 | 346.99 | 6506 | 5.0857 | 0.2267 | | 0.0356 | 348.0 | 6525 | 5.0919 | 0.2367 | | 0.03 | 348.96 | 6543 | 5.0770 | 0.2333 | | 0.033 | 349.97 | 6562 | 5.0676 | 0.2333 | | 0.0231 | 350.99 | 6581 | 5.0522 | 0.2383 | | 0.0348 | 352.0 | 6600 | 5.0550 | 0.2283 | | 0.0234 | 352.96 | 6618 | 5.0709 | 0.2317 | | 0.0302 | 353.97 | 6637 | 5.0728 | 0.2283 | | 0.0269 | 354.99 | 6656 | 5.1108 | 0.2217 | | 0.0287 | 356.0 | 6675 | 5.1299 | 0.2283 | | 0.0459 | 356.96 | 6693 | 5.0675 | 0.2333 | | 0.0357 | 357.97 | 6712 | 5.0613 | 0.235 | | 0.0254 | 358.99 | 6731 | 5.0390 | 0.245 | | 0.0208 | 360.0 | 6750 | 5.0600 | 0.2333 | | 0.0264 | 360.96 | 6768 | 5.1158 | 0.24 | | 0.0248 | 361.97 | 6787 | 5.1090 | 0.2417 | | 0.0289 | 362.99 | 6806 | 5.1165 | 0.2417 | | 0.0234 | 364.0 | 6825 | 5.0914 | 0.24 | | 0.0354 | 364.96 | 6843 | 5.0802 | 0.2317 | | 0.0261 | 365.97 | 6862 | 5.0958 | 0.2433 | | 0.0413 | 366.99 | 6881 | 5.1149 | 0.2333 | | 0.0328 | 368.0 | 6900 | 5.1452 | 0.2333 | | 0.0285 | 368.96 | 6918 | 5.1633 | 0.2317 | | 0.035 | 369.97 | 
6937 | 5.1334 | 0.2367 | | 0.0223 | 370.99 | 6956 | 5.1195 | 0.2483 | | 0.0216 | 372.0 | 6975 | 5.0994 | 0.24 | | 0.0246 | 372.96 | 6993 | 5.0976 | 0.2467 | | 0.0195 | 373.97 | 7012 | 5.1302 | 0.2383 | | 0.0323 | 374.99 | 7031 | 5.1269 | 0.245 | | 0.0402 | 376.0 | 7050 | 5.1254 | 0.23 | | 0.0249 | 376.96 | 7068 | 5.1320 | 0.2267 | | 0.0282 | 377.97 | 7087 | 5.1341 | 0.2233 | | 0.0284 | 378.99 | 7106 | 5.1457 | 0.2267 | | 0.0247 | 380.0 | 7125 | 5.1302 | 0.2283 | | 0.0254 | 380.96 | 7143 | 5.1563 | 0.2317 | | 0.0307 | 381.97 | 7162 | 5.1569 | 0.2283 | | 0.025 | 382.99 | 7181 | 5.1598 | 0.2267 | | 0.0438 | 384.0 | 7200 | 5.1443 | 0.23 | | 0.0318 | 384.96 | 7218 | 5.1208 | 0.2367 | | 0.0131 | 385.97 | 7237 | 5.0810 | 0.2317 | | 0.0223 | 386.99 | 7256 | 5.0733 | 0.235 | | 0.0212 | 388.0 | 7275 | 5.1003 | 0.23 | | 0.0269 | 388.96 | 7293 | 5.1205 | 0.2283 | | 0.0334 | 389.97 | 7312 | 5.1224 | 0.2283 | | 0.0285 | 390.99 | 7331 | 5.1208 | 0.2283 | | 0.0427 | 392.0 | 7350 | 5.1224 | 0.2383 | | 0.0228 | 392.96 | 7368 | 5.0950 | 0.2333 | | 0.0144 | 393.97 | 7387 | 5.0910 | 0.235 | | 0.0293 | 394.99 | 7406 | 5.0855 | 0.2483 | | 0.0135 | 396.0 | 7425 | 5.0941 | 0.24 | | 0.0297 | 396.96 | 7443 | 5.1149 | 0.2367 | | 0.0242 | 397.97 | 7462 | 5.1468 | 0.24 | | 0.0188 | 398.99 | 7481 | 5.1389 | 0.24 | | 0.013 | 400.0 | 7500 | 5.1085 | 0.24 | | 0.0215 | 400.96 | 7518 | 5.1079 | 0.2417 | | 0.0246 | 401.97 | 7537 | 5.1225 | 0.2433 | | 0.0381 | 402.99 | 7556 | 5.1049 | 0.2333 | | 0.0268 | 404.0 | 7575 | 5.0964 | 0.2367 | | 0.022 | 404.96 | 7593 | 5.1123 | 0.2417 | | 0.0258 | 405.97 | 7612 | 5.1242 | 0.2367 | | 0.0234 | 406.99 | 7631 | 5.1538 | 0.2367 | | 0.0346 | 408.0 | 7650 | 5.1587 | 0.24 | | 0.0251 | 408.96 | 7668 | 5.1695 | 0.2367 | | 0.0319 | 409.97 | 7687 | 5.1242 | 0.2367 | | 0.0296 | 410.99 | 7706 | 5.1253 | 0.245 | | 0.0311 | 412.0 | 7725 | 5.1319 | 0.235 | | 0.0246 | 412.96 | 7743 | 5.1435 | 0.2383 | | 0.0148 | 413.97 | 7762 | 5.1314 | 0.2433 | | 0.0196 | 414.99 | 7781 | 5.1186 | 0.2433 | | 0.0206 | 416.0 | 7800 | 5.1153 | 0.24 | | 0.0377 | 416.96 | 7818 | 5.1252 | 0.23 | | 0.0246 | 417.97 | 7837 | 5.1409 | 0.235 | | 0.0115 | 418.99 | 7856 | 5.1439 | 0.235 | | 0.0139 | 420.0 | 7875 | 5.1273 | 0.2333 | | 0.0238 | 420.96 | 7893 | 5.1244 | 0.2333 | | 0.0228 | 421.97 | 7912 | 5.1404 | 0.2333 | | 0.0276 | 422.99 | 7931 | 5.1417 | 0.23 | | 0.0276 | 424.0 | 7950 | 5.1281 | 0.235 | | 0.0186 | 424.96 | 7968 | 5.1225 | 0.24 | | 0.0463 | 425.97 | 7987 | 5.1386 | 0.2333 | | 0.0192 | 426.99 | 8006 | 5.1411 | 0.2383 | | 0.0277 | 428.0 | 8025 | 5.1424 | 0.235 | | 0.0304 | 428.96 | 8043 | 5.1354 | 0.23 | | 0.0303 | 429.97 | 8062 | 5.1346 | 0.2383 | | 0.0236 | 430.99 | 8081 | 5.1426 | 0.2367 | | 0.0279 | 432.0 | 8100 | 5.1394 | 0.24 | | 0.0218 | 432.96 | 8118 | 5.1427 | 0.2383 | | 0.0194 | 433.97 | 8137 | 5.1346 | 0.235 | | 0.0116 | 434.99 | 8156 | 5.1279 | 0.2333 | | 0.0201 | 436.0 | 8175 | 5.1297 | 0.2333 | | 0.0319 | 436.96 | 8193 | 5.1245 | 0.235 | | 0.0225 | 437.97 | 8212 | 5.1166 | 0.2367 | | 0.0176 | 438.99 | 8231 | 5.1184 | 0.2333 | | 0.015 | 440.0 | 8250 | 5.1266 | 0.235 | | 0.0226 | 440.96 | 8268 | 5.1210 | 0.2317 | | 0.0158 | 441.97 | 8287 | 5.1206 | 0.2383 | | 0.0182 | 442.99 | 8306 | 5.1153 | 0.2367 | | 0.0318 | 444.0 | 8325 | 5.1076 | 0.235 | | 0.0283 | 444.96 | 8343 | 5.1095 | 0.2333 | | 0.0265 | 445.97 | 8362 | 5.1310 | 0.2333 | | 0.0187 | 446.99 | 8381 | 5.1357 | 0.235 | | 0.02 | 448.0 | 8400 | 5.1346 | 0.2283 | | 0.0238 | 448.96 | 8418 | 5.1394 | 0.23 | | 0.0176 | 449.97 | 8437 | 5.1368 | 0.2333 | | 
0.0193 | 450.99 | 8456 | 5.1376 | 0.235 | | 0.034 | 452.0 | 8475 | 5.1447 | 0.2383 | | 0.0204 | 452.96 | 8493 | 5.1470 | 0.235 | | 0.0188 | 453.97 | 8512 | 5.1478 | 0.235 | | 0.0256 | 454.99 | 8531 | 5.1441 | 0.2367 | | 0.026 | 456.0 | 8550 | 5.1448 | 0.235 | | 0.0179 | 456.96 | 8568 | 5.1457 | 0.2317 | | 0.0233 | 457.97 | 8587 | 5.1456 | 0.235 | | 0.0186 | 458.99 | 8606 | 5.1381 | 0.235 | | 0.0203 | 460.0 | 8625 | 5.1357 | 0.2333 | | 0.0266 | 460.96 | 8643 | 5.1313 | 0.2417 | | 0.0242 | 461.97 | 8662 | 5.1319 | 0.2417 | | 0.0234 | 462.99 | 8681 | 5.1338 | 0.24 | | 0.0184 | 464.0 | 8700 | 5.1343 | 0.2417 | | 0.016 | 464.96 | 8718 | 5.1374 | 0.2383 | | 0.0357 | 465.97 | 8737 | 5.1385 | 0.24 | | 0.021 | 466.99 | 8756 | 5.1406 | 0.2417 | | 0.0209 | 468.0 | 8775 | 5.1426 | 0.2417 | | 0.0186 | 468.96 | 8793 | 5.1428 | 0.2417 | | 0.027 | 469.97 | 8812 | 5.1442 | 0.2417 | | 0.0146 | 470.99 | 8831 | 5.1449 | 0.2433 | | 0.0237 | 472.0 | 8850 | 5.1456 | 0.2433 | | 0.0147 | 472.96 | 8868 | 5.1467 | 0.245 | | 0.0268 | 473.97 | 8887 | 5.1457 | 0.245 | | 0.015 | 474.99 | 8906 | 5.1465 | 0.2433 | | 0.0116 | 476.0 | 8925 | 5.1462 | 0.2433 | | 0.0151 | 476.96 | 8943 | 5.1453 | 0.2433 | | 0.0225 | 477.97 | 8962 | 5.1453 | 0.245 | | 0.0267 | 478.99 | 8981 | 5.1456 | 0.245 | | 0.031 | 480.0 | 9000 | 5.1457 | 0.245 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.13.1 - Datasets 2.14.0 - Tokenizers 0.13.3
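At 0.245 accuracy the classifier is far from reliable, but a minimal inference sketch with the `transformers` image-classification pipeline shows how it would be queried; the image path is a placeholder.

```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned SwinV2 checkpoint.
classifier = pipeline(
    "image-classification",
    model="minatosnow/swinv2-base-patch4-window8-256-mineral",
)

# Placeholder path; any RGB photo of a mineral sample would do.
for pred in classifier("mineral_sample.jpg", top_k=5):
    print(f"{pred['label']}: {pred['score']:.3f}")
```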
NasimB/gutenberg_fixed-log-rarity-seed
NasimB
2023-07-31T10:56:01Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-29T08:39:07Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gutenberg_fixed-log-rarity-seed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gutenberg_fixed-log-rarity-seed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3507 | 0.29 | 500 | 5.3376 | | 5.026 | 0.58 | 1000 | 4.9262 | | 4.7053 | 0.87 | 1500 | 4.6903 | | 4.4487 | 1.16 | 2000 | 4.5489 | | 4.2937 | 1.46 | 2500 | 4.4265 | | 4.1927 | 1.75 | 3000 | 4.3290 | | 4.0801 | 2.04 | 3500 | 4.2509 | | 3.8874 | 2.33 | 4000 | 4.2061 | | 3.8644 | 2.62 | 4500 | 4.1586 | | 3.8263 | 2.91 | 5000 | 4.1033 | | 3.6474 | 3.2 | 5500 | 4.0967 | | 3.5836 | 3.49 | 6000 | 4.0710 | | 3.5631 | 3.79 | 6500 | 4.0388 | | 3.4824 | 4.08 | 7000 | 4.0325 | | 3.3181 | 4.37 | 7500 | 4.0320 | | 3.3073 | 4.66 | 8000 | 4.0153 | | 3.2955 | 4.95 | 8500 | 4.0049 | | 3.1561 | 5.24 | 9000 | 4.0179 | | 3.1309 | 5.53 | 9500 | 4.0163 | | 3.1289 | 5.82 | 10000 | 4.0154 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
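A minimal generation sketch; the checkpoint exposes the standard GPT-2 interface, while the prompt and sampling settings are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NasimB/gutenberg_fixed-log-rarity-seed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt and sampling settings.
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```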
Pranavagrl/gpt2-wikitext2
Pranavagrl
2023-07-31T10:55:09Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-27T06:50:55Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: gpt2-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.9387 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.5389 | 1.0 | 1141 | 5.2704 | | 5.1418 | 2.0 | 2282 | 5.0180 | | 4.9567 | 3.0 | 3423 | 4.9387 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
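Since the card reports only cross-entropy loss, converting it to perplexity (the exponential of the loss) makes the number easier to interpret; the value below is derived from the final eval loss of 4.9387.

```python
import math

# Perplexity = exp(cross-entropy loss) for a causal language model.
print(math.exp(4.9387))  # ~139.6
```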
mok33/q-FrozenLake-v1-4x4-noSlippery
mok33
2023-07-31T10:53:09Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T10:53:06Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="mok33/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
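The usage snippet above assumes a `load_from_hub` helper. Below is a self-contained sketch along the lines of the Deep RL course helper, assuming the checkpoint is a plain pickle (as the `.pkl` filename suggests) and that `gymnasium` stands in for classic `gym`:

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-table checkpoint from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub("mok33/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
# The card warns to re-add environment kwargs such as is_slippery=False.
env = gym.make(model["env_id"], is_slippery=False)
```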
shre-db/bert-finetuned-ner
shre-db
2023-07-31T10:52:33Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-31T10:26:52Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9313871020946726 - name: Recall type: recall value: 0.9503534163581285 - name: F1 type: f1 value: 0.9407746772178258 - name: Accuracy type: accuracy value: 0.9866515570730559 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0577 - Precision: 0.9314 - Recall: 0.9504 - F1: 0.9408 - Accuracy: 0.9867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0781 | 1.0 | 1756 | 0.0729 | 0.9083 | 0.9349 | 0.9214 | 0.9807 | | 0.0413 | 2.0 | 3512 | 0.0581 | 0.9196 | 0.9465 | 0.9328 | 0.9854 | | 0.0268 | 3.0 | 5268 | 0.0577 | 0.9314 | 0.9504 | 0.9408 | 0.9867 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
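A hedged inference sketch with the token-classification pipeline; the example sentence is invented, and `aggregation_strategy="simple"` merges word pieces back into whole entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="shre-db/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entities
)

# Illustrative sentence; expect CoNLL-2003 entity types (PER, ORG, LOC, MISC).
print(ner("Angela Merkel visited the Hugging Face office in Paris."))
```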
hicabi/Valorant_Astra_TR
hicabi
2023-07-31T10:47:32Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-07-31T10:28:34Z
--- license: openrail --- This is the voice model of Astra, trained for 600 epochs on a 10-minute dataset. Training time: 3 hours 25 minutes. The training was done by me. Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license. Credits: if you share a cover made with this model on any platform, please give credits. The recommended pitch value is "-3", but you can adjust it to suit your own voice. Discord: .hicabi
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e3_s6789_v3_l6_r4
KingKazma
2023-07-31T10:44:31Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T10:44:29Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
minhalvp/sd-dreambooth-hillsroad
minhalvp
2023-07-31T10:37:04Z
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "base_model:finetune:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-16T22:31:09Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1 instance_prompt: a photo of Hills Road college tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - minhalvp/sd-dreambooth-hillsroad This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on photos of Hills Road college using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: False.
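A minimal generation sketch with `diffusers`, using the instance prompt from the card's metadata:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "minhalvp/sd-dreambooth-hillsroad", torch_dtype=torch.float16
).to("cuda")

# The instance prompt below is taken from the card's metadata.
image = pipe("a photo of Hills Road college").images[0]
image.save("hills_road.png")
```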
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e2_s6789_v3_l6_r4
KingKazma
2023-07-31T10:36:51Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T10:36:50Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
c4big/xlm-roberta-base-finetuned-panx-de-fr
c4big
2023-07-31T10:33:43Z
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-31T10:22:01Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1658 - F1: 0.8588 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2908 | 1.0 | 715 | 0.1909 | 0.8125 | | 0.1466 | 2.0 | 1430 | 0.1613 | 0.8492 | | 0.0945 | 3.0 | 2145 | 0.1658 | 0.8588 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
namec/llama2-qlora-finetunined-flc
namec
2023-07-31T10:29:35Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-31T10:29:29Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
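For reference, the quantization settings listed above correspond to the following `BitsAndBytesConfig`; this is a sketch only, since the card does not name the base model (the id below is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "meta-llama/Llama-2-7b-hf" is a placeholder; the card does not state the base model.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", quantization_config=bnb_config)
```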
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e1_s6789_v3_l6_r4
KingKazma
2023-07-31T10:29:13Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T10:29:12Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
cpierse/wav2vec2-large-xlsr-53-esperanto
cpierse
2023-07-31T10:27:11Z
5,661
6
transformers
[ "transformers", "pytorch", "jax", "safetensors", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "eo", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: eo datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Esperanto by Charles Pierse results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice eo type: common_voice args: eo metrics: - name: Test WER type: wer value: 12.31 --- # Wav2Vec2-Large-XLSR-53-eo Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "eo", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto") model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Esperanto test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import jiwer def chunked_wer(targets, predictions, chunk_size=None): if chunk_size is None: return jiwer.wer(targets, predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end]) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) test_dataset = load_dataset("common_voice", "eo", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto") model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\„\«\(\»\)\’\']' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. 
# We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"], chunk_size=2000))) ``` **Test Result**: 12.31 % ## Training The Common Voice `train` and `validation` datasets were used for training.
FelixChao/vicuna-7b-instruct-ft-adapters-chemical
FelixChao
2023-07-31T10:26:00Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T10:25:57Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0 - PEFT 0.5.0.dev0
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e0_s6789_v3_l6_r4
KingKazma
2023-07-31T10:21:33Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T10:21:32Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
runningsnake/distilbert-base-uncased-finetuned-imdb
runningsnake
2023-07-31T10:20:00Z
70
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-31T08:39:27Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: runningsnake/distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # runningsnake/distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8559 - Validation Loss: 2.5783 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.8559 | 2.5783 | 0 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.2 - Tokenizers 0.13.3
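A minimal fill-mask sketch for this checkpoint; since the repo ships TensorFlow weights, `framework="tf"` is passed explicitly (an assumption about how you want to load it):

```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="runningsnake/distilbert-base-uncased-finetuned-imdb",
    framework="tf",  # the repo holds TensorFlow weights
)
print(unmasker("This movie is a great [MASK]."))
```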
digiplay/OrangeChillMix_v7fix
digiplay
2023-07-31T10:08:18Z
397
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-31T09:52:26Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info : https://civitai.com/models/9486?modelVersionId=129974 Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/6af60647-732f-4d14-8a4e-512acc1440f0/width=1024/20247604-1034985627-(masterpiece,%20best%20quality_1),%20(photorealistic_1.2),%20light,%20depth%20of%20field,%20(detailed%20face,%20face%20focus_1),%20game%20cg,%20ultra%20detail.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/0946c9b3-e4fd-4a38-aa37-1c44e392b5cc/width=1024/20247628-300149503-best%20quality,%20masterpiece,%20(close-up,face%20focus_0.8),__sunglasses,%20necktie,%20animal%20ears,%20gloves,%201girl,%20black%20hair,%20shirt,%20suit,.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7e31b37b-ab22-4334-9992-3765655a2e45/width=1024/20247621-1274325725-masterpiece,%20best%20quality,%20lens%20flare,%20depth%20of%20field,(backlighting,%20Backlight_1.1),%20grating,raster,(Light%20through%20hair_1.2),_1g.jpeg)
sattebaaz/ppo-LunarLander-v2
sattebaaz
2023-07-31T10:07:40Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T10:07:24Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.03 +/- 14.93 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename below is an assumption, so verify it against the repo's file list: ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="sattebaaz/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) # the filename above is assumed, not confirmed by the card ```
Marco-Cheung/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
Marco-Cheung
2023-07-31T09:58:56Z
164
0
transformers
[ "transformers", "pytorch", "tensorboard", "audio-spectrogram-transformer", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:MIT/ast-finetuned-audioset-10-10-0.4593", "base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593", "license:bsd-3-clause", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-31T09:38:22Z
--- license: bsd-3-clause base_model: MIT/ast-finetuned-audioset-10-10-0.4593 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: ast-finetuned-audioset-10-10-0.4593-finetuned2-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.93 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ast-finetuned-audioset-10-10-0.4593-finetuned2-gtzan This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.3235 - Accuracy: 0.93 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6202 | 0.99 | 28 | 0.6153 | 0.83 | | 0.3175 | 1.98 | 56 | 0.4811 | 0.84 | | 0.123 | 2.97 | 84 | 0.4716 | 0.85 | | 0.0279 | 4.0 | 113 | 0.4575 | 0.88 | | 0.0348 | 4.99 | 141 | 0.4270 | 0.88 | | 0.0331 | 5.98 | 169 | 0.3423 | 0.89 | | 0.0022 | 6.97 | 197 | 0.3178 | 0.94 | | 0.0009 | 8.0 | 226 | 0.4422 | 0.9 | | 0.0006 | 8.99 | 254 | 0.3187 | 0.92 | | 0.0005 | 9.91 | 280 | 0.3235 | 0.93 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.14.2 - Tokenizers 0.13.3
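A minimal inference sketch; the audio path below is a placeholder for a local file:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Marco-Cheung/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
# "song.wav" is a placeholder path to a local audio clip.
print(classifier("song.wav"))
```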
aroot/eng-fra-sent_delfy
aroot
2023-07-31T09:58:02Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-31T09:21:08Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-sent_delfy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-sent_delfy This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1613 - Bleu: 31.3966 - Chrf: 53.4743 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
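A minimal translation sketch; the mBART-50 language codes (`en_XX`, `fr_XX`) are assumptions about how the model was fine-tuned:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "aroot/eng-fra-sent_delfy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # assumption: English source, mBART-50 code
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```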
JinsooKim/Taxi
JinsooKim
2023-07-31T09:50:48Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T09:50:46Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.78 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage model = load_from_hub(repo_id="JinsooKim/Taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"])
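The snippet above calls `load_from_hub` without defining or importing it; a minimal definition consistent with that usage (assuming the checkpoint is a pickled dict, as in the Hugging Face Deep RL course) might look like:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-table checkpoint from the Hub and deserialize it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```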
JinsooKim/q-FrozenLake-v1-4x4-noSlippery
JinsooKim
2023-07-31T09:39:12Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T09:39:11Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage model = load_from_hub(repo_id="JinsooKim/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"])
jkhan447/HateXplain-weighted-agreement-labeled
jkhan447
2023-07-31T09:38:17Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-31T08:42:29Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: HateXplain-weighted-agreement-labeled results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HateXplain-weighted-agreement-labeled This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0701 - Accuracy: 0.6760 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
golaxy/gogpt2-13b-pretrain
golaxy
2023-07-31T09:36:19Z
10
3
transformers
[ "transformers", "pytorch", "tensorboard", "llama", "text-generation", "llama2", "chinese-llama2", "gogpt2-7b", "zh", "dataset:BelleGroup/train_0.5M_CN", "dataset:BelleGroup/train_1M_CN", "dataset:c-s-ale/alpaca-gpt4-data-zh", "dataset:BAAI/COIG", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-31T08:24:40Z
--- license: apache-2.0 datasets: - BelleGroup/train_0.5M_CN - BelleGroup/train_1M_CN - c-s-ale/alpaca-gpt4-data-zh - BAAI/COIG language: - zh tags: - llama2 - chinese-llama2 - gogpt2-7b --- # GoGPT2-13B: a Chinese-English enhanced large model trained from Llama2-13b ![Chinese LLaMA2 7B](./assets/gogpt-banner-tou.png) <p align="center"> <img alt="GitHub" src="https://img.shields.io/github/license/ymcui/Chinese-LLaMA-Alpaca.svg?color=blue&style=flat-square"> <img alt="GitHub top language" src="https://img.shields.io/github/languages/top/ymcui/Chinese-LLaMA-Alpaca"> </p> > ICT Chinese-English enhanced foundation models: 7 billion and 13 billion parameters GoGPT weights are provided on 🤗 Hugging Face; the gogpt-7b and gogpt2-7b weights are currently open. | Model | Base model | Size | Download | |-------------------------------------------------------------|-----------|------|-------------------------------------------------| | [golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) | Llama-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt-7b) | | [golaxy/gogpt2-7b](https://huggingface.co/golaxy/gogpt2-7b) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b) | | [golaxy/gogpt2-7b-pretrain](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | | [golaxy/gogpt2-13b-pretrain](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | [GoGPT-Github](https://github.com/yanqiangmiffy/GoGPT) ## 🚀 Step 1: train the tokenizer [🐱 How to train an LLM tokenizer from scratch](https://github.com/yanqiangmiffy/how-to-train-tokenizer) ```text ├── data │ └── corpus.txt training corpus ├── llama │ ├── tokenizer_checklist.chk │ └── tokenizer.model ├── merged_tokenizer_hf merged result, HF format │ ├── special_tokens_map.json │ ├── tokenizer_config.json │ └── tokenizer.model ├── merged_tokenizer_sp │ └── open_llama.model # ├── merge_tokenizer │ └── tokenizer.model ├── open_llama.model trained SentencePiece model ├── open_llama.vocab trained SentencePiece vocabulary ├── README.md ├── step0_step0_process_text.py prepare the training corpus from multiple datasets ├── step1_make_corpus.py prepare the training corpus from Chinese Wikipedia ├── step2_train_tokenzier.py train the tokenizer ├── step3_tokenzier_segment.py test the trained tokenizer, including encode and decode examples └── step4_merge_tokenizers.py merge with the original LLaMA tokenizer to obtain an HF-format tokenizer ``` The original Llama2 vocabulary has **32000** tokens; after merging with the 40k-vocab Chinese tokenizer the vocabulary grows to **68419**, and to **68420** after adding a pad token for SFT. ## 🚀 Step 2: continued pre-training > Incremental, continued pre-training of LLaMA on a Chinese corpus; the corpus is currently 20GB and will keep being updated. ## 🚀 Step 3: supervised fine-tuning Fine-tuned on diverse instruction data, including Chinese and English instructions from BELLE and Alpaca plus MOSS multi-turn dialogue data, for instruction tuning on more than 1.2M examples: - BELLE data: 120k examples, v1 - stanford_alpaca: 52k examples, v2 - stanford_alpaca_gpt4_zh: 52k examples, v2 - [sharegpt](data%2Ffinetune%2Fsharegpt): 90k examples - [fnlp/moss-003-sft-data](https://huggingface.co/datasets/fnlp/moss-003-sft-data): MOSS multi-turn dialogue data After sampling by output length (around 500), 110k instruction examples were selected for SFT training. ## Sample outputs - 西瓜和南瓜有什么区别 (What are the differences between watermelons and pumpkins?) ``` 1. 外观：西瓜通常是圆形或椭圆形的，有浅黄色或棕色的外皮，内部有红色或黄色的果肉。而南瓜则是一个圆形或椭圆形的，通常有浅黄色或棕色的外皮，内部有橙黄色的果肉。 2. 品种：西瓜主要分为三种品种：小型、中等和巨型。小型西瓜通常只有一个或两个果核，中等西瓜通常有两个或三个果核，巨型西瓜则有四个或更多果核。南瓜主要分为两种品种：南瓜和萝卜南瓜。南瓜通常有两个或三个果核，萝卜南瓜则有四个或更多果核。 3. 生长环境：西瓜需要温暖的气候，并且需要充足的阳光。它们通常生长在热带或亚热带地区。南瓜则在温带和寒带地区生长。南瓜需要更长的冬季来生长，而西瓜通常在春季和夏季生长。 4. 营养价值：西瓜和南瓜都是富含维生素C和纤维素的水果。西瓜还富含抗氧化剂和钾，而南瓜富含叶酸和维生素E。 总的来说，西瓜和南瓜是两种完全不同的果类，它们在外观、品种、生长环境和营养价值上有很多区别。 ``` - 如何提高我的时间管理技能？ (How can I improve my time-management skills?) ``` Response: 1. 制定计划和目标：制定每周或每日的计划和目标，以帮助您更好地分配时间。 2. 优先考虑最重要的任务：优先完成最重要的任务，这有助于避免分心和浪费时间。 3. 避免拖延：尝试避免拖延，尽早开始任务，这样您可以更容易地完成它们。 4. 集中注意力：集中注意力，避免同时处理多个任务，这有助于减少分心和提高效率。5. 利用工具和技术：利用时间管理工具和技术，如日历、提醒和待办事项列表，以帮助您更好地管理时间。 6. 学会说“不”：学会拒绝不必要的事情，这有助于您更好地利用时间，并专注于最重要的任务。 7. 保持积极心态：保持积极心态，相信自己能够管理时间，并尝试不断学习和发展自己的时间管理技能。 8. 休息和放松：尝试保持工作和生活的平衡，并定期休息和放松，这有助于提高时间管理技能和提高工作效率。 ``` - Python和JavaScript编程语言的主要区别是什么？ (What are the main differences between the Python and JavaScript programming languages?) ``` 1. 
语法不同：Python是一种面向对象的编程语言，它的语法比较严格，需要使用花括号来标识代码块。JavaScript是一种脚本语言，它的语法相对比较松散，需要使用花括号或缩进来标识代码块。 2. 应用领域不同：Python广泛应用于数据分析、机器学习、人工智能等领域，由于其可读性好、语法清晰、功能强大等特点。JavaScript则主要应用于前端开发、后端开发、移动端开发等领域。 3. 运行环境不同：Python的运行环境较为独立，可以在不同的操作系统上运行。JavaScript的运行环境相对较复杂，需要考虑浏览器和Web服务器的兼容性问题。 4. 应用场景不同：Python更适合进行科学计算、数据分析、机器学习等方面的应用，如Pandas、Numpy等库；而JavaScript则更适合进行前端开发、后端开发、移动端开发等领域的脚本编程，如React、Angular等框架。 5. 库和框架不同：Python拥有丰富的标准库和第三方库，如Django、Flask等；而JavaScript则拥有流行的Web开发框架，如React、Angular等。 ``` - AI会如何影响农业生产? (How will AI affect agricultural production?) ``` AI将有助于提高农业生产效率，优化作物种植和收成过程，以及提高农产品的质量和品质。 AI技术可以应用于植物监测、灌溉管理、病虫害防治、土壤分析等方面，减少人力成本，提高生产效率，同时也可以降低农业生产中的资源浪费和环境污染。例如，AI可以监测土壤湿度、气象状况等环境因素，帮助农民决定何时种植、灌溉和施肥，从而提高农作物的产量和质量。 另外，AI还可以通过监测植物健康状况，及时发现和预防病虫害的发生，提高农作物的产量和质量。此外，AI还可以帮助农民预测未来天气、干旱和洪涝灾害，制定更有效的灾害应对计划。 总之，AI有望成为农业生产中不可或缺的技术，在未来为农业生产带来更大的改进和效益。 ``` - 翻译成英文：过去都是假的，回忆是一条没有归途的路，以往的一切春天都无法复原，即使最狂热最坚贞的爱情，归根结底也不过是一种瞬息即逝的现实，唯有孤独永恒。 (Translate the given passage into English) ``` Response: Translation into English: The past is false, memories are a dead-end road, all previous springs are hopelessly incapable of being restored, even the most passionate and steadfast love is ultimately a fleeting reality, and only solitude is eternal ``` - 请用python编程语言实现快速排序算法的代码 (Implement the quicksort algorithm in Python) ``` 快速排序（Quick Sort）是一种常用的排序算法，基本思想是通过一次排序将数组分为两个部分，其中一个部分的所有元素都比另一个小，然后再分别对这两个部分进行排序，直到整个数组有序。 以下是使用Python语言实现快速排序的代码： ```python def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr)//2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ``` 在这个实现中，首先判断数组的长度是否小于等于1，如果是，则直接返回该数组。否则，选择一个枢轴（pivot）元素，将数组分为三个部分：小于枢轴的左部分、等于枢轴的中间部分、大于枢轴的右部分。接着递归地对左部分和右部分进行排序，并将它们与中间部分连接起来。最后返回排好序的数组。 以下是使用上述代码对数组进行排序的示例： ```python arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5] sorted_arr = quick_sort(arr) print(sorted_arr) ``` ``` 输出结果为：[1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 9] ``` ## Disclaimer The resources in this project are intended for academic research only and must not be used commercially. When using components that involve third-party code, strictly follow the corresponding open-source licenses. Model-generated content is affected by model computation, randomness, quantization precision loss, and other factors; this project makes no guarantee of its accuracy. This project accepts no legal liability for any model output, nor any responsibility for losses that may result from using these resources and outputs.
JvThunder/poca-SoccerTwos
JvThunder
2023-07-31T09:24:16Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-07-31T09:23:52Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: JvThunder/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
zong21/llama2-qlora-finetunined-french
zong21
2023-07-31T09:23:45Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T09:23:41Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
undrwolf/SoccerTwos-RL
undrwolf
2023-07-31T09:19:36Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-07-31T09:09:57Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: undrwolf/SoccerTwos-RL 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
NasimB/all-base-miss-wikipedia-seed
NasimB
2023-07-31T09:16:02Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-30T21:21:41Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: all-base-miss-wikipedia-seed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-base-miss-wikipedia-seed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.2034 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.2135 | 0.32 | 500 | 5.3711 | | 4.8943 | 0.64 | 1000 | 4.9556 | | 4.5853 | 0.97 | 1500 | 4.7291 | | 4.3145 | 1.29 | 2000 | 4.5936 | | 4.194 | 1.61 | 2500 | 4.4733 | | 4.092 | 1.93 | 3000 | 4.3689 | | 3.8879 | 2.26 | 3500 | 4.3218 | | 3.832 | 2.58 | 4000 | 4.2572 | | 3.788 | 2.9 | 4500 | 4.1919 | | 3.6023 | 3.22 | 5000 | 4.1896 | | 3.5435 | 3.54 | 5500 | 4.1565 | | 3.5215 | 3.87 | 6000 | 4.1203 | | 3.3703 | 4.19 | 6500 | 4.1263 | | 3.2774 | 4.51 | 7000 | 4.1145 | | 3.264 | 4.83 | 7500 | 4.1009 | | 3.1767 | 5.15 | 8000 | 4.1080 | | 3.0922 | 5.48 | 8500 | 4.1086 | | 3.0931 | 5.8 | 9000 | 4.1067 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
Srivasanth23/Srivasanth_Portfolio
Srivasanth23
2023-07-31T09:04:39Z
0
0
null
[ "license:mit", "region:us" ]
null
2023-07-30T08:18:37Z
--- license: mit --- # Sv_Portfolio <br>Created my own portfolio website using <b>JavaScript</b> and <b>CSS Flexbox</b> properties. <br> <b>Website Link ➡️ [here](https://jsrivportfolio.ccbp.tech/)</b> ## 🛠 Tools used | Tools | Applied | | ----------------- | ------------------------------------------------------------------ | | IDE | VSCode | | Frontend | JS, CSS, HTML, Bootstrap | ## ✍ Author [Jammula Srivasanth](https://github.com/srivasanth23) <br> ## 🛡 License You may use this project freely at your own risk. See [LICENSE](https://choosealicense.com/licenses/mit/). Copyright (c) 2023 Jammula Srivasanth <div align="center"> <h3 align="left">Connect with me</h3> <a href="https://www.github.com/srivasanth23" target="_blank"><img alt="Github" width="40px" src="https://cdn-icons-png.flaticon.com/512/733/733553.png"></a> &nbsp;&nbsp;&nbsp; <a href="https://www.linkedin.com/in/srivasanth-jammula" target="_blank"><img alt="LinkedIn" width="40px" src="https://cdn-icons-png.flaticon.com/512/3536/3536505.png"></a> &nbsp;&nbsp;&nbsp; <a href="mailto:[email protected]" target="_blank"><img alt="Gmail" width="40px" src="https://cdn-icons-png.flaticon.com/512/5968/5968534.png"></a>&nbsp;&nbsp;&nbsp; </div>
SniiKz/Test
SniiKz
2023-07-31T08:52:47Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-31T07:42:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0
karinthommen/whisper-V4-2
karinthommen
2023-07-31T08:36:55Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-23T10:49:23Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: whisper-V4-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-V4-2 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6144 - eval_wer: 63.1094 - eval_runtime: 963.3549 - eval_samples_per_second: 1.893 - eval_steps_per_second: 0.237 - epoch: 1.36 - step: 4000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
NasimB/all-base-miss-switchboard-seed
NasimB
2023-07-31T08:30:01Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-31T05:10:45Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: all-base-miss-switchboard-seed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-base-miss-switchboard-seed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3572 | 0.29 | 500 | 5.3445 | | 5.0473 | 0.59 | 1000 | 4.9275 | | 4.702 | 0.88 | 1500 | 4.6874 | | 4.4496 | 1.18 | 2000 | 4.5481 | | 4.291 | 1.47 | 2500 | 4.4269 | | 4.2007 | 1.77 | 3000 | 4.3256 | | 4.0643 | 2.06 | 3500 | 4.2553 | | 3.8944 | 2.36 | 4000 | 4.2086 | | 3.867 | 2.65 | 4500 | 4.1498 | | 3.8222 | 2.95 | 5000 | 4.1040 | | 3.6099 | 3.24 | 5500 | 4.0985 | | 3.5862 | 3.54 | 6000 | 4.0672 | | 3.5652 | 3.83 | 6500 | 4.0345 | | 3.4416 | 4.12 | 7000 | 4.0403 | | 3.3122 | 4.42 | 7500 | 4.0320 | | 3.2986 | 4.71 | 8000 | 4.0164 | | 3.2906 | 5.01 | 8500 | 4.0097 | | 3.1267 | 5.3 | 9000 | 4.0182 | | 3.1245 | 5.6 | 9500 | 4.0176 | | 3.1197 | 5.89 | 10000 | 4.0167 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
Kertn/dqn-SpaceInvadersNoFrameskip-v4
Kertn
2023-07-31T08:21:26Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T08:20:48Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 659.50 +/- 229.69 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Kertn -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Kertn -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Kertn ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
undrwolf/rl_course_vizdoom_health_gathering_supreme
undrwolf
2023-07-31T08:11:34Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T08:11:25Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.96 +/- 4.63 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r undrwolf/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
badokorach/bert-base-cased-finetuned
badokorach
2023-07-31T08:06:22Z
111
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-31T07:55:02Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3217 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 66 | 2.6893 | | No log | 2.0 | 132 | 2.4447 | | No log | 3.0 | 198 | 2.3882 | | No log | 4.0 | 264 | 2.3255 | | No log | 5.0 | 330 | 2.3217 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
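A minimal extractive question-answering sketch for this checkpoint (the question and context below are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="badokorach/bert-base-cased-finetuned")
result = qa(
    question="Who wrote the report?",
    context="The report was written by the finance team in March.",
)
print(result["answer"], result["score"])
```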
a84654000/segformer-b0-scene-parse-150
a84654000
2023-07-31T08:04:57Z
38
0
transformers
[ "transformers", "pytorch", "tensorboard", "segformer", "generated_from_trainer", "dataset:scene_parse_150", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
null
2023-07-31T01:15:21Z
--- license: other base_model: nvidia/mit-b0 tags: - generated_from_trainer datasets: - scene_parse_150 model-index: - name: segformer-b0-scene-parse-150 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-scene-parse-150 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset. It achieves the following results on the evaluation set: - Loss: 2.4633 - Mean Iou: 0.0740 - Mean Accuracy: 0.1308 - Overall Accuracy: 0.5950 - Per Category Iou: [0.6315283833574392, 0.7493853230906206, 0.6554280215090774, 0.45632227892398425, 0.4312924395347844, 0.48480907150773517, 0.3746061242627454, 0.0, 0.041317978566610875, 0.4152741131632952, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.038312985571587126, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.011153074471851765, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] - Per Category Accuracy: [0.9086752441858699, 0.8865245736940752, 0.9962788661486265, 0.6896770946464994, 0.95961611554542, 0.5357721272977569, 0.9621290724216642, nan, 0.04172868784216767, 0.8978233624800127, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.03856638512758636, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.014385751255898919, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | 
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 4.9459 | 1.0 | 20 | 4.9076 | 0.0084 | 0.0381 | 0.1657 | [0.002198818081282831, 0.0639612359288392, 0.41750938967952755, 0.025532191914588893, 0.27589892048375314, 0.0003503079022087835, 0.006680875481596852, 0.0, 0.00021016129879682655, 0.011983062808750881, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0023758013352268956, 0.0, 0.0, 0.0, 0.0, 0.001327681551316719, 0.0, 0.0, 0.0, 0.0, 0.005020444076393561, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.05114544485881726, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.0022091198892401974, 0.06662903687539498, 0.9611229602500687, 0.02597282191200233, 0.6459837830989484, 0.00038464653008340755, 0.016912222452791035, nan, 0.0005045790549234302, 0.01223008110171567, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.011304787166856132, 0.0, 0.0, 0.0, 0.0, 0.012333107037791356, 0.0, 0.0, 0.0, nan, 0.014766326685949155, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.24742268041237114, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 4.6791 | 2.0 | 40 | 4.4056 | 0.0122 | 0.0521 | 0.2206 | [0.03737747000797027, 0.17818387061371582, 0.3595631519207338, 0.13786685117971004, 0.16152810751229438, 0.006663279378674602, 0.023055565144917172, 0.0, 0.046845376934655465, 0.11345278501328702, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.006318995051389418, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.006820583432066349, 0.0, 0.008015889905653686, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0] | [0.039180412180974226, 0.21485440511621123, 0.9902059970583289, 0.150424271321731, 0.9654440643608261, 0.013938375576969796, 0.06929416298580025, nan, 0.10285844034614124, 0.12423111828173843, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.006475774362175236, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.024100475220638153, 0.0, 0.06237924372067347, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 4.2926 | 3.0 | 60 | 4.0454 | 0.0239 | 0.0725 | 0.2798 | [0.19166760897834567, 0.18437050855410464, 0.45249877980967457, 0.13718558665023076, 0.1867481095755479, 0.011884391056321778, 0.017625191065374236, nan, 0.0034181419496307763, 0.31379092727486657, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.051986278632289475, 0.0, 0.0, 0.0008088358342165448, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0] | [0.2942621474468173, 0.21860568119824778, 0.9948747654270895, 0.17835289131413737, 0.9848283288990245, 
0.052656085512997004, 0.04153203094892242, nan, 0.00534853798218836, 0.42259323816246275, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6483577146011592, 0.0, nan, 0.0022073374942913684, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 4.0781 | 4.0 | 80 | 3.9288 | 0.0321 | 0.0817 | 0.3488 | [0.25745835166198583, 0.24561740702566615, 0.5362931631869664, 0.2479292333245913, 0.20201337922972007, 0.041887813508169426, 0.013548209366391185, nan, 0.0, 0.3315631626201074, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.004842164074867306, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.07793405579893936, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan] | [0.4458038657428003, 0.3140674203427732, 0.9957449875472816, 0.40066929794354234, 0.9851767388825542, 0.17926552757308284, 0.036447988616488305, nan, 0.0, 0.5000648237514225, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.005065263978180402, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.46646425614131937, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.4997 | 5.0 | 100 | 3.7433 | 0.0363 | 0.0825 | 0.3698 | [0.301265641230697, 0.19591198154088582, 0.5749046589406068, 0.2468705954058893, 0.190337676247041, 0.06181431337663733, 0.014534177047344731, nan, 0.0, 0.33849121860037384, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1437744884632129, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6016748514052216, 0.23503945726678083, 0.9943542337907784, 0.42916743927313183, 0.9906879513492969, 0.2387946392420439, 0.029244360122135594, nan, 0.0, 0.4878059320935191, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3646149599779189, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.6571 | 6.0 | 120 | 3.6933 | 0.0379 | 0.0841 | 0.3903 | [0.27050115162507093, 0.418611771409618, 0.4242192058227256, 0.33706302009126765, 0.21113489780056452, 0.014855439642324888, 0.09996437530614971, 0.0, 0.0, 0.2701552202856856, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.11232024873688301, 0.0, 0.0, 0.0032579760816877907, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4220460616687318, 0.5317251548853011, 0.9976082238659751, 0.5765366038868728, 0.9808691245407323, 0.037837071827678356, 0.19964130080336762, nan, 0.0, 0.46533369826704507, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2393044438310792, 0.0, nan, 0.006241437052823869, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 4.2113 | 7.0 | 140 | 3.2967 | 0.0537 | 0.1068 | 0.4920 | [0.3832462771307141, 0.6056219900416293, 0.6693028473628005, 0.25917605148241785, 0.28951161528237956, 0.21253476060579551, 0.0697991027341215, 0.0, 0.0, 0.46426851744234315, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.047925486875529215, 0.0, 0.0, 0.003422892896577107, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 
nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6408617737626867, 0.8015118533842506, 0.993785653080346, 0.39828525513672924, 0.9797288736855442, 0.8115333225362378, 0.1466664690362553, nan, 0.0, 0.8055863668448119, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.07811206182721502, 0.0, nan, 0.003539351499467194, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.7713 | 8.0 | 160 | 3.3910 | 0.0498 | 0.1011 | 0.4833 | [0.45012040873885406, 0.5939750562368127, 0.4927852828999669, 0.3196728765775992, 0.22975316274370844, 0.0673900095812725, 0.23872675137969637, nan, 0.0, 0.2808427394542766, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0002234437145283103, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.057896218447968474, 0.0, 0.0, 0.009897869121170091, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6753461786349028, 0.8279355288246002, 0.9977844038044189, 0.4190087679796562, 0.9652223489167617, 0.13171106972224472, 0.7206741173331752, nan, 0.0, 0.44049899883317245, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0002234437145283103, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.17030085564449352, 0.0, nan, 0.011950068503577408, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.47 | 9.0 | 180 | 3.2553 | 0.0507 | 0.1020 | 0.4652 | [0.3960927295450915, 0.29292116497910103, 0.5599666054233271, 0.34019518550286354, 0.24424088294116195, 0.26136673911381486, 0.33925302087147563, nan, 0.0, 0.34721973132467354, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00858576114219564, 0.0, 0.0, 0.0012181539746623973, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 
nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8232313478018389, 0.353927091829294, 0.9990630430546398, 0.5095538308035991, 0.9399467882934246, 0.4844724269171593, 0.6866421604956571, nan, 0.0, 0.5856753914634322, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02456527739442451, 0.0, nan, 0.0013320140051758258, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 3.4743 | 10.0 | 200 | 3.0892 | 0.0604 | 0.1191 | 0.5029 | [0.44019479491044744, 0.3925511907334771, 0.6332025257447981, 0.36955289664244123, 0.20459874543231815, 0.3091969138656122, 0.3697238199230785, nan, 0.0, 0.3078507357524186, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0002681324574339724, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2934723970508901, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8383653380843155, 0.4611098807909818, 0.9971304025177715, 0.6189284169072784, 0.9700684150513113, 0.697323669932788, 0.787952450123025, nan, 0.0, 0.49093187743989397, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0002681324574339724, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.45045542368203145, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.4734 | 11.0 | 220 | 3.1171 | 0.0562 | 0.1133 | 0.4975 | [0.43775721430889214, 0.29701665624213985, 0.6076484925942728, 0.38344968923399103, 0.2959088235577711, 0.3207529050225679, 0.3502506806656421, 0.0, 0.0, 0.4433889768445222, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.01063592081154757, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0008250825082508251, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.848987776058609, 0.36037636993145616, 0.9975575053988474, 0.6806618809214767, 0.9720321804130242, 0.5409344886225605, 0.9019061453176415, nan, 0.0, 0.6898687679165646, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.01063592081154757, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0011040574109853713, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.5479 | 12.0 | 240 | 2.9291 | 0.0591 | 0.1172 | 0.5154 | [0.4404652452629144, 0.4201326685913847, 0.6420542991550833, 0.3854768052452929, 0.2938275455404918, 0.33192474595391236, 0.3181045331963529, 0.0, 0.0, 0.4522231594167972, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.023104080082227287, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8306659693544292, 0.5136655557776133, 0.9986199238155236, 0.5866069772986146, 0.9548650703154694, 0.7386326828083246, 0.9178993863575727, nan, 0.0, 0.6487561042365922, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.023104080082227287, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.5602 | 13.0 | 260 | 2.9052 | 0.0563 | 0.1128 | 0.4931 | 
[0.41252465974192803, 0.24890708954898938, 0.6433458086151749, 0.3781747497448932, 0.31518448196233384, 0.39076733846686057, 0.3954953344991767, 0.0, 0.0, 0.4239419236505134, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9170842221532626, 0.30876864160054446, 0.9975735217568877, 0.5333942588717296, 0.9726973267452173, 0.6174386590007288, 0.9184922775916758, nan, 0.0, 0.7125282703582593, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.0224 | 14.0 | 280 | 2.8455 | 0.0571 | 0.1134 | 0.4861 | [0.41127431359944366, 0.22875756330190888, 0.6348845019964922, 0.34427101006673455, 0.343832947904444, 0.33604537268160134, 0.3829669841833564, 0.0, 0.0, 0.5149523826932568, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00117724965050401, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8829969380469904, 0.2795659430583838, 0.9991164309147743, 0.5375045252664389, 0.9559736475357912, 0.6459328690582233, 0.8950730738446032, nan, 0.0, 0.8116365836442473, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.004416229643941485, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan] | | 2.5606 | 15.0 | 300 | 2.8936 | 0.0710 | 0.1268 | 0.5915 | [0.581974892515616, 0.7266865497024192, 0.6628950566362418, 0.40246908097786266, 0.29290940053847836, 0.5487152375830606, 0.2839975155055856, 0.0, 0.0, 0.46655556674693943, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.009563390981811682, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9314652020216269, 0.8857170635799435, 0.9974560684645918, 0.6401949617228683, 0.9751678702647916, 0.6093509595918698, 0.9352562772359411, nan, 0.0, 0.7339921347181607, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.009563390981811682, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.0911 | 16.0 | 320 | 2.9376 | 0.0609 | 0.1159 | 0.5235 | [0.4893722689285804, 0.4990153858505483, 0.5833670196577261, 0.34571611192200186, 0.34223067457209977, 0.302905604719764, 0.3278826270056503, 0.0, 0.0, 0.4604921178556681, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8916728332505094, 0.6043038398591314, 0.997744362909318, 0.47997404042277025, 0.9689598378309895, 0.41576240991173374, 0.9659235763199241, nan, 0.0, 0.8192857863121048, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.4103 | 17.0 | 340 | 2.7035 | 0.0672 | 0.1229 | 0.5548 | [0.5031723273010382, 0.5242504070113612, 0.6625065159343406, 0.3813163553285849, 0.3672406792615061, 0.44058937965804473, 0.32593422869838473, 0.0, 0.0, 0.488790247656228, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 4.468874290566207e-05, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.929855018369548, 0.6444173774015999, 0.9974320439275313, 0.601282085242775, 0.9740276194096035, 0.6510547412745972, 0.9654640856134942, nan, 0.0, 0.7502989095204483, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 4.468874290566207e-05, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.8824 | 18.0 | 360 | 2.8388 | 0.0623 | 0.1184 | 0.5445 | [0.5385965358173107, 0.4998599684125468, 0.6051496728863832, 0.47100788933198456, 0.3282502022108385, 0.20094285033527348, 0.40334746428010676, 0.0, 0.0, 0.44439705568750876, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.919729523867392, 0.5880699157920891, 0.9982755721176562, 0.7461877389561424, 0.9640504244267072, 0.23902745161551542, 0.9519313431950909, nan, 0.0, 0.870568576326366, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 
nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.9148 | 19.0 | 380 | 2.7592 | 0.0674 | 0.1231 | 0.5629 | [0.5309221472516241, 0.6042422940607883, 0.6300551315531983, 0.47773874658976223, 0.4151296010400585, 0.3079344296984004, 0.4118536418729994, 0.0, 0.0, 0.46133017146413047, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8634989594567909, 0.7597211794508931, 0.9969488837933143, 0.7042948089675328, 0.9709236031927024, 0.4713742003401085, 0.9650045949070643, nan, 0.0, 0.790273556231003, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.8379 | 20.0 | 400 | 2.7689 | 0.0729 | 0.1266 | 0.5901 | [0.600936226692451, 0.7101621307352481, 0.6226424211724284, 0.49680035171706316, 0.40172194889700047, 0.4005570162982236, 0.4199102088619949, 0.0, 0.0, 0.4285786938484899, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9137401614957782, 0.8708929063342281, 0.9974774236086457, 0.7183915657863368, 0.9620866590649944, 0.4425661996922828, 0.9565707171019476, nan, 0.0, 0.8497673547587836, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 
nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.3068 | 21.0 | 420 | 2.5974 | 0.0714 | 0.1282 | 0.5873 | [0.5499119585322783, 0.6686000856653341, 0.653631715211489, 0.4494904462410312, 0.3662745842779314, 0.45534304393382624, 0.32308960612637133, 0.0, 0.0, 0.521783153123026, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.008133351208830495, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9399913631658554, 0.8094114088485824, 0.9967326629597696, 0.6632318790672211, 0.9774166983403015, 0.6246355980241315, 0.9642931254261405, nan, 0.0, 0.8091588758120976, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.008133351208830495, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.416 | 22.0 | 440 | 2.7277 | 0.0722 | 0.1269 | 0.5787 | [0.5731659552699311, 0.6186096585665949, 0.6478464198422398, 0.47522631749110894, 0.39273474982864975, 0.4916258134301017, 0.4310509364189431, 0.0, 0.0, 0.4849205366357069, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9336439410872429, 0.7603936544288824, 0.9962201395024786, 0.6748916143501718, 0.9800456100342075, 0.5704814154992307, 0.9640856134942045, nan, 0.0, 0.8461084141229347, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 
nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.9987 | 23.0 | 460 | 2.7165 | 0.0629 | 0.1186 | 0.5250 | [0.4848263327282701, 0.3724171848727395, 0.6080556975710466, 0.4748946424664035, 0.4259264380530973, 0.288456428795723, 0.4432566862921564, 0.0, 0.0, 0.4841253324083709, 0.0, 0.0028968879965581527, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0004027386226339106, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8937582597864228, 0.45400973333261313, 0.9968901571471663, 0.7377906104032564, 0.9756429747877866, 0.3626406996517937, 0.9563335606083064, nan, 0.0, 0.9047667065212694, 0.0, 0.0028968879965581527, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0004186329730552596, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.2144 | 24.0 | 480 | 2.8007 | 0.0625 | 0.1191 | 0.5486 | [0.5505687062236131, 0.5659588600465885, 0.6028418069419714, 0.48742597515859953, 0.4368439945437576, 0.1348447483979795, 0.34005380565064014, 0.0, 0.0, 0.3809495640239935, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 6.457236948309819e-05, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8880293044744443, 0.6581342465309473, 0.9959505308087994, 0.7649157196718851, 0.9636386671734448, 0.1488885739736011, 0.9630035869919663, nan, 0.0, 0.9276710986905602, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 7.611508601004719e-05, nan, nan, nan, 0.0, 0.0, nan, 
0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.4025 | 25.0 | 500 | 2.7005 | 0.0607 | 0.1163 | 0.5181 | [0.48952894733795627, 0.3485206228240284, 0.6221423867832523, 0.48041956172375583, 0.3023796778432041, 0.2635948462421157, 0.418421306882226, 0.0, 0.0, 0.4376373560508587, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.036221527467619474, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00022484541877459247, 0.0, 0.0, 0.0006316251990992475, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9176310769784969, 0.4149926810956211, 0.9961881067863978, 0.7140075759582528, 0.984036488027366, 0.34645517855696817, 0.9628850087451457, nan, 0.0, 0.7888618389778015, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.03624257049649193, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0022081148219707425, 0.0, nan, 0.0008753234891155427, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.283 | 26.0 | 520 | 2.7524 | 0.0666 | 0.1225 | 0.5677 | [0.593920280282248, 0.6526017306164915, 0.6120064107606387, 0.47675676563950736, 0.4194037462268962, 0.22801527727882925, 0.3341724807595483, 0.0, 0.0, 0.4096124745049016, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0001340662287169862, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0044651523975927, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8917162344271153, 0.75892177144493, 0.9969141816842269, 0.7685094434584514, 0.9637970353477765, 
0.24413920155478178, 0.9641004357750571, nan, 0.0, 0.896829398291534, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0001340662287169862, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.007002587912924341, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.4599 | 27.0 | 540 | 2.6070 | 0.0735 | 0.1289 | 0.5919 | [0.604230873789186, 0.7161707586182151, 0.6480319274351257, 0.4537693544789698, 0.3768998823298686, 0.5294753086419753, 0.3198438995655695, 0.0, 0.0, 0.4512605570671406, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.01622201367475533, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9164223542100226, 0.8624748159471094, 0.997792411983439, 0.6826353385782276, 0.9739325985050045, 0.5904020568467082, 0.9657753535113983, nan, 0.0, 0.8266468834180843, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.01622201367475533, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.0417 | 28.0 | 560 | 2.5571 | 0.0751 | 0.1287 | 0.5865 | [0.5725593302428561, 0.639865060956051, 0.6626661742983752, 0.4669925366669379, 0.471018284799606, 0.5105995935861145, 0.3664175710652346, 0.0, 0.0, 0.49413268880482, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.017554830928663955, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0016871535309713184, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8978835416228134, 0.791948665042644, 0.9963669561178483, 0.7853699239755239, 0.9693082478145192, 0.5824459470402462, 0.9673465152817715, nan, 0.0, 0.8128322217260404, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.017562675961925192, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0018648196072461561, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.9107 | 29.0 | 580 | 2.6493 | 0.0720 | 0.1256 | 0.5791 | [0.5996198759050133, 0.6966705394190872, 0.6258245625796072, 0.4664159303583287, 0.40074155987770765, 0.3773166783528536, 0.33500339150274405, 0.0, 0.0, 0.48497500265929155, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.02332022873481058, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.021031783457512027, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8797418498015481, 0.8502028227744859, 0.996796728391931, 0.7101092244797047, 0.9756429747877866, 0.4412098145598834, 0.9663089556220912, nan, 0.0, 0.7881271697950128, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0233275237967556, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.025460496270360785, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.7054 | 30.0 | 600 | 2.7311 | 0.0654 | 0.1222 | 0.5497 | [0.5579012154450761, 0.5397148251054099, 0.6122901623661692, 0.4638145721313974, 0.395516717325228, 0.3063827968021889, 0.30556116151257584, 0.0, 0.0, 0.4216891342037293, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.03985895375825745, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.018007739972708766, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 
nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8832226241653411, 0.6284832313367938, 0.9977817344114122, 0.7525186970764571, 0.9561953629798555, 0.3627115555915459, 0.9650045949070643, nan, 0.0, 0.8584249268932137, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.03990704741475622, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.030636322119043994, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.678 | 31.0 | 620 | 2.5508 | 0.0746 | 0.1300 | 0.5906 | [0.5813742491778146, 0.6672101477367631, 0.6534874896614191, 0.4628225999381495, 0.43004071441744335, 0.5045781765959587, 0.3436497153038081, 0.0, 0.0, 0.4942340209671965, 0.0, 8.603383997705764e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.10261061946902655, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00928347280334728, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9413953912290562, 0.8134138502838438, 0.9976082238659751, 0.7003699681244647, 0.9601545673381477, 0.5583650498016034, 0.9634630776983962, nan, 0.0, 0.8421037468128322, 0.0, 8.60461781155887e-05, 0.0, nan, 0.0, 0.0, 0.0, 0.10363319479823033, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.010808342213426702, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.4533 | 32.0 | 640 | 2.6416 | 0.0697 | 0.1254 | 0.5624 | [0.5759254470889608, 0.6242126400814036, 0.6146013258977776, 0.47069141297317446, 0.3870515158858717, 0.35451020359414936, 0.37172370378848574, 0.0, 0.0030424178421462876, 0.4560364906832298, 0.0, 0.00017205781142463867, 0.0, nan, 0.0, 0.0, 0.0, 0.033501360574563946, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.012930530549255267, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 
0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8313517079448024, 0.7488481502887051, 0.9973119212422288, 0.7582315700246351, 0.9611681236538705, 0.4506842659324642, 0.9627812527791777, nan, 0.003052703282286752, 0.8799752229216785, 0.0, 0.0001720923562311774, 0.0, nan, 0.0, 0.0, 0.0, 0.03356124592215221, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.021045821281778048, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.833 | 33.0 | 660 | 2.5283 | 0.0767 | 0.1295 | 0.5932 | [0.6249007833924871, 0.723794631106035, 0.657849325523638, 0.45269961883380416, 0.3837782034378965, 0.5221816676700782, 0.3927187664335495, 0.0, 0.0005795494632868013, 0.47574492819142494, 0.0, 0.005926137990266247, 0.0, nan, 0.0, 0.0, 0.0, 0.04205461754354702, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.011021606094667166, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9311179926087796, 0.8762889103744794, 0.9962628497905861, 0.6480887923498715, 0.9772900038008362, 0.5618876022349988, 0.9630184092728189, nan, 0.0005802659131619447, 0.8489174433512439, 0.0, 0.0059371862899756205, 0.0, nan, 0.0, 0.0, 0.0, 0.04218617330294499, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.013434312680773329, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.6139 | 34.0 | 680 | 2.5741 | 0.0702 | 0.1262 | 0.5759 | [0.5981913850198078, 0.6460515703850497, 0.6366097441641223, 0.4644849291955748, 
0.3768828169152517, 0.41010966322252307, 0.40982488921433713, 0.0, 0.004789754966219623, 0.4308480768820682, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.012976588628762541, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00847145488029466, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.914540913204157, 0.7571879202968612, 0.9968234223219982, 0.7268063539155696, 0.9716204231597618, 0.45841768564256213, 0.9622921175110426, nan, 0.004793501021772586, 0.8731471211051729, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.013004424185547661, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.010503881869386512, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.3061 | 35.0 | 700 | 2.5719 | 0.0697 | 0.1277 | 0.5788 | [0.5877744723440256, 0.6545176272144612, 0.6371764175849305, 0.4643631440311253, 0.40732009429815796, 0.46453736703212994, 0.3478200447910375, 0.0, 0.002527084844997123, 0.4250541952450336, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01230933904201231, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04167467398764585, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9226851439942537, 0.7691979453053684, 0.9966766057066284, 0.695023531385482, 0.9686431014823261, 0.5185136448295409, 0.9645451042006344, nan, 0.0025481242273633223, 0.8614788458491192, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.01233409304196273, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.05777135028162582, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 0.7975 | 36.0 | 720 | 2.6546 | 0.0682 | 0.1254 | 0.5657 | [0.5918937942909384, 0.6189392588700067, 0.6217792125669784, 0.46519101063529633, 0.3449943757030371, 0.4014752567693744, 0.34029783699927396, 0.0, 0.004622070386093597, 0.41730973023335205, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.035229145325791654, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.00019281444822932066, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.047625140751356335, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.916042593914721, 0.7125372021800073, 0.9974240357485111, 0.6915225203747362, 0.9714303813505638, 0.4352376710664831, 0.9656567752645777, nan, 0.004642127305295557, 0.8452296921592072, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.03534879563837869, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0007966011683483802, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.07082508753234891, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.0253 | 37.0 | 740 | 2.5874 | 0.0646 | 0.1196 | 0.5564 | [0.5571062567421791, 0.6058075592034933, 0.6042839906712254, 0.4444211320112143, 0.4079824690335253, 0.19747695293546821, 0.4123352368411056, 0.0, 0.0021149101163200562, 0.466731583267612, 0.0, 0.0008885831398515206, 0.0, 0.0, 0.0, 0.0, 0.0, 0.023439240675549217, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02604284467995735, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9324221979657868, 0.7223866650102356, 0.9973653091023632, 0.6452588452403026, 0.9670910933738756, 0.21422787270224308, 0.9653751519283787, nan, 0.0021192320306784064, 0.8354629136115472, 0.0, 0.0008891438405277498, 0.0, nan, 0.0, 0.0, 0.0, 0.023506278768378245, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.030674379662049018, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.838 | 38.0 | 760 | 2.5218 | 0.0722 | 0.1281 | 0.5785 | [0.6030889424134764, 0.6868089267628261, 0.6450410489460966, 0.45388455262790706, 0.37492913308190984, 0.44814043386934493, 0.3919288521798982, 0.0, 0.012789266061961355, 0.44635695187165775, 0.0, 0.010410708156961447, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02393154775168234, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 9.770395701025892e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.019292253040032792, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8921958174286105, 0.796655989888569, 0.9968634632170991, 0.7036987982658296, 0.9635436462688458, 0.5139788646853997, 0.962173539264222, nan, 0.012841536947801296, 0.8849594491421657, 0.0, 0.010440269611358096, 0.0, nan, 0.0, 0.0, 0.0, 0.023997854940340527, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0005310674455655868, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.026868625361546657, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.0149 | 39.0 | 780 | 2.5343 | 0.0745 | 0.1305 | 0.5885 | [0.6268673489494815, 0.7226258595312339, 0.6523668690767096, 0.4401050634918032, 0.3762038473854036, 0.47428914916482995, 0.35560324819650824, 0.0, 0.0326779423888875, 0.5035370345715173, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.13227536486724106, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.006425628727787001, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8836544658725698, 0.8724539125080346, 0.9962788661486265, 0.7108994905212224, 0.9675345242620043, 0.5271277026479877, 
0.9651824622772952, nan, 0.03299947019199233, 0.817225831544678, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.13446842740313714, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.007954026488049932, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.425 | 40.0 | 800 | 2.5719 | 0.0742 | 0.1305 | 0.5857 | [0.6268883260447221, 0.7157151319930394, 0.6591611206242166, 0.4566983876095204, 0.3956926991380206, 0.5457506824385805, 0.39736071277877316, 0.0, 0.037740558620173226, 0.4232924885982746, 0.0, 0.0022058612885667633, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02267967740498151, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.022583335907813033, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8899866975393703, 0.8353057465552537, 0.996738001745783, 0.6786398594297722, 0.9654440643608261, 0.6071139363511215, 0.9631518098004921, nan, 0.038146176552211315, 0.8877684783704749, 0.0, 0.0022085185716334433, 0.0, nan, 0.0, 0.0, 0.0, 0.02274657013898199, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.02782006393667225, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.285 | 41.0 | 820 | 2.5094 | 0.0757 | 0.1304 | 0.5922 | [0.6248336582567084, 0.73480194008611, 0.6648635550519196, 0.44114087264510304, 0.44443632430536273, 0.5240662511646419, 0.36215199104821605, 0.0, 0.01839924106251248, 0.4524998888279501, 0.0, 0.0020356671827513047, 0.0, 0.0, 0.0, 0.0, 0.0, 0.027041475475564663, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.016650838873061097, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9027726841674678, 0.8743606086303656, 0.996100016817176, 0.6774478380263658, 0.9631002153807171, 0.5864341242205847, 0.9642338363027303, nan, 0.0185937381739284, 0.87949984874458, 0.0, 0.002036426215402266, 0.0, nan, 0.0, 0.0, 0.0, 0.027126066943736874, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.020018267620642412, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.4964 | 42.0 | 840 | 2.5342 | 0.0728 | 0.1298 | 0.5902 | [0.6299243749207625, 0.7274993001072031, 0.6500369098772224, 0.45786503611885254, 0.4195937280211824, 0.44985542256872, 0.3516321998087076, 0.0, 0.01809795959292124, 0.4350510498237522, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05599397056215641, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.025490853737196523, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.8949257514371215, 0.8632229106013385, 0.9966525811695679, 0.728956407335788, 0.9637020144431775, 0.4976313871568548, 0.9645154596389293, nan, 0.018215303882735828, 0.8587274377331855, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.056441882289851183, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.03760085248896331, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.5299 | 43.0 | 860 | 2.4899 | 0.0741 | 0.1315 | 0.5961 | [0.6241980974941703, 0.7340298944632085, 0.659435583428851, 0.4666935202493979, 0.4040057262532807, 0.5014312173766627, 0.3521695311992024, 0.0, 0.017958222712017233, 0.43265441632865165, 0.0, 0.00028676301904106445, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08022618837250398, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.022486772486772486, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.914894632793495, 0.866315215219, 0.9967006302436889, 0.7263516198246404, 0.9653807170910934, 0.5425945420681837, 0.9633741440132808, nan, 0.01808915911900497, 0.8635388006165459, 0.0, 0.000286820593718629, 0.0, nan, 0.0, 0.0, 0.0, 0.08115475711668231, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.031054955092099254, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.5831 | 44.0 | 880 | 2.4619 | 0.0770 | 0.1330 | 0.6006 | [0.6294593595104173, 0.7550026626711556, 0.6629018445673919, 0.4493634466351563, 0.4239424444382477, 0.5168062586286241, 0.40319254658385095, 0.0, 0.033315811869540184, 0.46458514021393466, 0.0, 0.007167431192660551, 0.0, 0.0, 0.0, 0.0, 0.0, 0.09325169588582503, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02449368450167491, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9160317436205696, 0.9036281240378746, 0.9959745553458599, 0.6929794354233442, 0.9630685417458508, 0.5683759818608795, 0.962173539264222, nan, 0.03357973610515427, 0.8796727120817067, 0.0, 0.007170514842965725, 0.0, nan, 0.0, 0.0, 0.0, 0.09460606873128659, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.030331861775003806, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 2.0182 | 45.0 | 900 | 2.4538 | 0.0742 | 0.1310 | 0.5950 | [0.6366997315549165, 0.7454096724268982, 0.6588926532925389, 0.45470443049883236, 0.41670902771099233, 0.4709076370939249, 0.36651739385462256, 
0.0, 0.031284468063772185, 0.44408959632186584, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05615388024642113, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02401001365498407, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9032913282279083, 0.8793109967213469, 0.9962388252535256, 0.7255083750540825, 0.9606930191308755, 0.5126730909385375, 0.963225921204755, nan, 0.03148573302722204, 0.8793702012417349, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.05662063726147384, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.03212056629623992, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.068 | 46.0 | 920 | 2.4727 | 0.0734 | 0.1304 | 0.5921 | [0.6252763224506702, 0.7295099749817672, 0.6590111186459835, 0.4541864535259626, 0.42925116120992407, 0.4485741000467508, 0.38551347756115234, 0.0, 0.03373343335833959, 0.42189059423299974, 0.0, 0.0010897619730427302, 0.0, 0.0, 0.0, 0.0, 0.0, 0.033709362231302185, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03366017461383479, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9139701877317894, 0.8536516201515635, 0.9961347189262634, 0.7195085339902696, 0.9600912200684151, 0.48561624423030203, 0.9622180061067798, nan, 0.03403385725458536, 0.9012374133882655, 0.0, 0.0010899182561307902, 0.0, nan, 0.0, 0.0, 0.0, 0.033918755865397505, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.04768610138529457, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.7218 | 47.0 | 940 | 2.4942 | 0.0736 | 0.1315 | 0.5910 | [0.6344980782931207, 0.730630542377467, 0.6530170302682582, 0.4579966759367441, 0.40218974236767546, 0.45531387665198236, 0.3592299569311064, 0.0, 0.03703611145820317, 0.4287874478902939, 0.0, 0.0018638527269599129, 0.0, 0.0, 0.0, 0.0, 0.0, 0.07314695848655355, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.002101317582916856, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.034548537515896566, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9006373462784576, 0.8460005293377337, 0.9965431360562922, 0.7372519933246802, 0.9622133536044597, 0.5021762895781035, 0.9630776983962293, nan, 0.037389307969826174, 0.8904910759302208, 0.0, 0.0018643338591710886, 0.0, nan, 0.0, 0.0, 0.0, 0.07378111453724806, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.009824747742963356, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.04962703607855077, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.5948 | 48.0 | 960 | 2.4895 | 0.0749 | 0.1322 | 0.5942 | [0.6353298682726948, 0.73886245174192, 0.6574809251731353, 0.4549829228334213, 0.42416524391950516, 0.4857871984888114, 0.3732243012136815, 0.0, 0.04916724842924105, 0.4285023186382579, 0.0, 0.005729017473503294, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05258249322733934, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.038890181780570424, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9024081142839783, 0.8621372281067534, 0.9964683930521039, 0.7233936407865575, 0.9600278727986824, 0.5414507247550409, 0.962662674532357, nan, 0.04975149481545021, 0.893170457655685, 0.0, 0.00573641187437258, 0.0, nan, 0.0, 0.0, 0.0, 0.052911471600303885, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.05723854467955549, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 
nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.4873 | 49.0 | 980 | 2.4963 | 0.0756 | 0.1304 | 0.5901 | [0.6257310633689352, 0.7403080385528501, 0.6581441080766484, 0.44244872919035655, 0.42954927944108956, 0.4991443082698465, 0.37064548888052123, 0.0, 0.04863570609068688, 0.42574620159539905, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.03986666666666667, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.028792637901581528, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9177851511554478, 0.8741526545207063, 0.9961400577122768, 0.6304424606853681, 0.9620233117952616, 0.5491234108024942, 0.9629442978685561, nan, 0.04919645785503444, 0.8872354830810009, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.04008580238637887, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.039770132440249656, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | | 1.1392 | 50.0 | 1000 | 2.4633 | 0.0740 | 0.1308 | 0.5950 | [0.6315283833574392, 0.7493853230906206, 0.6554280215090774, 0.45632227892398425, 0.4312924395347844, 0.48480907150773517, 0.3746061242627454, 0.0, 0.041317978566610875, 0.4152741131632952, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.038312985571587126, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.011153074471851765, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.9086752441858699, 0.8865245736940752, 0.9962788661486265, 0.6896770946464994, 0.95961611554542, 0.5357721272977569, 0.9621290724216642, nan, 0.04172868784216767, 0.8978233624800127, 0.0, 0.0, 0.0, nan, 0.0, 
0.0, 0.0, 0.03856638512758636, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.014385751255898919, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
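The bracketed per-category lists in the table above have the shape of the `per_category_iou` / `per_category_accuracy` arrays returned by the `evaluate` library's `mean_iou` metric, with `nan` marking a category that never appears in the references at that step. A toy sketch of that computation follows; the masks and the `num_labels` value are made up for illustration, since the card does not record the label set:

```python
import numpy as np
import evaluate

# Hypothetical 4x4 segmentation maps; real inputs are per-pixel class-id arrays.
predictions = [np.random.randint(0, 5, (4, 4)) for _ in range(2)]
references = [np.random.randint(0, 5, (4, 4)) for _ in range(2)]

metric = evaluate.load("mean_iou")
results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=150,      # assumed label count; categories never seen come back as nan
    ignore_index=255,
    reduce_labels=False,
)
print(results["mean_iou"], results["overall_accuracy"])
print(results["per_category_iou"][:10])  # nan for categories absent from the toy masks
```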
Dhlora/Tun1
Dhlora
2023-07-31T08:02:27Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-31T07:59:13Z
--- license: creativeml-openrail-m ---
martinwunderlich/distilbert-base-uncased-finetuned-clinc
martinwunderlich
2023-07-31T07:54:26Z
106
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-31T07:50:16Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - clinc_oos model-index: - name: distilbert-base-uncased-finetuned-clinc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
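Since the card includes no usage snippet, here is a minimal inference sketch; it assumes the uploaded checkpoint carries the classification head and the intent label mapping from `clinc_oos`:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="martinwunderlich/distilbert-base-uncased-finetuned-clinc",
)
# clinc_oos covers ~150 service intents plus an out-of-scope class.
print(classifier("Transfer $100 from my checking account to my savings account"))
```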
marferrerferrer/ppo-LunarLander-v2
marferrerferrer
2023-07-31T07:50:05Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T07:49:47Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 251.74 +/- 25.36 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename inside the repo is an assumption, so check the repo's file list.

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub("marferrerferrer/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
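To sanity-check the reported mean reward, a short evaluation sketch (assuming an SB3 >= 2.0 install, which uses `gymnasium`, with the box2d extra available):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)  # `model` from the snippet above
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```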
deepvk/roberta-base
deepvk
2023-07-31T07:39:33Z
149
2
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "feature-extraction", "ru", "en", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2023-02-07T14:49:12Z
--- license: apache-2.0 language: - ru - en library_name: transformers pipeline_tag: feature-extraction --- # RoBERTa-base <!-- Provide a quick summary of what the model is/does. --> Pretrained bidirectional encoder for the Russian language. The model was trained using the standard MLM objective on large text corpora, including open social data. See the `Training Details` section for more information. ⚠️ This model contains only the encoder part without any pretrained head. - **Developed by:** [deepvk](https://vk.com/deepvk) - **Model type:** RoBERTa - **Languages:** Mostly Russian, with a small fraction of other languages - **License:** Apache 2.0 ## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("deepvk/roberta-base") model = AutoModel.from_pretrained("deepvk/roberta-base") text = "Привет, мир!" inputs = tokenizer(text, return_tensors='pt') predictions = model(**inputs) ``` ## Training Details ### Training Data 500 GB of raw text in total. A mix of the following data: Wikipedia, Books, Twitter comments, Pikabu, Proza.ru, Film subtitles, News websites, and Social corpus. ### Training Hyperparameters | Argument | Value | |--------------------|----------------------| | Training regime | fp16 mixed precision | | Training framework | Fairseq | | Optimizer | Adam | | Adam betas | 0.9,0.98 | | Adam eps | 1e-6 | | Num training steps | 500k | The model was trained on a machine with 8xA100 for approximately 22 days. ### Architecture details | Argument | Value | |-------------------------|----------------| |Encoder layers | 12 | |Encoder attention heads | 12 | |Encoder embed dim | 768 | |Encoder ffn embed dim | 3,072 | |Activation function | GeLU | |Attention dropout | 0.1 | |Dropout | 0.1 | |Max positions | 512 | |Vocab size | 50266 | |Tokenizer type | Byte-level BPE | ## Evaluation We evaluated the model on the [Russian SuperGLUE](https://russiansuperglue.com/) dev set. The best result in each task is marked in bold. All models have the same size except the distilled version of DeBERTa. | Model | RCB | PARus | MuSeRC | TERRa | RUSSE | RWSD | DaNetQA | Score | |------------------------------------------------------------------------|-----------|--------|---------|-------|---------|---------|---------|-----------| | [vk-deberta-distill](https://huggingface.co/deepvk/deberta-v1-distill) | 0.433 | 0.56 | 0.625 | 0.59 | 0.943 | 0.569 | 0.726 | 0.635 | | [vk-roberta-base](https://huggingface.co/deepvk/roberta-base) | 0.46 | 0.56 | 0.679 | 0.769 | 0.960 | 0.569 | 0.658 | 0.665 | | [vk-deberta-base](https://huggingface.co/deepvk/deberta-v1-base) | 0.450 |**0.61**|**0.722**| 0.704 | 0.948 | 0.578 |**0.76** |**0.682** | | [vk-bert-base](https://huggingface.co/deepvk/bert-base-uncased) | 0.467 | 0.57 | 0.587 | 0.704 | 0.953 |**0.583**| 0.737 | 0.657 | | [sber-bert-base](https://huggingface.co/ai-forever/ruBert-base) | **0.491** |**0.61**| 0.663 | 0.769 |**0.962**| 0.574 | 0.678 | 0.678 |
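Because the checkpoint has no task head, `predictions.last_hidden_state` in the `How to Get Started` snippet above is a per-token tensor. One common (unofficial, assumed) way to turn it into a single sentence vector is attention-mask-weighted mean pooling:

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

sentence_embedding = mean_pool(predictions.last_hidden_state, inputs["attention_mask"])
print(sentence_embedding.shape)  # torch.Size([1, 768]) given the 768-dim encoder above
```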
Pashtu/llama2-d
Pashtu
2023-07-31T07:22:04Z
4
0
peft
[ "peft", "region:us" ]
null
2023-07-31T07:22:02Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0
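The flags above map one-to-one onto `transformers.BitsAndBytesConfig`; a reload sketch follows. The base checkpoint is not recorded in this card, so the Llama-2 model id below is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
# Base model id is assumed; the adapter card does not name it.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Pashtu/llama2-d")
```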
pradeepiisc/xlm-roberta-base-finetuned-panx-all
pradeepiisc
2023-07-31T07:19:08Z
104
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-31T07:04:55Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1751 - F1: 0.8549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2967 | 1.0 | 835 | 0.1900 | 0.8219 | | 0.1576 | 2.0 | 1670 | 0.1685 | 0.8470 | | 0.1014 | 3.0 | 2505 | 0.1751 | 0.8549 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.1+cu118 - Datasets 2.10.1 - Tokenizers 0.13.3
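A minimal inference sketch for the resulting tagger (the PAN-X entity set of PER/ORG/LOC is assumed from the task name; the German example merely illustrates the multilingual setup):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pradeepiisc/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Angela Merkel besuchte das BMW-Werk in München."))  # "Angela Merkel visited the BMW plant in Munich."
```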
muhtasham/bert-tiny-finetuned-legal-definitions-longer-downstream-alt
muhtasham
2023-07-31T07:18:41Z
159
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-05T10:48:10Z
--- tags: - generated_from_trainer model-index: - name: bert-tiny-finetuned-legal-definitions-longer-downstream-alt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-tiny-finetuned-legal-definitions-longer-downstream-alt This model is a fine-tuned version of [muhtasham/bert-tiny-finetuned-legal-definitions-longer](https://huggingface.co/muhtasham/bert-tiny-finetuned-legal-definitions-longer) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2301 - Micro f1: 0.0 - Macro f1: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4096 - eval_batch_size: 1024 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:--------:| | 0.6908 | 1.0 | 3 | 0.6878 | 0.1142 | 0.0900 | | 0.6871 | 2.0 | 6 | 0.6838 | 0.1152 | 0.0834 | | 0.6833 | 3.0 | 9 | 0.6799 | 0.1161 | 0.0763 | | 0.6799 | 4.0 | 12 | 0.6759 | 0.1150 | 0.0697 | | 0.6761 | 5.0 | 15 | 0.6720 | 0.1135 | 0.0617 | | 0.6723 | 6.0 | 18 | 0.6681 | 0.1100 | 0.0513 | | 0.6688 | 7.0 | 21 | 0.6642 | 0.1069 | 0.0442 | | 0.6652 | 8.0 | 24 | 0.6604 | 0.1019 | 0.0379 | | 0.6617 | 9.0 | 27 | 0.6566 | 0.0940 | 0.0296 | | 0.658 | 10.0 | 30 | 0.6528 | 0.0835 | 0.0252 | | 0.6544 | 11.0 | 33 | 0.6491 | 0.0696 | 0.0218 | | 0.6511 | 12.0 | 36 | 0.6455 | 0.0463 | 0.0169 | | 0.6475 | 13.0 | 39 | 0.6418 | 0.0287 | 0.0131 | | 0.6441 | 14.0 | 42 | 0.6383 | 0.0133 | 0.0085 | | 0.6407 | 15.0 | 45 | 0.6348 | 0.0044 | 0.0031 | | 0.6375 | 16.0 | 48 | 0.6313 | 0.0012 | 0.0010 | | 0.634 | 17.0 | 51 | 0.6279 | 0.0004 | 0.0003 | | 0.6308 | 18.0 | 54 | 0.6246 | 0.0 | 0.0 | | 0.6276 | 19.0 | 57 | 0.6213 | 0.0 | 0.0 | | 0.6244 | 20.0 | 60 | 0.6180 | 0.0 | 0.0 | | 0.6212 | 21.0 | 63 | 0.6147 | 0.0 | 0.0 | | 0.6179 | 22.0 | 66 | 0.6115 | 0.0 | 0.0 | | 0.6148 | 23.0 | 69 | 0.6083 | 0.0 | 0.0 | | 0.6116 | 24.0 | 72 | 0.6051 | 0.0 | 0.0 | | 0.6085 | 25.0 | 75 | 0.6019 | 0.0 | 0.0 | | 0.6055 | 26.0 | 78 | 0.5987 | 0.0 | 0.0 | | 0.6024 | 27.0 | 81 | 0.5955 | 0.0 | 0.0 | | 0.5992 | 28.0 | 84 | 0.5923 | 0.0 | 0.0 | | 0.5961 | 29.0 | 87 | 0.5891 | 0.0 | 0.0 | | 0.593 | 30.0 | 90 | 0.5859 | 0.0 | 0.0 | | 0.5898 | 31.0 | 93 | 0.5828 | 0.0 | 0.0 | | 0.5866 | 32.0 | 96 | 0.5796 | 0.0 | 0.0 | | 0.5835 | 33.0 | 99 | 0.5764 | 0.0 | 0.0 | | 0.5805 | 34.0 | 102 | 0.5732 | 0.0 | 0.0 | | 0.5775 | 35.0 | 105 | 0.5700 | 0.0 | 0.0 | | 0.5739 | 36.0 | 108 | 0.5668 | 0.0 | 0.0 | | 0.5709 | 37.0 | 111 | 0.5636 | 0.0 | 0.0 | | 0.5677 | 38.0 | 114 | 0.5604 | 0.0 | 0.0 | | 0.5647 | 39.0 | 117 | 0.5572 | 0.0 | 0.0 | | 0.5614 | 40.0 | 120 | 0.5540 | 0.0 | 0.0 | | 0.5582 | 41.0 | 123 | 0.5508 | 0.0 | 0.0 | | 0.5549 | 42.0 | 126 | 0.5476 | 0.0 | 0.0 | | 0.552 | 43.0 | 129 | 0.5444 | 0.0 | 0.0 | | 0.5486 | 44.0 | 132 | 0.5412 | 0.0 | 0.0 | | 0.5458 | 45.0 | 135 | 0.5381 | 0.0 | 0.0 | | 0.5426 | 46.0 | 138 | 0.5349 | 0.0 | 0.0 | | 0.539 | 47.0 | 141 | 0.5317 | 0.0 | 0.0 | | 0.5359 | 48.0 | 144 | 0.5285 | 0.0 | 0.0 | | 0.5327 | 49.0 | 147 | 0.5254 | 0.0 | 
0.0 | | 0.5295 | 50.0 | 150 | 0.5222 | 0.0 | 0.0 | | 0.5268 | 51.0 | 153 | 0.5191 | 0.0 | 0.0 | | 0.5235 | 52.0 | 156 | 0.5159 | 0.0 | 0.0 | | 0.5203 | 53.0 | 159 | 0.5128 | 0.0 | 0.0 | | 0.5173 | 54.0 | 162 | 0.5097 | 0.0 | 0.0 | | 0.514 | 55.0 | 165 | 0.5066 | 0.0 | 0.0 | | 0.511 | 56.0 | 168 | 0.5035 | 0.0 | 0.0 | | 0.5079 | 57.0 | 171 | 0.5005 | 0.0 | 0.0 | | 0.505 | 58.0 | 174 | 0.4974 | 0.0 | 0.0 | | 0.5018 | 59.0 | 177 | 0.4944 | 0.0 | 0.0 | | 0.4987 | 60.0 | 180 | 0.4913 | 0.0 | 0.0 | | 0.4956 | 61.0 | 183 | 0.4883 | 0.0 | 0.0 | | 0.4927 | 62.0 | 186 | 0.4854 | 0.0 | 0.0 | | 0.4899 | 63.0 | 189 | 0.4824 | 0.0 | 0.0 | | 0.4866 | 64.0 | 192 | 0.4794 | 0.0 | 0.0 | | 0.4841 | 65.0 | 195 | 0.4765 | 0.0 | 0.0 | | 0.4811 | 66.0 | 198 | 0.4736 | 0.0 | 0.0 | | 0.4779 | 67.0 | 201 | 0.4707 | 0.0 | 0.0 | | 0.475 | 68.0 | 204 | 0.4679 | 0.0 | 0.0 | | 0.4721 | 69.0 | 207 | 0.4651 | 0.0 | 0.0 | | 0.4695 | 70.0 | 210 | 0.4623 | 0.0 | 0.0 | | 0.4669 | 71.0 | 213 | 0.4595 | 0.0 | 0.0 | | 0.4636 | 72.0 | 216 | 0.4567 | 0.0 | 0.0 | | 0.4612 | 73.0 | 219 | 0.4540 | 0.0 | 0.0 | | 0.4583 | 74.0 | 222 | 0.4513 | 0.0 | 0.0 | | 0.4554 | 75.0 | 225 | 0.4486 | 0.0 | 0.0 | | 0.4527 | 76.0 | 228 | 0.4459 | 0.0 | 0.0 | | 0.4501 | 77.0 | 231 | 0.4433 | 0.0 | 0.0 | | 0.4473 | 78.0 | 234 | 0.4407 | 0.0 | 0.0 | | 0.4451 | 79.0 | 237 | 0.4381 | 0.0 | 0.0 | | 0.4425 | 80.0 | 240 | 0.4356 | 0.0 | 0.0 | | 0.4397 | 81.0 | 243 | 0.4331 | 0.0 | 0.0 | | 0.4375 | 82.0 | 246 | 0.4306 | 0.0 | 0.0 | | 0.4348 | 83.0 | 249 | 0.4281 | 0.0 | 0.0 | | 0.4322 | 84.0 | 252 | 0.4257 | 0.0 | 0.0 | | 0.4297 | 85.0 | 255 | 0.4233 | 0.0 | 0.0 | | 0.4273 | 86.0 | 258 | 0.4209 | 0.0 | 0.0 | | 0.4247 | 87.0 | 261 | 0.4186 | 0.0 | 0.0 | | 0.4223 | 88.0 | 264 | 0.4162 | 0.0 | 0.0 | | 0.4202 | 89.0 | 267 | 0.4139 | 0.0 | 0.0 | | 0.4177 | 90.0 | 270 | 0.4117 | 0.0 | 0.0 | | 0.4157 | 91.0 | 273 | 0.4094 | 0.0 | 0.0 | | 0.4134 | 92.0 | 276 | 0.4072 | 0.0 | 0.0 | | 0.411 | 93.0 | 279 | 0.4050 | 0.0 | 0.0 | | 0.4088 | 94.0 | 282 | 0.4029 | 0.0 | 0.0 | | 0.4066 | 95.0 | 285 | 0.4008 | 0.0 | 0.0 | | 0.4045 | 96.0 | 288 | 0.3987 | 0.0 | 0.0 | | 0.4025 | 97.0 | 291 | 0.3966 | 0.0 | 0.0 | | 0.4 | 98.0 | 294 | 0.3945 | 0.0 | 0.0 | | 0.3984 | 99.0 | 297 | 0.3925 | 0.0 | 0.0 | | 0.3963 | 100.0 | 300 | 0.3905 | 0.0 | 0.0 | | 0.394 | 101.0 | 303 | 0.3885 | 0.0 | 0.0 | | 0.3921 | 102.0 | 306 | 0.3866 | 0.0 | 0.0 | | 0.3905 | 103.0 | 309 | 0.3847 | 0.0 | 0.0 | | 0.3888 | 104.0 | 312 | 0.3828 | 0.0 | 0.0 | | 0.3864 | 105.0 | 315 | 0.3809 | 0.0 | 0.0 | | 0.3842 | 106.0 | 318 | 0.3791 | 0.0 | 0.0 | | 0.3825 | 107.0 | 321 | 0.3773 | 0.0 | 0.0 | | 0.3808 | 108.0 | 324 | 0.3755 | 0.0 | 0.0 | | 0.3792 | 109.0 | 327 | 0.3737 | 0.0 | 0.0 | | 0.3774 | 110.0 | 330 | 0.3720 | 0.0 | 0.0 | | 0.3755 | 111.0 | 333 | 0.3703 | 0.0 | 0.0 | | 0.3739 | 112.0 | 336 | 0.3686 | 0.0 | 0.0 | | 0.372 | 113.0 | 339 | 0.3669 | 0.0 | 0.0 | | 0.3705 | 114.0 | 342 | 0.3653 | 0.0 | 0.0 | | 0.3691 | 115.0 | 345 | 0.3636 | 0.0 | 0.0 | | 0.3671 | 116.0 | 348 | 0.3620 | 0.0 | 0.0 | | 0.3655 | 117.0 | 351 | 0.3604 | 0.0 | 0.0 | | 0.3639 | 118.0 | 354 | 0.3589 | 0.0 | 0.0 | | 0.3626 | 119.0 | 357 | 0.3573 | 0.0 | 0.0 | | 0.3611 | 120.0 | 360 | 0.3558 | 0.0 | 0.0 | | 0.3595 | 121.0 | 363 | 0.3543 | 0.0 | 0.0 | | 0.3575 | 122.0 | 366 | 0.3529 | 0.0 | 0.0 | | 0.3559 | 123.0 | 369 | 0.3514 | 0.0 | 0.0 | | 0.3542 | 124.0 | 372 | 0.3500 | 0.0 | 0.0 | | 0.3534 | 125.0 | 375 | 0.3486 | 0.0 | 0.0 | | 0.3518 | 126.0 | 378 | 0.3472 | 0.0 | 0.0 | | 0.3506 | 127.0 | 381 | 0.3458 | 0.0 | 0.0 | | 0.349 | 128.0 | 
384 | 0.3444 | 0.0 | 0.0 | | 0.3474 | 129.0 | 387 | 0.3431 | 0.0 | 0.0 | | 0.3464 | 130.0 | 390 | 0.3418 | 0.0 | 0.0 | | 0.3449 | 131.0 | 393 | 0.3405 | 0.0 | 0.0 | | 0.3437 | 132.0 | 396 | 0.3392 | 0.0 | 0.0 | | 0.342 | 133.0 | 399 | 0.3379 | 0.0 | 0.0 | | 0.3407 | 134.0 | 402 | 0.3367 | 0.0 | 0.0 | | 0.34 | 135.0 | 405 | 0.3355 | 0.0 | 0.0 | | 0.3384 | 136.0 | 408 | 0.3342 | 0.0 | 0.0 | | 0.337 | 137.0 | 411 | 0.3330 | 0.0 | 0.0 | | 0.3359 | 138.0 | 414 | 0.3319 | 0.0 | 0.0 | | 0.3343 | 139.0 | 417 | 0.3307 | 0.0 | 0.0 | | 0.3338 | 140.0 | 420 | 0.3296 | 0.0 | 0.0 | | 0.3323 | 141.0 | 423 | 0.3284 | 0.0 | 0.0 | | 0.3316 | 142.0 | 426 | 0.3273 | 0.0 | 0.0 | | 0.3306 | 143.0 | 429 | 0.3262 | 0.0 | 0.0 | | 0.3288 | 144.0 | 432 | 0.3251 | 0.0 | 0.0 | | 0.3281 | 145.0 | 435 | 0.3241 | 0.0 | 0.0 | | 0.3267 | 146.0 | 438 | 0.3230 | 0.0 | 0.0 | | 0.3255 | 147.0 | 441 | 0.3220 | 0.0 | 0.0 | | 0.3247 | 148.0 | 444 | 0.3209 | 0.0 | 0.0 | | 0.3239 | 149.0 | 447 | 0.3199 | 0.0 | 0.0 | | 0.3229 | 150.0 | 450 | 0.3189 | 0.0 | 0.0 | | 0.3209 | 151.0 | 453 | 0.3179 | 0.0 | 0.0 | | 0.3199 | 152.0 | 456 | 0.3170 | 0.0 | 0.0 | | 0.3194 | 153.0 | 459 | 0.3160 | 0.0 | 0.0 | | 0.3188 | 154.0 | 462 | 0.3151 | 0.0 | 0.0 | | 0.3176 | 155.0 | 465 | 0.3141 | 0.0 | 0.0 | | 0.3168 | 156.0 | 468 | 0.3132 | 0.0 | 0.0 | | 0.3155 | 157.0 | 471 | 0.3123 | 0.0 | 0.0 | | 0.3147 | 158.0 | 474 | 0.3114 | 0.0 | 0.0 | | 0.3136 | 159.0 | 477 | 0.3105 | 0.0 | 0.0 | | 0.3131 | 160.0 | 480 | 0.3097 | 0.0 | 0.0 | | 0.3121 | 161.0 | 483 | 0.3088 | 0.0 | 0.0 | | 0.3113 | 162.0 | 486 | 0.3079 | 0.0 | 0.0 | | 0.3097 | 163.0 | 489 | 0.3071 | 0.0 | 0.0 | | 0.3097 | 164.0 | 492 | 0.3063 | 0.0 | 0.0 | | 0.3089 | 165.0 | 495 | 0.3055 | 0.0 | 0.0 | | 0.3078 | 166.0 | 498 | 0.3047 | 0.0 | 0.0 | | 0.3071 | 167.0 | 501 | 0.3039 | 0.0 | 0.0 | | 0.3064 | 168.0 | 504 | 0.3031 | 0.0 | 0.0 | | 0.3057 | 169.0 | 507 | 0.3023 | 0.0 | 0.0 | | 0.3047 | 170.0 | 510 | 0.3015 | 0.0 | 0.0 | | 0.3037 | 171.0 | 513 | 0.3008 | 0.0 | 0.0 | | 0.3034 | 172.0 | 516 | 0.3001 | 0.0 | 0.0 | | 0.3021 | 173.0 | 519 | 0.2993 | 0.0 | 0.0 | | 0.3016 | 174.0 | 522 | 0.2986 | 0.0 | 0.0 | | 0.3005 | 175.0 | 525 | 0.2979 | 0.0 | 0.0 | | 0.3003 | 176.0 | 528 | 0.2972 | 0.0 | 0.0 | | 0.2997 | 177.0 | 531 | 0.2965 | 0.0 | 0.0 | | 0.2985 | 178.0 | 534 | 0.2958 | 0.0 | 0.0 | | 0.2978 | 179.0 | 537 | 0.2951 | 0.0 | 0.0 | | 0.2975 | 180.0 | 540 | 0.2945 | 0.0 | 0.0 | | 0.2968 | 181.0 | 543 | 0.2938 | 0.0 | 0.0 | | 0.296 | 182.0 | 546 | 0.2931 | 0.0 | 0.0 | | 0.2951 | 183.0 | 549 | 0.2925 | 0.0 | 0.0 | | 0.2948 | 184.0 | 552 | 0.2919 | 0.0 | 0.0 | | 0.2942 | 185.0 | 555 | 0.2912 | 0.0 | 0.0 | | 0.2935 | 186.0 | 558 | 0.2906 | 0.0 | 0.0 | | 0.293 | 187.0 | 561 | 0.2900 | 0.0 | 0.0 | | 0.2921 | 188.0 | 564 | 0.2894 | 0.0 | 0.0 | | 0.2917 | 189.0 | 567 | 0.2888 | 0.0 | 0.0 | | 0.2908 | 190.0 | 570 | 0.2882 | 0.0 | 0.0 | | 0.2901 | 191.0 | 573 | 0.2876 | 0.0 | 0.0 | | 0.2893 | 192.0 | 576 | 0.2871 | 0.0 | 0.0 | | 0.2892 | 193.0 | 579 | 0.2865 | 0.0 | 0.0 | | 0.2887 | 194.0 | 582 | 0.2859 | 0.0 | 0.0 | | 0.2873 | 195.0 | 585 | 0.2854 | 0.0 | 0.0 | | 0.2877 | 196.0 | 588 | 0.2848 | 0.0 | 0.0 | | 0.2866 | 197.0 | 591 | 0.2843 | 0.0 | 0.0 | | 0.2865 | 198.0 | 594 | 0.2838 | 0.0 | 0.0 | | 0.2855 | 199.0 | 597 | 0.2832 | 0.0 | 0.0 | | 0.2851 | 200.0 | 600 | 0.2827 | 0.0 | 0.0 | | 0.285 | 201.0 | 603 | 0.2822 | 0.0 | 0.0 | | 0.2842 | 202.0 | 606 | 0.2817 | 0.0 | 0.0 | | 0.2837 | 203.0 | 609 | 0.2812 | 0.0 | 0.0 | | 0.2831 | 204.0 | 612 | 0.2807 | 0.0 | 0.0 | | 0.2825 | 205.0 | 615 | 0.2802 | 0.0 
| 0.0 | | 0.2822 | 206.0 | 618 | 0.2797 | 0.0 | 0.0 | | 0.2821 | 207.0 | 621 | 0.2792 | 0.0 | 0.0 | | 0.2811 | 208.0 | 624 | 0.2788 | 0.0 | 0.0 | | 0.2807 | 209.0 | 627 | 0.2783 | 0.0 | 0.0 | | 0.2802 | 210.0 | 630 | 0.2779 | 0.0 | 0.0 | | 0.2799 | 211.0 | 633 | 0.2774 | 0.0 | 0.0 | | 0.2793 | 212.0 | 636 | 0.2769 | 0.0 | 0.0 | | 0.279 | 213.0 | 639 | 0.2765 | 0.0 | 0.0 | | 0.2783 | 214.0 | 642 | 0.2761 | 0.0 | 0.0 | | 0.2776 | 215.0 | 645 | 0.2756 | 0.0 | 0.0 | | 0.2777 | 216.0 | 648 | 0.2752 | 0.0 | 0.0 | | 0.2768 | 217.0 | 651 | 0.2748 | 0.0 | 0.0 | | 0.2758 | 218.0 | 654 | 0.2744 | 0.0 | 0.0 | | 0.2762 | 219.0 | 657 | 0.2739 | 0.0 | 0.0 | | 0.2754 | 220.0 | 660 | 0.2735 | 0.0 | 0.0 | | 0.2756 | 221.0 | 663 | 0.2731 | 0.0 | 0.0 | | 0.2749 | 222.0 | 666 | 0.2727 | 0.0 | 0.0 | | 0.2749 | 223.0 | 669 | 0.2723 | 0.0 | 0.0 | | 0.2743 | 224.0 | 672 | 0.2719 | 0.0 | 0.0 | | 0.2738 | 225.0 | 675 | 0.2716 | 0.0 | 0.0 | | 0.2731 | 226.0 | 678 | 0.2712 | 0.0 | 0.0 | | 0.2726 | 227.0 | 681 | 0.2708 | 0.0 | 0.0 | | 0.2726 | 228.0 | 684 | 0.2704 | 0.0 | 0.0 | | 0.2718 | 229.0 | 687 | 0.2701 | 0.0 | 0.0 | | 0.2717 | 230.0 | 690 | 0.2697 | 0.0 | 0.0 | | 0.2719 | 231.0 | 693 | 0.2693 | 0.0 | 0.0 | | 0.2715 | 232.0 | 696 | 0.2690 | 0.0 | 0.0 | | 0.2706 | 233.0 | 699 | 0.2686 | 0.0 | 0.0 | | 0.2701 | 234.0 | 702 | 0.2683 | 0.0 | 0.0 | | 0.2702 | 235.0 | 705 | 0.2679 | 0.0 | 0.0 | | 0.2698 | 236.0 | 708 | 0.2676 | 0.0 | 0.0 | | 0.2689 | 237.0 | 711 | 0.2673 | 0.0 | 0.0 | | 0.2696 | 238.0 | 714 | 0.2669 | 0.0 | 0.0 | | 0.269 | 239.0 | 717 | 0.2666 | 0.0 | 0.0 | | 0.2675 | 240.0 | 720 | 0.2663 | 0.0 | 0.0 | | 0.268 | 241.0 | 723 | 0.2660 | 0.0 | 0.0 | | 0.2675 | 242.0 | 726 | 0.2656 | 0.0 | 0.0 | | 0.267 | 243.0 | 729 | 0.2653 | 0.0 | 0.0 | | 0.2669 | 244.0 | 732 | 0.2650 | 0.0 | 0.0 | | 0.266 | 245.0 | 735 | 0.2647 | 0.0 | 0.0 | | 0.2668 | 246.0 | 738 | 0.2644 | 0.0 | 0.0 | | 0.2661 | 247.0 | 741 | 0.2641 | 0.0 | 0.0 | | 0.2655 | 248.0 | 744 | 0.2638 | 0.0 | 0.0 | | 0.2651 | 249.0 | 747 | 0.2635 | 0.0 | 0.0 | | 0.2653 | 250.0 | 750 | 0.2632 | 0.0 | 0.0 | | 0.2643 | 251.0 | 753 | 0.2629 | 0.0 | 0.0 | | 0.264 | 252.0 | 756 | 0.2626 | 0.0 | 0.0 | | 0.264 | 253.0 | 759 | 0.2624 | 0.0 | 0.0 | | 0.2639 | 254.0 | 762 | 0.2621 | 0.0 | 0.0 | | 0.2643 | 255.0 | 765 | 0.2618 | 0.0 | 0.0 | | 0.2637 | 256.0 | 768 | 0.2615 | 0.0 | 0.0 | | 0.2634 | 257.0 | 771 | 0.2613 | 0.0 | 0.0 | | 0.2628 | 258.0 | 774 | 0.2610 | 0.0 | 0.0 | | 0.2619 | 259.0 | 777 | 0.2607 | 0.0 | 0.0 | | 0.2624 | 260.0 | 780 | 0.2605 | 0.0 | 0.0 | | 0.2614 | 261.0 | 783 | 0.2602 | 0.0 | 0.0 | | 0.2611 | 262.0 | 786 | 0.2599 | 0.0 | 0.0 | | 0.2612 | 263.0 | 789 | 0.2597 | 0.0 | 0.0 | | 0.2612 | 264.0 | 792 | 0.2594 | 0.0 | 0.0 | | 0.2611 | 265.0 | 795 | 0.2592 | 0.0 | 0.0 | | 0.2612 | 266.0 | 798 | 0.2589 | 0.0 | 0.0 | | 0.2601 | 267.0 | 801 | 0.2587 | 0.0 | 0.0 | | 0.2606 | 268.0 | 804 | 0.2585 | 0.0 | 0.0 | | 0.2603 | 269.0 | 807 | 0.2582 | 0.0 | 0.0 | | 0.2609 | 270.0 | 810 | 0.2580 | 0.0 | 0.0 | | 0.2593 | 271.0 | 813 | 0.2578 | 0.0 | 0.0 | | 0.2586 | 272.0 | 816 | 0.2575 | 0.0 | 0.0 | | 0.2592 | 273.0 | 819 | 0.2573 | 0.0 | 0.0 | | 0.2586 | 274.0 | 822 | 0.2571 | 0.0 | 0.0 | | 0.2583 | 275.0 | 825 | 0.2568 | 0.0 | 0.0 | | 0.2573 | 276.0 | 828 | 0.2566 | 0.0 | 0.0 | | 0.2578 | 277.0 | 831 | 0.2564 | 0.0 | 0.0 | | 0.2581 | 278.0 | 834 | 0.2562 | 0.0 | 0.0 | | 0.2574 | 279.0 | 837 | 0.2560 | 0.0 | 0.0 | | 0.2575 | 280.0 | 840 | 0.2558 | 0.0 | 0.0 | | 0.2574 | 281.0 | 843 | 0.2555 | 0.0 | 0.0 | | 0.257 | 282.0 | 846 | 0.2553 | 0.0 | 0.0 | | 0.2569 | 
283.0 | 849 | 0.2551 | 0.0 | 0.0 | | 0.2569 | 284.0 | 852 | 0.2549 | 0.0 | 0.0 | | 0.2557 | 285.0 | 855 | 0.2547 | 0.0 | 0.0 | | 0.2559 | 286.0 | 858 | 0.2545 | 0.0 | 0.0 | | 0.2555 | 287.0 | 861 | 0.2543 | 0.0 | 0.0 | | 0.2557 | 288.0 | 864 | 0.2541 | 0.0 | 0.0 | | 0.2552 | 289.0 | 867 | 0.2539 | 0.0 | 0.0 | | 0.2555 | 290.0 | 870 | 0.2537 | 0.0 | 0.0 | | 0.2554 | 291.0 | 873 | 0.2535 | 0.0 | 0.0 | | 0.2547 | 292.0 | 876 | 0.2534 | 0.0 | 0.0 | | 0.2547 | 293.0 | 879 | 0.2532 | 0.0 | 0.0 | | 0.2546 | 294.0 | 882 | 0.2530 | 0.0 | 0.0 | | 0.2543 | 295.0 | 885 | 0.2528 | 0.0 | 0.0 | | 0.2542 | 296.0 | 888 | 0.2526 | 0.0 | 0.0 | | 0.254 | 297.0 | 891 | 0.2524 | 0.0 | 0.0 | | 0.2535 | 298.0 | 894 | 0.2523 | 0.0 | 0.0 | | 0.2537 | 299.0 | 897 | 0.2521 | 0.0 | 0.0 | | 0.2536 | 300.0 | 900 | 0.2519 | 0.0 | 0.0 | | 0.2533 | 301.0 | 903 | 0.2517 | 0.0 | 0.0 | | 0.2526 | 302.0 | 906 | 0.2516 | 0.0 | 0.0 | | 0.2525 | 303.0 | 909 | 0.2514 | 0.0 | 0.0 | | 0.2532 | 304.0 | 912 | 0.2512 | 0.0 | 0.0 | | 0.2526 | 305.0 | 915 | 0.2510 | 0.0 | 0.0 | | 0.2516 | 306.0 | 918 | 0.2509 | 0.0 | 0.0 | | 0.2523 | 307.0 | 921 | 0.2507 | 0.0 | 0.0 | | 0.2522 | 308.0 | 924 | 0.2506 | 0.0 | 0.0 | | 0.2517 | 309.0 | 927 | 0.2504 | 0.0 | 0.0 | | 0.2513 | 310.0 | 930 | 0.2502 | 0.0 | 0.0 | | 0.2515 | 311.0 | 933 | 0.2501 | 0.0 | 0.0 | | 0.2518 | 312.0 | 936 | 0.2499 | 0.0 | 0.0 | | 0.2506 | 313.0 | 939 | 0.2498 | 0.0 | 0.0 | | 0.2501 | 314.0 | 942 | 0.2496 | 0.0 | 0.0 | | 0.2508 | 315.0 | 945 | 0.2495 | 0.0 | 0.0 | | 0.2508 | 316.0 | 948 | 0.2493 | 0.0 | 0.0 | | 0.2506 | 317.0 | 951 | 0.2492 | 0.0 | 0.0 | | 0.2509 | 318.0 | 954 | 0.2490 | 0.0 | 0.0 | | 0.2504 | 319.0 | 957 | 0.2489 | 0.0 | 0.0 | | 0.2501 | 320.0 | 960 | 0.2487 | 0.0 | 0.0 | | 0.2499 | 321.0 | 963 | 0.2486 | 0.0 | 0.0 | | 0.2495 | 322.0 | 966 | 0.2484 | 0.0 | 0.0 | | 0.2491 | 323.0 | 969 | 0.2483 | 0.0 | 0.0 | | 0.2492 | 324.0 | 972 | 0.2482 | 0.0 | 0.0 | | 0.2497 | 325.0 | 975 | 0.2480 | 0.0 | 0.0 | | 0.2495 | 326.0 | 978 | 0.2479 | 0.0 | 0.0 | | 0.2488 | 327.0 | 981 | 0.2477 | 0.0 | 0.0 | | 0.249 | 328.0 | 984 | 0.2476 | 0.0 | 0.0 | | 0.2488 | 329.0 | 987 | 0.2475 | 0.0 | 0.0 | | 0.2492 | 330.0 | 990 | 0.2473 | 0.0 | 0.0 | | 0.2483 | 331.0 | 993 | 0.2472 | 0.0 | 0.0 | | 0.248 | 332.0 | 996 | 0.2471 | 0.0 | 0.0 | | 0.248 | 333.0 | 999 | 0.2469 | 0.0 | 0.0 | | 0.2482 | 334.0 | 1002 | 0.2468 | 0.0 | 0.0 | | 0.248 | 335.0 | 1005 | 0.2467 | 0.0 | 0.0 | | 0.2479 | 336.0 | 1008 | 0.2466 | 0.0 | 0.0 | | 0.2471 | 337.0 | 1011 | 0.2464 | 0.0 | 0.0 | | 0.2479 | 338.0 | 1014 | 0.2463 | 0.0 | 0.0 | | 0.2474 | 339.0 | 1017 | 0.2462 | 0.0 | 0.0 | | 0.2478 | 340.0 | 1020 | 0.2461 | 0.0 | 0.0 | | 0.2471 | 341.0 | 1023 | 0.2460 | 0.0 | 0.0 | | 0.2465 | 342.0 | 1026 | 0.2458 | 0.0 | 0.0 | | 0.2463 | 343.0 | 1029 | 0.2457 | 0.0 | 0.0 | | 0.2475 | 344.0 | 1032 | 0.2456 | 0.0 | 0.0 | | 0.2463 | 345.0 | 1035 | 0.2455 | 0.0 | 0.0 | | 0.2464 | 346.0 | 1038 | 0.2454 | 0.0 | 0.0 | | 0.2469 | 347.0 | 1041 | 0.2453 | 0.0 | 0.0 | | 0.2461 | 348.0 | 1044 | 0.2451 | 0.0 | 0.0 | | 0.246 | 349.0 | 1047 | 0.2450 | 0.0 | 0.0 | | 0.2463 | 350.0 | 1050 | 0.2449 | 0.0 | 0.0 | | 0.2461 | 351.0 | 1053 | 0.2448 | 0.0 | 0.0 | | 0.2453 | 352.0 | 1056 | 0.2447 | 0.0 | 0.0 | | 0.2458 | 353.0 | 1059 | 0.2446 | 0.0 | 0.0 | | 0.2462 | 354.0 | 1062 | 0.2445 | 0.0 | 0.0 | | 0.2456 | 355.0 | 1065 | 0.2444 | 0.0 | 0.0 | | 0.2454 | 356.0 | 1068 | 0.2443 | 0.0 | 0.0 | | 0.2452 | 357.0 | 1071 | 0.2442 | 0.0 | 0.0 | | 0.2453 | 358.0 | 1074 | 0.2441 | 0.0 | 0.0 | | 0.245 | 359.0 | 1077 | 0.2440 | 0.0 | 0.0 | | 
0.2449 | 360.0 | 1080 | 0.2439 | 0.0 | 0.0 | | 0.2446 | 361.0 | 1083 | 0.2438 | 0.0 | 0.0 | | 0.2452 | 362.0 | 1086 | 0.2437 | 0.0 | 0.0 | | 0.2449 | 363.0 | 1089 | 0.2436 | 0.0 | 0.0 | | 0.2444 | 364.0 | 1092 | 0.2435 | 0.0 | 0.0 | | 0.2452 | 365.0 | 1095 | 0.2434 | 0.0 | 0.0 | | 0.2443 | 366.0 | 1098 | 0.2433 | 0.0 | 0.0 | | 0.2441 | 367.0 | 1101 | 0.2432 | 0.0 | 0.0 | | 0.2439 | 368.0 | 1104 | 0.2431 | 0.0 | 0.0 | | 0.2439 | 369.0 | 1107 | 0.2430 | 0.0 | 0.0 | | 0.2443 | 370.0 | 1110 | 0.2429 | 0.0 | 0.0 | | 0.2444 | 371.0 | 1113 | 0.2428 | 0.0 | 0.0 | | 0.243 | 372.0 | 1116 | 0.2427 | 0.0 | 0.0 | | 0.2445 | 373.0 | 1119 | 0.2426 | 0.0 | 0.0 | | 0.2443 | 374.0 | 1122 | 0.2425 | 0.0 | 0.0 | | 0.2434 | 375.0 | 1125 | 0.2424 | 0.0 | 0.0 | | 0.2432 | 376.0 | 1128 | 0.2423 | 0.0 | 0.0 | | 0.2434 | 377.0 | 1131 | 0.2422 | 0.0 | 0.0 | | 0.2438 | 378.0 | 1134 | 0.2422 | 0.0 | 0.0 | | 0.2434 | 379.0 | 1137 | 0.2421 | 0.0 | 0.0 | | 0.2425 | 380.0 | 1140 | 0.2420 | 0.0 | 0.0 | | 0.2428 | 381.0 | 1143 | 0.2419 | 0.0 | 0.0 | | 0.2431 | 382.0 | 1146 | 0.2418 | 0.0 | 0.0 | | 0.2432 | 383.0 | 1149 | 0.2417 | 0.0 | 0.0 | | 0.243 | 384.0 | 1152 | 0.2416 | 0.0 | 0.0 | | 0.2422 | 385.0 | 1155 | 0.2416 | 0.0 | 0.0 | | 0.2422 | 386.0 | 1158 | 0.2415 | 0.0 | 0.0 | | 0.2429 | 387.0 | 1161 | 0.2414 | 0.0 | 0.0 | | 0.2422 | 388.0 | 1164 | 0.2413 | 0.0 | 0.0 | | 0.2426 | 389.0 | 1167 | 0.2412 | 0.0 | 0.0 | | 0.2424 | 390.0 | 1170 | 0.2412 | 0.0 | 0.0 | | 0.2417 | 391.0 | 1173 | 0.2411 | 0.0 | 0.0 | | 0.2421 | 392.0 | 1176 | 0.2410 | 0.0 | 0.0 | | 0.2428 | 393.0 | 1179 | 0.2409 | 0.0 | 0.0 | | 0.2416 | 394.0 | 1182 | 0.2408 | 0.0 | 0.0 | | 0.2418 | 395.0 | 1185 | 0.2408 | 0.0 | 0.0 | | 0.2417 | 396.0 | 1188 | 0.2407 | 0.0 | 0.0 | | 0.2418 | 397.0 | 1191 | 0.2406 | 0.0 | 0.0 | | 0.2414 | 398.0 | 1194 | 0.2405 | 0.0 | 0.0 | | 0.2416 | 399.0 | 1197 | 0.2405 | 0.0 | 0.0 | | 0.242 | 400.0 | 1200 | 0.2404 | 0.0 | 0.0 | | 0.2419 | 401.0 | 1203 | 0.2403 | 0.0 | 0.0 | | 0.2416 | 402.0 | 1206 | 0.2403 | 0.0 | 0.0 | | 0.2415 | 403.0 | 1209 | 0.2402 | 0.0 | 0.0 | | 0.2408 | 404.0 | 1212 | 0.2401 | 0.0 | 0.0 | | 0.2402 | 405.0 | 1215 | 0.2400 | 0.0 | 0.0 | | 0.2412 | 406.0 | 1218 | 0.2400 | 0.0 | 0.0 | | 0.2407 | 407.0 | 1221 | 0.2399 | 0.0 | 0.0 | | 0.2409 | 408.0 | 1224 | 0.2398 | 0.0 | 0.0 | | 0.2408 | 409.0 | 1227 | 0.2398 | 0.0 | 0.0 | | 0.2403 | 410.0 | 1230 | 0.2397 | 0.0 | 0.0 | | 0.241 | 411.0 | 1233 | 0.2396 | 0.0 | 0.0 | | 0.2407 | 412.0 | 1236 | 0.2396 | 0.0 | 0.0 | | 0.2407 | 413.0 | 1239 | 0.2395 | 0.0 | 0.0 | | 0.2402 | 414.0 | 1242 | 0.2394 | 0.0 | 0.0 | | 0.2404 | 415.0 | 1245 | 0.2394 | 0.0 | 0.0 | | 0.2401 | 416.0 | 1248 | 0.2393 | 0.0 | 0.0 | | 0.2393 | 417.0 | 1251 | 0.2392 | 0.0 | 0.0 | | 0.2401 | 418.0 | 1254 | 0.2392 | 0.0 | 0.0 | | 0.2397 | 419.0 | 1257 | 0.2391 | 0.0 | 0.0 | | 0.2408 | 420.0 | 1260 | 0.2390 | 0.0 | 0.0 | | 0.2402 | 421.0 | 1263 | 0.2390 | 0.0 | 0.0 | | 0.24 | 422.0 | 1266 | 0.2389 | 0.0 | 0.0 | | 0.2397 | 423.0 | 1269 | 0.2389 | 0.0 | 0.0 | | 0.2397 | 424.0 | 1272 | 0.2388 | 0.0 | 0.0 | | 0.2394 | 425.0 | 1275 | 0.2387 | 0.0 | 0.0 | | 0.2399 | 426.0 | 1278 | 0.2387 | 0.0 | 0.0 | | 0.2397 | 427.0 | 1281 | 0.2386 | 0.0 | 0.0 | | 0.2394 | 428.0 | 1284 | 0.2386 | 0.0 | 0.0 | | 0.2393 | 429.0 | 1287 | 0.2385 | 0.0 | 0.0 | | 0.2398 | 430.0 | 1290 | 0.2384 | 0.0 | 0.0 | | 0.2393 | 431.0 | 1293 | 0.2384 | 0.0 | 0.0 | | 0.2395 | 432.0 | 1296 | 0.2383 | 0.0 | 0.0 | | 0.2388 | 433.0 | 1299 | 0.2383 | 0.0 | 0.0 | | 0.2393 | 434.0 | 1302 | 0.2382 | 0.0 | 0.0 | | 0.2391 | 435.0 | 1305 | 0.2382 | 
0.0 | 0.0 | | 0.2386 | 436.0 | 1308 | 0.2381 | 0.0 | 0.0 | | 0.2389 | 437.0 | 1311 | 0.2380 | 0.0 | 0.0 | | 0.2385 | 438.0 | 1314 | 0.2380 | 0.0 | 0.0 | | 0.2389 | 439.0 | 1317 | 0.2379 | 0.0 | 0.0 | | 0.2382 | 440.0 | 1320 | 0.2379 | 0.0 | 0.0 | | 0.2389 | 441.0 | 1323 | 0.2378 | 0.0 | 0.0 | | 0.2387 | 442.0 | 1326 | 0.2378 | 0.0 | 0.0 | | 0.2389 | 443.0 | 1329 | 0.2377 | 0.0 | 0.0 | | 0.2381 | 444.0 | 1332 | 0.2377 | 0.0 | 0.0 | | 0.2383 | 445.0 | 1335 | 0.2376 | 0.0 | 0.0 | | 0.2384 | 446.0 | 1338 | 0.2376 | 0.0 | 0.0 | | 0.2385 | 447.0 | 1341 | 0.2375 | 0.0 | 0.0 | | 0.2387 | 448.0 | 1344 | 0.2375 | 0.0 | 0.0 | | 0.2379 | 449.0 | 1347 | 0.2374 | 0.0 | 0.0 | | 0.239 | 450.0 | 1350 | 0.2374 | 0.0 | 0.0 | | 0.2389 | 451.0 | 1353 | 0.2373 | 0.0 | 0.0 | | 0.2383 | 452.0 | 1356 | 0.2373 | 0.0 | 0.0 | | 0.2374 | 453.0 | 1359 | 0.2372 | 0.0 | 0.0 | | 0.2383 | 454.0 | 1362 | 0.2372 | 0.0 | 0.0 | | 0.2382 | 455.0 | 1365 | 0.2371 | 0.0 | 0.0 | | 0.238 | 456.0 | 1368 | 0.2371 | 0.0 | 0.0 | | 0.2382 | 457.0 | 1371 | 0.2370 | 0.0 | 0.0 | | 0.2385 | 458.0 | 1374 | 0.2370 | 0.0 | 0.0 | | 0.2375 | 459.0 | 1377 | 0.2369 | 0.0 | 0.0 | | 0.2376 | 460.0 | 1380 | 0.2369 | 0.0 | 0.0 | | 0.2376 | 461.0 | 1383 | 0.2368 | 0.0 | 0.0 | | 0.238 | 462.0 | 1386 | 0.2368 | 0.0 | 0.0 | | 0.2381 | 463.0 | 1389 | 0.2367 | 0.0 | 0.0 | | 0.2375 | 464.0 | 1392 | 0.2367 | 0.0 | 0.0 | | 0.2379 | 465.0 | 1395 | 0.2367 | 0.0 | 0.0 | | 0.237 | 466.0 | 1398 | 0.2366 | 0.0 | 0.0 | | 0.2369 | 467.0 | 1401 | 0.2366 | 0.0 | 0.0 | | 0.2376 | 468.0 | 1404 | 0.2365 | 0.0 | 0.0 | | 0.2371 | 469.0 | 1407 | 0.2365 | 0.0 | 0.0 | | 0.2373 | 470.0 | 1410 | 0.2364 | 0.0 | 0.0 | | 0.2375 | 471.0 | 1413 | 0.2364 | 0.0 | 0.0 | | 0.2363 | 472.0 | 1416 | 0.2363 | 0.0 | 0.0 | | 0.2378 | 473.0 | 1419 | 0.2363 | 0.0 | 0.0 | | 0.2367 | 474.0 | 1422 | 0.2363 | 0.0 | 0.0 | | 0.2374 | 475.0 | 1425 | 0.2362 | 0.0 | 0.0 | | 0.2377 | 476.0 | 1428 | 0.2362 | 0.0 | 0.0 | | 0.2366 | 477.0 | 1431 | 0.2361 | 0.0 | 0.0 | | 0.2364 | 478.0 | 1434 | 0.2361 | 0.0 | 0.0 | | 0.2369 | 479.0 | 1437 | 0.2361 | 0.0 | 0.0 | | 0.2367 | 480.0 | 1440 | 0.2360 | 0.0 | 0.0 | | 0.2363 | 481.0 | 1443 | 0.2360 | 0.0 | 0.0 | | 0.2369 | 482.0 | 1446 | 0.2359 | 0.0 | 0.0 | | 0.2366 | 483.0 | 1449 | 0.2359 | 0.0 | 0.0 | | 0.2369 | 484.0 | 1452 | 0.2359 | 0.0 | 0.0 | | 0.2369 | 485.0 | 1455 | 0.2358 | 0.0 | 0.0 | | 0.2364 | 486.0 | 1458 | 0.2358 | 0.0 | 0.0 | | 0.2363 | 487.0 | 1461 | 0.2357 | 0.0 | 0.0 | | 0.2356 | 488.0 | 1464 | 0.2357 | 0.0 | 0.0 | | 0.2357 | 489.0 | 1467 | 0.2357 | 0.0 | 0.0 | | 0.2359 | 490.0 | 1470 | 0.2356 | 0.0 | 0.0 | | 0.2366 | 491.0 | 1473 | 0.2356 | 0.0 | 0.0 | | 0.2369 | 492.0 | 1476 | 0.2356 | 0.0 | 0.0 | | 0.2364 | 493.0 | 1479 | 0.2355 | 0.0 | 0.0 | | 0.2362 | 494.0 | 1482 | 0.2355 | 0.0 | 0.0 | | 0.2371 | 495.0 | 1485 | 0.2354 | 0.0 | 0.0 | | 0.2361 | 496.0 | 1488 | 0.2354 | 0.0 | 0.0 | | 0.2359 | 497.0 | 1491 | 0.2354 | 0.0 | 0.0 | | 0.2358 | 498.0 | 1494 | 0.2353 | 0.0 | 0.0 | | 0.2362 | 499.0 | 1497 | 0.2353 | 0.0 | 0.0 | | 0.2368 | 500.0 | 1500 | 0.2353 | 0.0 | 0.0 | | 0.236 | 501.0 | 1503 | 0.2352 | 0.0 | 0.0 | | 0.2353 | 502.0 | 1506 | 0.2352 | 0.0 | 0.0 | | 0.2361 | 503.0 | 1509 | 0.2352 | 0.0 | 0.0 | | 0.2362 | 504.0 | 1512 | 0.2351 | 0.0 | 0.0 | | 0.2359 | 505.0 | 1515 | 0.2351 | 0.0 | 0.0 | | 0.2366 | 506.0 | 1518 | 0.2351 | 0.0 | 0.0 | | 0.2355 | 507.0 | 1521 | 0.2350 | 0.0 | 0.0 | | 0.2357 | 508.0 | 1524 | 0.2350 | 0.0 | 0.0 | | 0.2362 | 509.0 | 1527 | 0.2350 | 0.0 | 0.0 | | 0.2362 | 510.0 | 1530 | 0.2349 | 0.0 | 0.0 | | 0.2353 | 511.0 | 
1533 | 0.2349 | 0.0 | 0.0 | | 0.2361 | 512.0 | 1536 | 0.2349 | 0.0 | 0.0 | | 0.2359 | 513.0 | 1539 | 0.2348 | 0.0 | 0.0 | | 0.2357 | 514.0 | 1542 | 0.2348 | 0.0 | 0.0 | | 0.236 | 515.0 | 1545 | 0.2348 | 0.0 | 0.0 | | 0.236 | 516.0 | 1548 | 0.2347 | 0.0 | 0.0 | | 0.2356 | 517.0 | 1551 | 0.2347 | 0.0 | 0.0 | | 0.2359 | 518.0 | 1554 | 0.2347 | 0.0 | 0.0 | | 0.236 | 519.0 | 1557 | 0.2346 | 0.0 | 0.0 | | 0.2354 | 520.0 | 1560 | 0.2346 | 0.0 | 0.0 | | 0.2363 | 521.0 | 1563 | 0.2346 | 0.0 | 0.0 | | 0.2346 | 522.0 | 1566 | 0.2346 | 0.0 | 0.0 | | 0.2347 | 523.0 | 1569 | 0.2345 | 0.0 | 0.0 | | 0.235 | 524.0 | 1572 | 0.2345 | 0.0 | 0.0 | | 0.2354 | 525.0 | 1575 | 0.2345 | 0.0 | 0.0 | | 0.2355 | 526.0 | 1578 | 0.2344 | 0.0 | 0.0 | | 0.2345 | 527.0 | 1581 | 0.2344 | 0.0 | 0.0 | | 0.2352 | 528.0 | 1584 | 0.2344 | 0.0 | 0.0 | | 0.2347 | 529.0 | 1587 | 0.2343 | 0.0 | 0.0 | | 0.235 | 530.0 | 1590 | 0.2343 | 0.0 | 0.0 | | 0.2349 | 531.0 | 1593 | 0.2343 | 0.0 | 0.0 | | 0.2353 | 532.0 | 1596 | 0.2343 | 0.0 | 0.0 | | 0.235 | 533.0 | 1599 | 0.2342 | 0.0 | 0.0 | | 0.235 | 534.0 | 1602 | 0.2342 | 0.0 | 0.0 | | 0.2345 | 535.0 | 1605 | 0.2342 | 0.0 | 0.0 | | 0.2348 | 536.0 | 1608 | 0.2341 | 0.0 | 0.0 | | 0.2346 | 537.0 | 1611 | 0.2341 | 0.0 | 0.0 | | 0.2352 | 538.0 | 1614 | 0.2341 | 0.0 | 0.0 | | 0.2347 | 539.0 | 1617 | 0.2341 | 0.0 | 0.0 | | 0.2351 | 540.0 | 1620 | 0.2340 | 0.0 | 0.0 | | 0.2341 | 541.0 | 1623 | 0.2340 | 0.0 | 0.0 | | 0.2345 | 542.0 | 1626 | 0.2340 | 0.0 | 0.0 | | 0.2342 | 543.0 | 1629 | 0.2340 | 0.0 | 0.0 | | 0.2346 | 544.0 | 1632 | 0.2339 | 0.0 | 0.0 | | 0.234 | 545.0 | 1635 | 0.2339 | 0.0 | 0.0 | | 0.2347 | 546.0 | 1638 | 0.2339 | 0.0 | 0.0 | | 0.2347 | 547.0 | 1641 | 0.2339 | 0.0 | 0.0 | | 0.2347 | 548.0 | 1644 | 0.2338 | 0.0 | 0.0 | | 0.234 | 549.0 | 1647 | 0.2338 | 0.0 | 0.0 | | 0.2347 | 550.0 | 1650 | 0.2338 | 0.0 | 0.0 | | 0.2342 | 551.0 | 1653 | 0.2338 | 0.0 | 0.0 | | 0.2342 | 552.0 | 1656 | 0.2337 | 0.0 | 0.0 | | 0.2341 | 553.0 | 1659 | 0.2337 | 0.0 | 0.0 | | 0.234 | 554.0 | 1662 | 0.2337 | 0.0 | 0.0 | | 0.2344 | 555.0 | 1665 | 0.2337 | 0.0 | 0.0 | | 0.2337 | 556.0 | 1668 | 0.2336 | 0.0 | 0.0 | | 0.2342 | 557.0 | 1671 | 0.2336 | 0.0 | 0.0 | | 0.2335 | 558.0 | 1674 | 0.2336 | 0.0 | 0.0 | | 0.2343 | 559.0 | 1677 | 0.2336 | 0.0 | 0.0 | | 0.2337 | 560.0 | 1680 | 0.2335 | 0.0 | 0.0 | | 0.2347 | 561.0 | 1683 | 0.2335 | 0.0 | 0.0 | | 0.2345 | 562.0 | 1686 | 0.2335 | 0.0 | 0.0 | | 0.2339 | 563.0 | 1689 | 0.2335 | 0.0 | 0.0 | | 0.2338 | 564.0 | 1692 | 0.2334 | 0.0 | 0.0 | | 0.2338 | 565.0 | 1695 | 0.2334 | 0.0 | 0.0 | | 0.2343 | 566.0 | 1698 | 0.2334 | 0.0 | 0.0 | | 0.2349 | 567.0 | 1701 | 0.2334 | 0.0 | 0.0 | | 0.2337 | 568.0 | 1704 | 0.2333 | 0.0 | 0.0 | | 0.2344 | 569.0 | 1707 | 0.2333 | 0.0 | 0.0 | | 0.2336 | 570.0 | 1710 | 0.2333 | 0.0 | 0.0 | | 0.2346 | 571.0 | 1713 | 0.2333 | 0.0 | 0.0 | | 0.2339 | 572.0 | 1716 | 0.2333 | 0.0 | 0.0 | | 0.2341 | 573.0 | 1719 | 0.2332 | 0.0 | 0.0 | | 0.2338 | 574.0 | 1722 | 0.2332 | 0.0 | 0.0 | | 0.2337 | 575.0 | 1725 | 0.2332 | 0.0 | 0.0 | | 0.2342 | 576.0 | 1728 | 0.2332 | 0.0 | 0.0 | | 0.2344 | 577.0 | 1731 | 0.2332 | 0.0 | 0.0 | | 0.2339 | 578.0 | 1734 | 0.2331 | 0.0 | 0.0 | | 0.2338 | 579.0 | 1737 | 0.2331 | 0.0 | 0.0 | | 0.2341 | 580.0 | 1740 | 0.2331 | 0.0 | 0.0 | | 0.235 | 581.0 | 1743 | 0.2331 | 0.0 | 0.0 | | 0.2335 | 582.0 | 1746 | 0.2331 | 0.0 | 0.0 | | 0.2337 | 583.0 | 1749 | 0.2330 | 0.0 | 0.0 | | 0.2346 | 584.0 | 1752 | 0.2330 | 0.0 | 0.0 | | 0.2329 | 585.0 | 1755 | 0.2330 | 0.0 | 0.0 | | 0.2339 | 586.0 | 1758 | 0.2330 | 0.0 | 0.0 | | 0.2333 | 
587.0 | 1761 | 0.2330 | 0.0 | 0.0 | | 0.2338 | 588.0 | 1764 | 0.2329 | 0.0 | 0.0 | | 0.2332 | 589.0 | 1767 | 0.2329 | 0.0 | 0.0 | | 0.2329 | 590.0 | 1770 | 0.2329 | 0.0 | 0.0 | | 0.2335 | 591.0 | 1773 | 0.2329 | 0.0 | 0.0 | | 0.2334 | 592.0 | 1776 | 0.2329 | 0.0 | 0.0 | | 0.2334 | 593.0 | 1779 | 0.2328 | 0.0 | 0.0 | | 0.2333 | 594.0 | 1782 | 0.2328 | 0.0 | 0.0 | | 0.2329 | 595.0 | 1785 | 0.2328 | 0.0 | 0.0 | | 0.233 | 596.0 | 1788 | 0.2328 | 0.0 | 0.0 | | 0.2327 | 597.0 | 1791 | 0.2328 | 0.0 | 0.0 | | 0.2331 | 598.0 | 1794 | 0.2327 | 0.0 | 0.0 | | 0.2338 | 599.0 | 1797 | 0.2327 | 0.0 | 0.0 | | 0.2344 | 600.0 | 1800 | 0.2327 | 0.0 | 0.0 | | 0.233 | 601.0 | 1803 | 0.2327 | 0.0 | 0.0 | | 0.2336 | 602.0 | 1806 | 0.2327 | 0.0 | 0.0 | | 0.2335 | 603.0 | 1809 | 0.2327 | 0.0 | 0.0 | | 0.2332 | 604.0 | 1812 | 0.2326 | 0.0 | 0.0 | | 0.2331 | 605.0 | 1815 | 0.2326 | 0.0 | 0.0 | | 0.233 | 606.0 | 1818 | 0.2326 | 0.0 | 0.0 | | 0.2327 | 607.0 | 1821 | 0.2326 | 0.0 | 0.0 | | 0.2326 | 608.0 | 1824 | 0.2326 | 0.0 | 0.0 | | 0.2336 | 609.0 | 1827 | 0.2325 | 0.0 | 0.0 | | 0.2331 | 610.0 | 1830 | 0.2325 | 0.0 | 0.0 | | 0.2337 | 611.0 | 1833 | 0.2325 | 0.0 | 0.0 | | 0.2334 | 612.0 | 1836 | 0.2325 | 0.0 | 0.0 | | 0.2332 | 613.0 | 1839 | 0.2325 | 0.0 | 0.0 | | 0.2328 | 614.0 | 1842 | 0.2325 | 0.0 | 0.0 | | 0.2327 | 615.0 | 1845 | 0.2324 | 0.0 | 0.0 | | 0.2331 | 616.0 | 1848 | 0.2324 | 0.0 | 0.0 | | 0.2329 | 617.0 | 1851 | 0.2324 | 0.0 | 0.0 | | 0.2335 | 618.0 | 1854 | 0.2324 | 0.0 | 0.0 | | 0.2334 | 619.0 | 1857 | 0.2324 | 0.0 | 0.0 | | 0.2325 | 620.0 | 1860 | 0.2324 | 0.0 | 0.0 | | 0.2325 | 621.0 | 1863 | 0.2323 | 0.0 | 0.0 | | 0.233 | 622.0 | 1866 | 0.2323 | 0.0 | 0.0 | | 0.233 | 623.0 | 1869 | 0.2323 | 0.0 | 0.0 | | 0.2329 | 624.0 | 1872 | 0.2323 | 0.0 | 0.0 | | 0.2335 | 625.0 | 1875 | 0.2323 | 0.0 | 0.0 | | 0.2321 | 626.0 | 1878 | 0.2323 | 0.0 | 0.0 | | 0.233 | 627.0 | 1881 | 0.2323 | 0.0 | 0.0 | | 0.2331 | 628.0 | 1884 | 0.2322 | 0.0 | 0.0 | | 0.2328 | 629.0 | 1887 | 0.2322 | 0.0 | 0.0 | | 0.2327 | 630.0 | 1890 | 0.2322 | 0.0 | 0.0 | | 0.2334 | 631.0 | 1893 | 0.2322 | 0.0 | 0.0 | | 0.2325 | 632.0 | 1896 | 0.2322 | 0.0 | 0.0 | | 0.233 | 633.0 | 1899 | 0.2322 | 0.0 | 0.0 | | 0.2327 | 634.0 | 1902 | 0.2321 | 0.0 | 0.0 | | 0.2336 | 635.0 | 1905 | 0.2321 | 0.0 | 0.0 | | 0.2328 | 636.0 | 1908 | 0.2321 | 0.0 | 0.0 | | 0.2339 | 637.0 | 1911 | 0.2321 | 0.0 | 0.0 | | 0.2323 | 638.0 | 1914 | 0.2321 | 0.0 | 0.0 | | 0.2323 | 639.0 | 1917 | 0.2321 | 0.0 | 0.0 | | 0.2335 | 640.0 | 1920 | 0.2321 | 0.0 | 0.0 | | 0.2321 | 641.0 | 1923 | 0.2320 | 0.0 | 0.0 | | 0.2327 | 642.0 | 1926 | 0.2320 | 0.0 | 0.0 | | 0.2331 | 643.0 | 1929 | 0.2320 | 0.0 | 0.0 | | 0.2323 | 644.0 | 1932 | 0.2320 | 0.0 | 0.0 | | 0.233 | 645.0 | 1935 | 0.2320 | 0.0 | 0.0 | | 0.2329 | 646.0 | 1938 | 0.2320 | 0.0 | 0.0 | | 0.2324 | 647.0 | 1941 | 0.2320 | 0.0 | 0.0 | | 0.2326 | 648.0 | 1944 | 0.2319 | 0.0 | 0.0 | | 0.2325 | 649.0 | 1947 | 0.2319 | 0.0 | 0.0 | | 0.2327 | 650.0 | 1950 | 0.2319 | 0.0 | 0.0 | | 0.2325 | 651.0 | 1953 | 0.2319 | 0.0 | 0.0 | | 0.2318 | 652.0 | 1956 | 0.2319 | 0.0 | 0.0 | | 0.2327 | 653.0 | 1959 | 0.2319 | 0.0 | 0.0 | | 0.2327 | 654.0 | 1962 | 0.2319 | 0.0 | 0.0 | | 0.2316 | 655.0 | 1965 | 0.2319 | 0.0 | 0.0 | | 0.2327 | 656.0 | 1968 | 0.2318 | 0.0 | 0.0 | | 0.2325 | 657.0 | 1971 | 0.2318 | 0.0 | 0.0 | | 0.2325 | 658.0 | 1974 | 0.2318 | 0.0 | 0.0 | | 0.232 | 659.0 | 1977 | 0.2318 | 0.0 | 0.0 | | 0.2334 | 660.0 | 1980 | 0.2318 | 0.0 | 0.0 | | 0.2324 | 661.0 | 1983 | 0.2318 | 0.0 | 0.0 | | 0.2322 | 662.0 | 1986 | 0.2318 | 0.0 | 0.0 | | 
0.2324 | 663.0 | 1989 | 0.2318 | 0.0 | 0.0 | | 0.232 | 664.0 | 1992 | 0.2317 | 0.0 | 0.0 | | 0.2321 | 665.0 | 1995 | 0.2317 | 0.0 | 0.0 | | 0.2316 | 666.0 | 1998 | 0.2317 | 0.0 | 0.0 | | 0.2329 | 667.0 | 2001 | 0.2317 | 0.0 | 0.0 | | 0.2325 | 668.0 | 2004 | 0.2317 | 0.0 | 0.0 | | 0.2323 | 669.0 | 2007 | 0.2317 | 0.0 | 0.0 | | 0.233 | 670.0 | 2010 | 0.2317 | 0.0 | 0.0 | | 0.2318 | 671.0 | 2013 | 0.2317 | 0.0 | 0.0 | | 0.2325 | 672.0 | 2016 | 0.2316 | 0.0 | 0.0 | | 0.2322 | 673.0 | 2019 | 0.2316 | 0.0 | 0.0 | | 0.2319 | 674.0 | 2022 | 0.2316 | 0.0 | 0.0 | | 0.2322 | 675.0 | 2025 | 0.2316 | 0.0 | 0.0 | | 0.2331 | 676.0 | 2028 | 0.2316 | 0.0 | 0.0 | | 0.2321 | 677.0 | 2031 | 0.2316 | 0.0 | 0.0 | | 0.2326 | 678.0 | 2034 | 0.2316 | 0.0 | 0.0 | | 0.2323 | 679.0 | 2037 | 0.2316 | 0.0 | 0.0 | | 0.2324 | 680.0 | 2040 | 0.2316 | 0.0 | 0.0 | | 0.2323 | 681.0 | 2043 | 0.2315 | 0.0 | 0.0 | | 0.2315 | 682.0 | 2046 | 0.2315 | 0.0 | 0.0 | | 0.2326 | 683.0 | 2049 | 0.2315 | 0.0 | 0.0 | | 0.2319 | 684.0 | 2052 | 0.2315 | 0.0 | 0.0 | | 0.2325 | 685.0 | 2055 | 0.2315 | 0.0 | 0.0 | | 0.2312 | 686.0 | 2058 | 0.2315 | 0.0 | 0.0 | | 0.2314 | 687.0 | 2061 | 0.2315 | 0.0 | 0.0 | | 0.231 | 688.0 | 2064 | 0.2315 | 0.0 | 0.0 | | 0.2323 | 689.0 | 2067 | 0.2315 | 0.0 | 0.0 | | 0.2321 | 690.0 | 2070 | 0.2314 | 0.0 | 0.0 | | 0.232 | 691.0 | 2073 | 0.2314 | 0.0 | 0.0 | | 0.2333 | 692.0 | 2076 | 0.2314 | 0.0 | 0.0 | | 0.2329 | 693.0 | 2079 | 0.2314 | 0.0 | 0.0 | | 0.231 | 694.0 | 2082 | 0.2314 | 0.0 | 0.0 | | 0.2318 | 695.0 | 2085 | 0.2314 | 0.0 | 0.0 | | 0.2316 | 696.0 | 2088 | 0.2314 | 0.0 | 0.0 | | 0.2318 | 697.0 | 2091 | 0.2314 | 0.0 | 0.0 | | 0.2315 | 698.0 | 2094 | 0.2314 | 0.0 | 0.0 | | 0.2322 | 699.0 | 2097 | 0.2314 | 0.0 | 0.0 | | 0.2311 | 700.0 | 2100 | 0.2313 | 0.0 | 0.0 | | 0.2323 | 701.0 | 2103 | 0.2313 | 0.0 | 0.0 | | 0.2316 | 702.0 | 2106 | 0.2313 | 0.0 | 0.0 | | 0.2324 | 703.0 | 2109 | 0.2313 | 0.0 | 0.0 | | 0.2318 | 704.0 | 2112 | 0.2313 | 0.0 | 0.0 | | 0.2315 | 705.0 | 2115 | 0.2313 | 0.0 | 0.0 | | 0.2317 | 706.0 | 2118 | 0.2313 | 0.0 | 0.0 | | 0.2312 | 707.0 | 2121 | 0.2313 | 0.0 | 0.0 | | 0.2323 | 708.0 | 2124 | 0.2313 | 0.0 | 0.0 | | 0.2312 | 709.0 | 2127 | 0.2313 | 0.0 | 0.0 | | 0.2316 | 710.0 | 2130 | 0.2312 | 0.0 | 0.0 | | 0.2312 | 711.0 | 2133 | 0.2312 | 0.0 | 0.0 | | 0.232 | 712.0 | 2136 | 0.2312 | 0.0 | 0.0 | | 0.2319 | 713.0 | 2139 | 0.2312 | 0.0 | 0.0 | | 0.2327 | 714.0 | 2142 | 0.2312 | 0.0 | 0.0 | | 0.232 | 715.0 | 2145 | 0.2312 | 0.0 | 0.0 | | 0.2314 | 716.0 | 2148 | 0.2312 | 0.0 | 0.0 | | 0.2319 | 717.0 | 2151 | 0.2312 | 0.0 | 0.0 | | 0.2317 | 718.0 | 2154 | 0.2312 | 0.0 | 0.0 | | 0.2313 | 719.0 | 2157 | 0.2312 | 0.0 | 0.0 | | 0.2322 | 720.0 | 2160 | 0.2312 | 0.0 | 0.0 | | 0.2318 | 721.0 | 2163 | 0.2311 | 0.0 | 0.0 | | 0.2314 | 722.0 | 2166 | 0.2311 | 0.0 | 0.0 | | 0.2309 | 723.0 | 2169 | 0.2311 | 0.0 | 0.0 | | 0.2316 | 724.0 | 2172 | 0.2311 | 0.0 | 0.0 | | 0.2313 | 725.0 | 2175 | 0.2311 | 0.0 | 0.0 | | 0.2319 | 726.0 | 2178 | 0.2311 | 0.0 | 0.0 | | 0.2315 | 727.0 | 2181 | 0.2311 | 0.0 | 0.0 | | 0.2316 | 728.0 | 2184 | 0.2311 | 0.0 | 0.0 | | 0.2317 | 729.0 | 2187 | 0.2311 | 0.0 | 0.0 | | 0.2313 | 730.0 | 2190 | 0.2311 | 0.0 | 0.0 | | 0.2319 | 731.0 | 2193 | 0.2311 | 0.0 | 0.0 | | 0.2315 | 732.0 | 2196 | 0.2311 | 0.0 | 0.0 | | 0.2316 | 733.0 | 2199 | 0.2310 | 0.0 | 0.0 | | 0.2322 | 734.0 | 2202 | 0.2310 | 0.0 | 0.0 | | 0.2316 | 735.0 | 2205 | 0.2310 | 0.0 | 0.0 | | 0.2318 | 736.0 | 2208 | 0.2310 | 0.0 | 0.0 | | 0.2325 | 737.0 | 2211 | 0.2310 | 0.0 | 0.0 | | 0.231 | 738.0 | 2214 | 0.2310 | 0.0 
| 0.0 | | 0.2321 | 739.0 | 2217 | 0.2310 | 0.0 | 0.0 | | 0.2317 | 740.0 | 2220 | 0.2310 | 0.0 | 0.0 | | 0.2314 | 741.0 | 2223 | 0.2310 | 0.0 | 0.0 | | 0.2313 | 742.0 | 2226 | 0.2310 | 0.0 | 0.0 | | 0.2315 | 743.0 | 2229 | 0.2310 | 0.0 | 0.0 | | 0.2321 | 744.0 | 2232 | 0.2310 | 0.0 | 0.0 | | 0.2316 | 745.0 | 2235 | 0.2310 | 0.0 | 0.0 | | 0.2318 | 746.0 | 2238 | 0.2309 | 0.0 | 0.0 | | 0.2321 | 747.0 | 2241 | 0.2309 | 0.0 | 0.0 | | 0.2317 | 748.0 | 2244 | 0.2309 | 0.0 | 0.0 | | 0.2301 | 749.0 | 2247 | 0.2309 | 0.0 | 0.0 | | 0.2328 | 750.0 | 2250 | 0.2309 | 0.0 | 0.0 | | 0.233 | 751.0 | 2253 | 0.2309 | 0.0 | 0.0 | | 0.2317 | 752.0 | 2256 | 0.2309 | 0.0 | 0.0 | | 0.2302 | 753.0 | 2259 | 0.2309 | 0.0 | 0.0 | | 0.2309 | 754.0 | 2262 | 0.2309 | 0.0 | 0.0 | | 0.2317 | 755.0 | 2265 | 0.2309 | 0.0 | 0.0 | | 0.2312 | 756.0 | 2268 | 0.2309 | 0.0 | 0.0 | | 0.2311 | 757.0 | 2271 | 0.2309 | 0.0 | 0.0 | | 0.2318 | 758.0 | 2274 | 0.2309 | 0.0 | 0.0 | | 0.231 | 759.0 | 2277 | 0.2309 | 0.0 | 0.0 | | 0.2318 | 760.0 | 2280 | 0.2308 | 0.0 | 0.0 | | 0.2322 | 761.0 | 2283 | 0.2308 | 0.0 | 0.0 | | 0.2322 | 762.0 | 2286 | 0.2308 | 0.0 | 0.0 | | 0.2322 | 763.0 | 2289 | 0.2308 | 0.0 | 0.0 | | 0.231 | 764.0 | 2292 | 0.2308 | 0.0 | 0.0 | | 0.232 | 765.0 | 2295 | 0.2308 | 0.0 | 0.0 | | 0.2321 | 766.0 | 2298 | 0.2308 | 0.0 | 0.0 | | 0.2314 | 767.0 | 2301 | 0.2308 | 0.0 | 0.0 | | 0.2311 | 768.0 | 2304 | 0.2308 | 0.0 | 0.0 | | 0.2321 | 769.0 | 2307 | 0.2308 | 0.0 | 0.0 | | 0.2316 | 770.0 | 2310 | 0.2308 | 0.0 | 0.0 | | 0.232 | 771.0 | 2313 | 0.2308 | 0.0 | 0.0 | | 0.2318 | 772.0 | 2316 | 0.2308 | 0.0 | 0.0 | | 0.2308 | 773.0 | 2319 | 0.2308 | 0.0 | 0.0 | | 0.231 | 774.0 | 2322 | 0.2308 | 0.0 | 0.0 | | 0.2321 | 775.0 | 2325 | 0.2307 | 0.0 | 0.0 | | 0.2315 | 776.0 | 2328 | 0.2307 | 0.0 | 0.0 | | 0.2327 | 777.0 | 2331 | 0.2307 | 0.0 | 0.0 | | 0.231 | 778.0 | 2334 | 0.2307 | 0.0 | 0.0 | | 0.2312 | 779.0 | 2337 | 0.2307 | 0.0 | 0.0 | | 0.2315 | 780.0 | 2340 | 0.2307 | 0.0 | 0.0 | | 0.2311 | 781.0 | 2343 | 0.2307 | 0.0 | 0.0 | | 0.2318 | 782.0 | 2346 | 0.2307 | 0.0 | 0.0 | | 0.2305 | 783.0 | 2349 | 0.2307 | 0.0 | 0.0 | | 0.2314 | 784.0 | 2352 | 0.2307 | 0.0 | 0.0 | | 0.232 | 785.0 | 2355 | 0.2307 | 0.0 | 0.0 | | 0.2316 | 786.0 | 2358 | 0.2307 | 0.0 | 0.0 | | 0.2313 | 787.0 | 2361 | 0.2307 | 0.0 | 0.0 | | 0.2312 | 788.0 | 2364 | 0.2307 | 0.0 | 0.0 | | 0.2306 | 789.0 | 2367 | 0.2307 | 0.0 | 0.0 | | 0.2312 | 790.0 | 2370 | 0.2307 | 0.0 | 0.0 | | 0.2307 | 791.0 | 2373 | 0.2307 | 0.0 | 0.0 | | 0.2321 | 792.0 | 2376 | 0.2306 | 0.0 | 0.0 | | 0.2312 | 793.0 | 2379 | 0.2306 | 0.0 | 0.0 | | 0.2304 | 794.0 | 2382 | 0.2306 | 0.0 | 0.0 | | 0.2309 | 795.0 | 2385 | 0.2306 | 0.0 | 0.0 | | 0.2315 | 796.0 | 2388 | 0.2306 | 0.0 | 0.0 | | 0.2311 | 797.0 | 2391 | 0.2306 | 0.0 | 0.0 | | 0.2305 | 798.0 | 2394 | 0.2306 | 0.0 | 0.0 | | 0.2314 | 799.0 | 2397 | 0.2306 | 0.0 | 0.0 | | 0.2315 | 800.0 | 2400 | 0.2306 | 0.0 | 0.0 | | 0.2309 | 801.0 | 2403 | 0.2306 | 0.0 | 0.0 | | 0.2314 | 802.0 | 2406 | 0.2306 | 0.0 | 0.0 | | 0.231 | 803.0 | 2409 | 0.2306 | 0.0 | 0.0 | | 0.2302 | 804.0 | 2412 | 0.2306 | 0.0 | 0.0 | | 0.2306 | 805.0 | 2415 | 0.2306 | 0.0 | 0.0 | | 0.2315 | 806.0 | 2418 | 0.2306 | 0.0 | 0.0 | | 0.2316 | 807.0 | 2421 | 0.2306 | 0.0 | 0.0 | | 0.2317 | 808.0 | 2424 | 0.2306 | 0.0 | 0.0 | | 0.2308 | 809.0 | 2427 | 0.2306 | 0.0 | 0.0 | | 0.2307 | 810.0 | 2430 | 0.2305 | 0.0 | 0.0 | | 0.2305 | 811.0 | 2433 | 0.2305 | 0.0 | 0.0 | | 0.231 | 812.0 | 2436 | 0.2305 | 0.0 | 0.0 | | 0.231 | 813.0 | 2439 | 0.2305 | 0.0 | 0.0 | | 0.2316 | 814.0 | 2442 | 0.2305 
| 0.0 | 0.0 | | 0.2312 | 815.0 | 2445 | 0.2305 | 0.0 | 0.0 | | 0.2315 | 816.0 | 2448 | 0.2305 | 0.0 | 0.0 | | 0.2307 | 817.0 | 2451 | 0.2305 | 0.0 | 0.0 | | 0.2309 | 818.0 | 2454 | 0.2305 | 0.0 | 0.0 | | 0.2319 | 819.0 | 2457 | 0.2305 | 0.0 | 0.0 | | 0.2308 | 820.0 | 2460 | 0.2305 | 0.0 | 0.0 | | 0.2317 | 821.0 | 2463 | 0.2305 | 0.0 | 0.0 | | 0.2312 | 822.0 | 2466 | 0.2305 | 0.0 | 0.0 | | 0.2316 | 823.0 | 2469 | 0.2305 | 0.0 | 0.0 | | 0.231 | 824.0 | 2472 | 0.2305 | 0.0 | 0.0 | | 0.2311 | 825.0 | 2475 | 0.2305 | 0.0 | 0.0 | | 0.2312 | 826.0 | 2478 | 0.2305 | 0.0 | 0.0 | | 0.2312 | 827.0 | 2481 | 0.2305 | 0.0 | 0.0 | | 0.2316 | 828.0 | 2484 | 0.2305 | 0.0 | 0.0 | | 0.2315 | 829.0 | 2487 | 0.2305 | 0.0 | 0.0 | | 0.2312 | 830.0 | 2490 | 0.2305 | 0.0 | 0.0 | | 0.2308 | 831.0 | 2493 | 0.2305 | 0.0 | 0.0 | | 0.231 | 832.0 | 2496 | 0.2304 | 0.0 | 0.0 | | 0.2304 | 833.0 | 2499 | 0.2304 | 0.0 | 0.0 | | 0.2311 | 834.0 | 2502 | 0.2304 | 0.0 | 0.0 | | 0.2308 | 835.0 | 2505 | 0.2304 | 0.0 | 0.0 | | 0.2303 | 836.0 | 2508 | 0.2304 | 0.0 | 0.0 | | 0.2322 | 837.0 | 2511 | 0.2304 | 0.0 | 0.0 | | 0.2304 | 838.0 | 2514 | 0.2304 | 0.0 | 0.0 | | 0.231 | 839.0 | 2517 | 0.2304 | 0.0 | 0.0 | | 0.2313 | 840.0 | 2520 | 0.2304 | 0.0 | 0.0 | | 0.2318 | 841.0 | 2523 | 0.2304 | 0.0 | 0.0 | | 0.2306 | 842.0 | 2526 | 0.2304 | 0.0 | 0.0 | | 0.2314 | 843.0 | 2529 | 0.2304 | 0.0 | 0.0 | | 0.231 | 844.0 | 2532 | 0.2304 | 0.0 | 0.0 | | 0.2309 | 845.0 | 2535 | 0.2304 | 0.0 | 0.0 | | 0.2309 | 846.0 | 2538 | 0.2304 | 0.0 | 0.0 | | 0.2308 | 847.0 | 2541 | 0.2304 | 0.0 | 0.0 | | 0.231 | 848.0 | 2544 | 0.2304 | 0.0 | 0.0 | | 0.2318 | 849.0 | 2547 | 0.2304 | 0.0 | 0.0 | | 0.2313 | 850.0 | 2550 | 0.2304 | 0.0 | 0.0 | | 0.2315 | 851.0 | 2553 | 0.2304 | 0.0 | 0.0 | | 0.2308 | 852.0 | 2556 | 0.2304 | 0.0 | 0.0 | | 0.2314 | 853.0 | 2559 | 0.2304 | 0.0 | 0.0 | | 0.2301 | 854.0 | 2562 | 0.2304 | 0.0 | 0.0 | | 0.2304 | 855.0 | 2565 | 0.2304 | 0.0 | 0.0 | | 0.2302 | 856.0 | 2568 | 0.2304 | 0.0 | 0.0 | | 0.2308 | 857.0 | 2571 | 0.2303 | 0.0 | 0.0 | | 0.2308 | 858.0 | 2574 | 0.2303 | 0.0 | 0.0 | | 0.2313 | 859.0 | 2577 | 0.2303 | 0.0 | 0.0 | | 0.2301 | 860.0 | 2580 | 0.2303 | 0.0 | 0.0 | | 0.2313 | 861.0 | 2583 | 0.2303 | 0.0 | 0.0 | | 0.2308 | 862.0 | 2586 | 0.2303 | 0.0 | 0.0 | | 0.2312 | 863.0 | 2589 | 0.2303 | 0.0 | 0.0 | | 0.2312 | 864.0 | 2592 | 0.2303 | 0.0 | 0.0 | | 0.2302 | 865.0 | 2595 | 0.2303 | 0.0 | 0.0 | | 0.2306 | 866.0 | 2598 | 0.2303 | 0.0 | 0.0 | | 0.2302 | 867.0 | 2601 | 0.2303 | 0.0 | 0.0 | | 0.2312 | 868.0 | 2604 | 0.2303 | 0.0 | 0.0 | | 0.2304 | 869.0 | 2607 | 0.2303 | 0.0 | 0.0 | | 0.2305 | 870.0 | 2610 | 0.2303 | 0.0 | 0.0 | | 0.2303 | 871.0 | 2613 | 0.2303 | 0.0 | 0.0 | | 0.2309 | 872.0 | 2616 | 0.2303 | 0.0 | 0.0 | | 0.2311 | 873.0 | 2619 | 0.2303 | 0.0 | 0.0 | | 0.2318 | 874.0 | 2622 | 0.2303 | 0.0 | 0.0 | | 0.2319 | 875.0 | 2625 | 0.2303 | 0.0 | 0.0 | | 0.2309 | 876.0 | 2628 | 0.2303 | 0.0 | 0.0 | | 0.2312 | 877.0 | 2631 | 0.2303 | 0.0 | 0.0 | | 0.2305 | 878.0 | 2634 | 0.2303 | 0.0 | 0.0 | | 0.231 | 879.0 | 2637 | 0.2303 | 0.0 | 0.0 | | 0.2303 | 880.0 | 2640 | 0.2303 | 0.0 | 0.0 | | 0.2313 | 881.0 | 2643 | 0.2303 | 0.0 | 0.0 | | 0.2309 | 882.0 | 2646 | 0.2303 | 0.0 | 0.0 | | 0.2306 | 883.0 | 2649 | 0.2303 | 0.0 | 0.0 | | 0.2311 | 884.0 | 2652 | 0.2303 | 0.0 | 0.0 | | 0.2305 | 885.0 | 2655 | 0.2303 | 0.0 | 0.0 | | 0.2313 | 886.0 | 2658 | 0.2303 | 0.0 | 0.0 | | 0.2302 | 887.0 | 2661 | 0.2303 | 0.0 | 0.0 | | 0.2315 | 888.0 | 2664 | 0.2302 | 0.0 | 0.0 | | 0.2317 | 889.0 | 2667 | 0.2302 | 0.0 | 0.0 | | 0.2302 | 890.0 | 
2670 | 0.2302 | 0.0 | 0.0 | | 0.2303 | 891.0 | 2673 | 0.2302 | 0.0 | 0.0 | | 0.2305 | 892.0 | 2676 | 0.2302 | 0.0 | 0.0 | | 0.2311 | 893.0 | 2679 | 0.2302 | 0.0 | 0.0 | | 0.2309 | 894.0 | 2682 | 0.2302 | 0.0 | 0.0 | | 0.2309 | 895.0 | 2685 | 0.2302 | 0.0 | 0.0 | | 0.2313 | 896.0 | 2688 | 0.2302 | 0.0 | 0.0 | | 0.2306 | 897.0 | 2691 | 0.2302 | 0.0 | 0.0 | | 0.231 | 898.0 | 2694 | 0.2302 | 0.0 | 0.0 | | 0.2307 | 899.0 | 2697 | 0.2302 | 0.0 | 0.0 | | 0.2317 | 900.0 | 2700 | 0.2302 | 0.0 | 0.0 | | 0.2314 | 901.0 | 2703 | 0.2302 | 0.0 | 0.0 | | 0.2303 | 902.0 | 2706 | 0.2302 | 0.0 | 0.0 | | 0.2307 | 903.0 | 2709 | 0.2302 | 0.0 | 0.0 | | 0.2308 | 904.0 | 2712 | 0.2302 | 0.0 | 0.0 | | 0.2315 | 905.0 | 2715 | 0.2302 | 0.0 | 0.0 | | 0.2307 | 906.0 | 2718 | 0.2302 | 0.0 | 0.0 | | 0.2309 | 907.0 | 2721 | 0.2302 | 0.0 | 0.0 | | 0.2305 | 908.0 | 2724 | 0.2302 | 0.0 | 0.0 | | 0.2313 | 909.0 | 2727 | 0.2302 | 0.0 | 0.0 | | 0.2307 | 910.0 | 2730 | 0.2302 | 0.0 | 0.0 | | 0.2306 | 911.0 | 2733 | 0.2302 | 0.0 | 0.0 | | 0.2312 | 912.0 | 2736 | 0.2302 | 0.0 | 0.0 | | 0.231 | 913.0 | 2739 | 0.2302 | 0.0 | 0.0 | | 0.2312 | 914.0 | 2742 | 0.2302 | 0.0 | 0.0 | | 0.2304 | 915.0 | 2745 | 0.2302 | 0.0 | 0.0 | | 0.231 | 916.0 | 2748 | 0.2302 | 0.0 | 0.0 | | 0.2311 | 917.0 | 2751 | 0.2302 | 0.0 | 0.0 | | 0.2307 | 918.0 | 2754 | 0.2302 | 0.0 | 0.0 | | 0.2314 | 919.0 | 2757 | 0.2302 | 0.0 | 0.0 | | 0.2311 | 920.0 | 2760 | 0.2302 | 0.0 | 0.0 | | 0.2305 | 921.0 | 2763 | 0.2302 | 0.0 | 0.0 | | 0.2308 | 922.0 | 2766 | 0.2302 | 0.0 | 0.0 | | 0.2303 | 923.0 | 2769 | 0.2302 | 0.0 | 0.0 | | 0.2303 | 924.0 | 2772 | 0.2302 | 0.0 | 0.0 | | 0.2306 | 925.0 | 2775 | 0.2302 | 0.0 | 0.0 | | 0.2316 | 926.0 | 2778 | 0.2302 | 0.0 | 0.0 | | 0.2307 | 927.0 | 2781 | 0.2302 | 0.0 | 0.0 | | 0.2309 | 928.0 | 2784 | 0.2302 | 0.0 | 0.0 | | 0.2309 | 929.0 | 2787 | 0.2302 | 0.0 | 0.0 | | 0.2305 | 930.0 | 2790 | 0.2302 | 0.0 | 0.0 | | 0.231 | 931.0 | 2793 | 0.2302 | 0.0 | 0.0 | | 0.2305 | 932.0 | 2796 | 0.2302 | 0.0 | 0.0 | | 0.2313 | 933.0 | 2799 | 0.2302 | 0.0 | 0.0 | | 0.2303 | 934.0 | 2802 | 0.2301 | 0.0 | 0.0 | | 0.2298 | 935.0 | 2805 | 0.2301 | 0.0 | 0.0 | | 0.2308 | 936.0 | 2808 | 0.2301 | 0.0 | 0.0 | | 0.2302 | 937.0 | 2811 | 0.2301 | 0.0 | 0.0 | | 0.2307 | 938.0 | 2814 | 0.2301 | 0.0 | 0.0 | | 0.2309 | 939.0 | 2817 | 0.2301 | 0.0 | 0.0 | | 0.2308 | 940.0 | 2820 | 0.2301 | 0.0 | 0.0 | | 0.2304 | 941.0 | 2823 | 0.2301 | 0.0 | 0.0 | | 0.2308 | 942.0 | 2826 | 0.2301 | 0.0 | 0.0 | | 0.2305 | 943.0 | 2829 | 0.2301 | 0.0 | 0.0 | | 0.2311 | 944.0 | 2832 | 0.2301 | 0.0 | 0.0 | | 0.2312 | 945.0 | 2835 | 0.2301 | 0.0 | 0.0 | | 0.2311 | 946.0 | 2838 | 0.2301 | 0.0 | 0.0 | | 0.2305 | 947.0 | 2841 | 0.2301 | 0.0 | 0.0 | | 0.2307 | 948.0 | 2844 | 0.2301 | 0.0 | 0.0 | | 0.2313 | 949.0 | 2847 | 0.2301 | 0.0 | 0.0 | | 0.2311 | 950.0 | 2850 | 0.2301 | 0.0 | 0.0 | | 0.231 | 951.0 | 2853 | 0.2301 | 0.0 | 0.0 | | 0.2304 | 952.0 | 2856 | 0.2301 | 0.0 | 0.0 | | 0.2304 | 953.0 | 2859 | 0.2301 | 0.0 | 0.0 | | 0.2314 | 954.0 | 2862 | 0.2301 | 0.0 | 0.0 | | 0.2312 | 955.0 | 2865 | 0.2301 | 0.0 | 0.0 | | 0.2309 | 956.0 | 2868 | 0.2301 | 0.0 | 0.0 | | 0.2302 | 957.0 | 2871 | 0.2301 | 0.0 | 0.0 | | 0.23 | 958.0 | 2874 | 0.2301 | 0.0 | 0.0 | | 0.2311 | 959.0 | 2877 | 0.2301 | 0.0 | 0.0 | | 0.2308 | 960.0 | 2880 | 0.2301 | 0.0 | 0.0 | | 0.2314 | 961.0 | 2883 | 0.2301 | 0.0 | 0.0 | | 0.2301 | 962.0 | 2886 | 0.2301 | 0.0 | 0.0 | | 0.2309 | 963.0 | 2889 | 0.2301 | 0.0 | 0.0 | | 0.2307 | 964.0 | 2892 | 0.2301 | 0.0 | 0.0 | | 0.2306 | 965.0 | 2895 | 0.2301 | 0.0 | 0.0 | | 0.2304 
| 966.0 | 2898 | 0.2301 | 0.0 | 0.0 | | 0.2302 | 967.0 | 2901 | 0.2301 | 0.0 | 0.0 | | 0.2303 | 968.0 | 2904 | 0.2301 | 0.0 | 0.0 | | 0.231 | 969.0 | 2907 | 0.2301 | 0.0 | 0.0 | | 0.2302 | 970.0 | 2910 | 0.2301 | 0.0 | 0.0 | | 0.2307 | 971.0 | 2913 | 0.2301 | 0.0 | 0.0 | | 0.2307 | 972.0 | 2916 | 0.2301 | 0.0 | 0.0 | | 0.231 | 973.0 | 2919 | 0.2301 | 0.0 | 0.0 | | 0.2316 | 974.0 | 2922 | 0.2301 | 0.0 | 0.0 | | 0.2309 | 975.0 | 2925 | 0.2301 | 0.0 | 0.0 | | 0.2307 | 976.0 | 2928 | 0.2301 | 0.0 | 0.0 | | 0.2305 | 977.0 | 2931 | 0.2301 | 0.0 | 0.0 | | 0.2302 | 978.0 | 2934 | 0.2301 | 0.0 | 0.0 | | 0.2306 | 979.0 | 2937 | 0.2301 | 0.0 | 0.0 | | 0.2306 | 980.0 | 2940 | 0.2301 | 0.0 | 0.0 | | 0.2297 | 981.0 | 2943 | 0.2301 | 0.0 | 0.0 | | 0.2301 | 982.0 | 2946 | 0.2301 | 0.0 | 0.0 | | 0.2308 | 983.0 | 2949 | 0.2301 | 0.0 | 0.0 | | 0.2306 | 984.0 | 2952 | 0.2301 | 0.0 | 0.0 | | 0.231 | 985.0 | 2955 | 0.2301 | 0.0 | 0.0 | | 0.2298 | 986.0 | 2958 | 0.2301 | 0.0 | 0.0 | | 0.2303 | 987.0 | 2961 | 0.2301 | 0.0 | 0.0 | | 0.2304 | 988.0 | 2964 | 0.2301 | 0.0 | 0.0 | | 0.2303 | 989.0 | 2967 | 0.2301 | 0.0 | 0.0 | | 0.2308 | 990.0 | 2970 | 0.2301 | 0.0 | 0.0 | | 0.2312 | 991.0 | 2973 | 0.2301 | 0.0 | 0.0 | | 0.2302 | 992.0 | 2976 | 0.2301 | 0.0 | 0.0 | | 0.2298 | 993.0 | 2979 | 0.2301 | 0.0 | 0.0 | | 0.2299 | 994.0 | 2982 | 0.2301 | 0.0 | 0.0 | | 0.231 | 995.0 | 2985 | 0.2301 | 0.0 | 0.0 | | 0.2314 | 996.0 | 2988 | 0.2301 | 0.0 | 0.0 | | 0.2299 | 997.0 | 2991 | 0.2301 | 0.0 | 0.0 | | 0.2304 | 998.0 | 2994 | 0.2301 | 0.0 | 0.0 | | 0.2305 | 999.0 | 2997 | 0.2301 | 0.0 | 0.0 | | 0.23 | 1000.0 | 3000 | 0.2301 | 0.0 | 0.0 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
s3nh/togethercomputer-LLaMA-2-7B-32K-open-Orca-v1-GGML
s3nh
2023-07-31T07:15:35Z
0
9
transformers
[ "transformers", "text-generation-inference", "text-generation", "en", "endpoints_compatible", "region:us" ]
text-generation
2023-07-31T06:50:33Z
---
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
library_name: transformers
---

## Original model card

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGML format model files for [this project](https://huggingface.co/NickyNicky/togethercomputer-LLaMA-2-7B-32K-open-Orca-v1).

### Inference

```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders: point them at the local
# directory and GGML file name of the downloaded weights.
llm = AutoModelForCausalLM.from_pretrained(
    output_dir, model_file=ggml_file, gpu_layers=32, model_type="llama"
)

manual_input: str = "Tell me about your last dream, please."
llm(manual_input, max_new_tokens=256, temperature=0.9, top_p=0.7)
```

# Original model card
saurabh2086/ppo-LunarLander-v2
saurabh2086
2023-07-31T07:14:52Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T07:14:30Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 242.65 +/- 16.98
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="saurabh2086/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
schakal/OUTPUT_DIR
schakal
2023-07-31T07:12:32Z
28
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-31T06:54:44Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - schakal/output_dir

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
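
## Usage

A minimal inference sketch, assuming the standard diffusers text-to-image workflow; the prompt reuses the instance token and everything else is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "schakal/OUTPUT_DIR", torch_dtype=torch.float16
).to("cuda")

# Reuse the instance token ("sks dog") the weights were trained on.
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks-dog.png")
```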
JosefJilek/AAT-JPEG-Artefact-Detection
JosefJilek
2023-07-31T07:11:56Z
0
2
tf-keras
[ "tf-keras", "image-classification", "license:creativeml-openrail-m", "region:us" ]
image-classification
2023-07-29T12:36:59Z
---
license: creativeml-openrail-m
tags:
- image-classification
---

# AAT JPEG Artefact Detection

Automatically detects JPEG artefacts, classifying images into the categories C100 (no artefacts), C95, C90, C85, C80, C70, C60, C40, and C20 (really terrible artefacts). The input is a 1024x1024 image. The image is converted to grayscale to drop redundant information and reduce overall training and inference time. The model is a custom design and does not use any other model as a base.

# Training stats

Around 60k images were used during training.
<img src="https://huggingface.co/JosefJilek/AAT-JPEG-Artefact-Detection/resolve/main/v1.0-stats.png"></img>
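
# Usage

A rough inference sketch for a tf-keras checkpoint like this one. The preprocessing (grayscale, 1024x1024, values scaled to [0, 1]) and the class ordering are assumptions inferred from the description above, not documented behaviour:

```python
import numpy as np
from huggingface_hub import from_pretrained_keras
from PIL import Image

model = from_pretrained_keras("JosefJilek/AAT-JPEG-Artefact-Detection")

# Assumed preprocessing: grayscale, 1024x1024, scaled to [0, 1].
img = Image.open("photo.jpg").convert("L").resize((1024, 1024))
x = np.asarray(img, dtype="float32")[None, :, :, None] / 255.0

probs = model.predict(x)[0]
labels = ["C100", "C95", "C90", "C85", "C80", "C70", "C60", "C40", "C20"]  # assumed order
print(labels[int(np.argmax(probs))])
```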
AdiOO7/Azure-tickets-Classifier-llama-1
AdiOO7
2023-07-31T07:10:42Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T07:10:40Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

### Framework versions

- PEFT 0.5.0.dev0
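
## Usage

A sketch of how a PEFT adapter repo like this one is typically loaded. The card does not name the base checkpoint, so the one below is an assumption; substitute the model the adapter was actually trained from:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "AdiOO7/Azure-tickets-Classifier-llama-1")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```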
pradeepiisc/xlm-roberta-base-finetuned-panx-it
pradeepiisc
2023-07-31T07:01:06Z
104
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-31T06:58:28Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      config: PAN-X.it
      split: validation
      args: PAN-X.it
    metrics:
    - name: F1
      type: f1
      value: 0.8188848188848189
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-it

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2376
- F1: 0.8189

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6997        | 1.0   | 70   | 0.3258          | 0.7623 |
| 0.2884        | 2.0   | 140  | 0.2660          | 0.8037 |
| 0.1775        | 3.0   | 210  | 0.2376          | 0.8189 |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.10.1
- Tokenizers 0.13.3
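
## Usage

A minimal NER sketch using the standard transformers pipeline (the Italian example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pradeepiisc/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Leonardo da Vinci nacque ad Anchiano, vicino a Vinci."))
```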
zwellington/clu-pubhealth-base-1
zwellington
2023-07-31T06:59:19Z
105
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:clupubhealth", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-31T06:56:34Z
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
datasets:
- clupubhealth
model-index:
- name: clu-pubhealth-base-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# clu-pubhealth-base-1

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the clupubhealth dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 100
- total_train_batch_size: 800
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
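
## Usage

A minimal summarization sketch, assuming this BART checkpoint is called through the standard seq2seq pipeline (the input string is a placeholder for a public-health claim plus its evidence text):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="zwellington/clu-pubhealth-base-1")
text = "Paste a public-health claim and its supporting evidence text here."
print(summarizer(text, max_length=96, min_length=16)[0]["summary_text"])
```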
pradeepiisc/xlm-roberta-base-finetuned-panx-fr
pradeepiisc
2023-07-31T06:54:31Z
104
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-30T05:56:36Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      config: PAN-X.fr
      split: validation
      args: PAN-X.fr
    metrics:
    - name: F1
      type: f1
      value: 0.8492640839113517
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2606
- F1: 0.8493

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5508        | 1.0   | 191  | 0.3140          | 0.7874 |
| 0.2698        | 2.0   | 382  | 0.2807          | 0.8322 |
| 0.1837        | 3.0   | 573  | 0.2606          | 0.8493 |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.10.1
- Tokenizers 0.13.3
c4big/xlm-roberta-base-finetuned-panx-de
c4big
2023-07-31T06:54:00Z
135
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T09:03:10Z
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      config: PAN-X.de
      split: validation
      args: PAN-X.de
    metrics:
    - name: F1
      type: f1
      value: 0.8616659101225601
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1329
- F1: 0.8617

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2568        | 1.0   | 525  | 0.1583          | 0.8125 |
| 0.1261        | 2.0   | 1050 | 0.1458          | 0.8473 |
| 0.0823        | 3.0   | 1575 | 0.1329          | 0.8617 |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
Tahsin-Mayeesha/wav2vec2-bn-300m
Tahsin-Mayeesha
2023-07-31T06:51:19Z
19
4
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "openslr_SLR53", "robust-speech-event", "bn", "dataset:openslr", "dataset:SLR53", "dataset:Harveenchadha/indic-text", "doi:10.57967/hf/0939", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- bn
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- openslr_SLR53
- robust-speech-event
datasets:
- openslr
- SLR53
- Harveenchadha/indic-text
metrics:
- wer
- cer
model-index:
- name: Tahsin-Mayeesha/wav2vec2-bn-300m
  results:
  - task:
      type: automatic-speech-recognition
      name: Speech Recognition
    dataset:
      type: openslr
      name: Open SLR
      args: SLR66
    metrics:
    - type: wer
      value: 0.31104373941386626
      name: Test WER
    - type: cer
      value: 0.07263099973420006
      name: Test CER
    - type: wer
      value: 0.17776164652632478
      name: Test WER with lm
    - type: cer
      value: 0.04394092712884769
      name: Test CER with lm
---

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR53 (Bengali) dataset. It achieves the following results on the evaluation set.

Without a language model:
- Wer: 0.3110
- Cer: 0.072

With a 5-gram language model trained on the [indic-text](https://huggingface.co/datasets/Harveenchadha/indic-text/tree/main) dataset:
- Wer: 0.17776
- Cer: 0.04394

Note: 10% of the total 218703 samples have been used for evaluation, so the evaluation set has 21871 examples. Training was stopped after 30k steps. Output predictions are available under the files section.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0

Note: The training and evaluation scripts were modified from https://huggingface.co/chmanoj/xls-r-300m-te and https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event. Bengali speech data was not available from the Common Voice or LibriSpeech multilingual datasets, so OpenSLR53 has been used.

Note 2: A minimum audio duration of 0.1s was used to filter the training data, which excluded maybe 10-20 samples.

# Citation

@misc{tahsin_mayeesha_2023,
  author    = { {Tahsin Mayeesha} },
  title     = { wav2vec2-bn-300m (Revision e10defc) },
  year      = 2023,
  url       = { https://huggingface.co/Tahsin-Mayeesha/wav2vec2-bn-300m },
  doi       = { 10.57967/hf/0939 },
  publisher = { Hugging Face }
}
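
# Usage

A minimal transcription sketch with the standard transformers ASR pipeline. This uses plain CTC decoding; reproducing the LM-boosted numbers above additionally requires a pyctcdecode/kenlm-based processor. The audio path is a placeholder:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Tahsin-Mayeesha/wav2vec2-bn-300m")
print(asr("bengali_sample.wav")["text"])
```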
narvind2003/llama-2-7b-miniguanaco
narvind2003
2023-07-31T06:06:46Z
8
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:mlabonne/guanaco-llama2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-31T05:36:47Z
---
license: mit
datasets:
- mlabonne/guanaco-llama2
---

# Finetuning Llama 2 7B with QLoRA

This model finetunes Llama 2 on 1000 samples from the Guanaco dataset using QLoRA (4-bit quantization plus low-rank adaptation). The Llama 2 model from Meta has several variants; the smallest one, containing 7 billion parameters, has been finetuned here.
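
## Training sketch

A sketch of the usual QLoRA setup with transformers and peft. The base checkpoint and hyperparameters below are illustrative assumptions, not the exact values used for this model, and training-side details such as `prepare_model_for_kbit_training` are omitted:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize the frozen base to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb, device_map="auto"
)
# Low-rank adapters are trained on top of the frozen 4-bit weights.
model = get_peft_model(
    base, LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")
)
```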
nkpz/llama2-22b-chronos-alpaca-experiment1
nkpz
2023-07-31T05:44:46Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T06:20:39Z
---
license: other
---

Update: after a lot of fun experiments, I'm doubtful there is a way for this method to really have a positive impact on the outcome without resources similar to those used to train Llama in the first place. Leaving it up for the sake of mad science, but moving on. Not recommended for general use.

**Llama 2 Chronos 13b x Llama 1 Chronos 33b x Alpaca**

This is a frankenllama model based on the technique in https://huggingface.co/chargoddard/llama2-22b

I built my base 22b model by using https://huggingface.co/Oniichat/llama2-base-chronos-13b-merge as a base, and https://huggingface.co/elinas/chronos-33b as a donor. I then trained a qlora on the Alpaca dataset with the default peft configuration from https://github.com/facebookresearch/llama-recipes/blob/main/quickstart.ipynb

This is the result of baking in that adapter. This configuration only targets `q_proj` and `v_proj` and uses `r=8`. I was expecting to need to add more targets and increase `r` to get significant improvements, but I was surprised by the quality of its context awareness, and I'm starting to think that maybe a 32mb lora is all it takes to get decent results in 22b.

I will keep playing with other peft configurations and see where that gets me next. If anyone wants the chronos 22b base model (requires fine tuning) or the adapter, lmk in community discussions.
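
For reference, a sketch of the adapter configuration described above (the `q_proj`/`v_proj`, `r=8` defaults from the llama-recipes quickstart; the alpha and dropout values are assumed defaults):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,                        # assumed default
    target_modules=["q_proj", "v_proj"],  # only attention q/v projections
    lora_dropout=0.05,                    # assumed default
    bias="none",
    task_type="CAUSAL_LM",
)
```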
kcsteam1/0731_switch_ver1.1
kcsteam1
2023-07-31T05:39:09Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T05:39:03Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.5.0.dev0
pratsy/rl_course_vizdoom_health_gathering_supreme
pratsy
2023-07-31T05:25:17Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-31T04:45:37Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 11.85 +/- 5.17
      name: mean_reward
      verified: false
---

A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r pratsy/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Entry point assumes the sf_examples VizDoom scripts shipped with Sample-Factory.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
# Entry point assumes the sf_examples VizDoom scripts shipped with Sample-Factory.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
reichenbach/whisper-small-dv
reichenbach
2023-07-31T05:02:46Z
88
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dv", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-30T11:12:46Z
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
model-index:
- name: Whisper Small Dv - RCV
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Small Dv - RCV

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 4000

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
himanimaheshwari3/distilbert-base-uncased-finetuned-himani3
himanimaheshwari3
2023-07-31T04:57:32Z
70
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-31T04:56:53Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: himanimaheshwari3/distilbert-base-uncased-finetuned-himani3
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# himanimaheshwari3/distilbert-base-uncased-finetuned-himani3

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -973, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| nan        | nan             | 0     |

### Framework versions

- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.1
- Tokenizers 0.13.3
NasimB/all-base-miss-simple_wikipedia-seed
NasimB
2023-07-31T04:48:13Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-31T01:31:45Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-miss-simple_wikipedia-seed
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# all-base-miss-simple_wikipedia-seed

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3071

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.1584        | 0.34  | 500  | 5.3866          |
| 4.8808        | 0.69  | 1000 | 5.0175          |
| 4.5857        | 1.03  | 1500 | 4.7969          |
| 4.3352        | 1.37  | 2000 | 4.6646          |
| 4.2292        | 1.72  | 2500 | 4.5512          |
| 4.1096        | 2.06  | 3000 | 4.4719          |
| 3.9263        | 2.4   | 3500 | 4.4212          |
| 3.8869        | 2.75  | 4000 | 4.3514          |
| 3.7965        | 3.09  | 4500 | 4.3099          |
| 3.6202        | 3.43  | 5000 | 4.2878          |
| 3.6126        | 3.77  | 5500 | 4.2471          |
| 3.5078        | 4.12  | 6000 | 4.2366          |
| 3.3541        | 4.46  | 6500 | 4.2332          |
| 3.3516        | 4.8   | 7000 | 4.2141          |
| 3.2659        | 5.15  | 7500 | 4.2191          |
| 3.1699        | 5.49  | 8000 | 4.2194          |
| 3.1664        | 5.83  | 8500 | 4.2184          |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
NasimB/wikipedia-rarity-seed
NasimB
2023-07-31T04:40:19Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-30T01:20:08Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: wikipedia-rarity-seed
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wikipedia-rarity-seed

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1172

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3409        | 0.29  | 500   | 5.3428          |
| 5.033         | 0.58  | 1000  | 4.9253          |
| 4.7142        | 0.87  | 1500  | 4.6930          |
| 4.4485        | 1.16  | 2000  | 4.5572          |
| 4.3033        | 1.46  | 2500  | 4.4453          |
| 4.1974        | 1.75  | 3000  | 4.3414          |
| 4.0926        | 2.04  | 3500  | 4.2704          |
| 3.8991        | 2.33  | 4000  | 4.2223          |
| 3.8776        | 2.62  | 4500  | 4.1638          |
| 3.839         | 2.91  | 5000  | 4.1187          |
| 3.649         | 3.2   | 5500  | 4.1116          |
| 3.5929        | 3.49  | 6000  | 4.0826          |
| 3.578         | 3.78  | 6500  | 4.0491          |
| 3.5017        | 4.07  | 7000  | 4.0476          |
| 3.3241        | 4.37  | 7500  | 4.0424          |
| 3.3199        | 4.66  | 8000  | 4.0295          |
| 3.3065        | 4.95  | 8500  | 4.0145          |
| 3.1679        | 5.24  | 9000  | 4.0278          |
| 3.141         | 5.53  | 9500  | 4.0272          |
| 3.1361        | 5.82  | 10000 | 4.0264          |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
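
## Usage

A minimal generation sketch with the standard transformers pipeline (prompt and decoding settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/wikipedia-rarity-seed")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```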
hoang14/chatbot_qlora_20230731_0433
hoang14
2023-07-31T04:38:24Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-31T04:38:23Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.4.0
liuhaotian/llava-llama-2-13b-chat-lightning-gptq
liuhaotian
2023-07-31T04:35:28Z
14
8
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "region:us" ]
text-generation
2023-07-31T03:46:31Z
---
inference: false
---

<br>
<br>

# LLaVA GPTQ Model Card

**NOTE: To use the GPTQ quantized LLaVA checkpoints, you need to use `text-generation-webui`, and the support for LLaMA-2 is WIP. We are working on the PR.**

You can try it out [here](https://github.com/haotian-liu/text-generation-webui).

Instructions and detailed stories here: https://github.com/haotian-liu/LLaVA/issues/310#issuecomment-1657552223

PR: https://github.com/oobabooga/text-generation-webui/pull/3377

These files are GPTQ model files for [LLaVA-LLaMA-2-13B-Chat-Preview](https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview).

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-LLaMA-2-13B-Chat-Preview was trained in July 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use

**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 80K GPT-generated multimodal instruction-following data.

## Evaluation dataset

A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014 and each is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs. We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset. See https://llava-vl.github.io/ for more details.
liuhaotian/llava-llama-2-13b-chat-lightning-preview
liuhaotian
2023-07-31T04:11:25Z
2,069
44
transformers
[ "transformers", "pytorch", "llava", "text-generation", "autotrain_compatible", "region:us" ]
text-generation
2023-07-19T07:38:47Z
---
inference: false
---

<br>
<br>

# LLaVA Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-LLaMA-2-13B-Chat-Preview was trained in July 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use

**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 80K GPT-generated multimodal instruction-following data.

## Evaluation dataset

A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014 and each is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs. We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset. See https://llava-vl.github.io/ for more details.
Tornado24/dreambooth_sd15_jay_lora
Tornado24
2023-07-31T03:40:58Z
28
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-31T03:23:56Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a DSLR photo of a sks jay standing on a large basket of rainbow macarons
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - Tornado24/dreambooth_sd15_jay_lora

This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on "a DSLR photo of a sks jay standing on a large basket of rainbow macarons" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.