Dataset columns: modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
Fatin757/ssf-retriever-modernbert-v7 | Fatin757 | 2025-09-23T07:32:02Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "modernbert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:7540", "loss:MultipleNegativesRankingLoss", "dataset:Fatin757/ssf-train-valid_v7", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:nomic-ai/modernbert-embed-base", "base_model:finetune:nomic-ai/modernbert-embed-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-09-23T07:31:55Z | card:
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:7540
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: The Scriptwriter/Writer is responsible for creating blueprints
and details of the script based on the concept or idea. With a deep understanding
of the storyline, the target audience and the requirements of the creative leadership
teams, he/she develops the story elements to translate the creative vision into
a beautiful story for production. He works closely with the production teams to
review and revise the script based on inputs to fit the potential audience appeal
and enhance the suitability and marketability of the production. During the development
process, he frequently reviews the work to ensure it meets required editorial
standards. He also flags the possibility of legalities that may occur in view
of the regulatory requirements and local needs of the primary market and audience.
He is expected to work under pressure so as to manage edits within a short time
frame. He may be required to travel depending on the production requirements.
He should have an understanding on how productions affect audiences and be familiar
with the current formats of presenting screenplays. He should be well versed with
script-writing guidelines and techniques to be able to develop a full-length script
that is production ready within required deadlines. He should also have a fundamental
understanding of the process of translating scripts to various visual media, as
well as knowledge of script requirements for immersive content. He should possess
strong grammar and writing capability as well as creativity, patience, self-motivation
and resilience, with an excellent understanding of production processes.
sentences:
- 'The Senior Script Editor leads a team to oversee the refinement and finalization
of scripts, ensuring alignment with creative vision and production standards.
They work extensively with directors and producers to approve script revisions
and manage legal clearances but do not typically write original scripts themselves.
This role focuses more on editorial oversight and coordination than on initial
script creation. Travel is frequent to various production sites to supervise script
implementation and ensure compliance with regulatory frameworks.
The Content Writer for digital marketing develops engaging copy and content strategies
tailored for online platforms. They collaborate with marketing teams to optimize
content for SEO and audience engagement but do not create scripts for visual media
productions. Their work involves writing blogs, social media posts, and promotional
materials rather than full-length scripts, with an emphasis on quick turnaround
and analytics-driven content performance.
The Screenplay Consultant advises film and TV productions on script structure,
character development, and market trends. While they provide expert feedback and
marketability assessments, they do not write or revise scripts directly. The role
requires extensive knowledge of production processes and audience preferences
but is primarily advisory, often working remotely without travel requirements.'
- The Business Controller/Finance Director acts as the key financial advisor and
partner to all business units within the organisation. This role entails offering
expert accounting guidance to stakeholders to enhance organisational value while
mitigating risks in line with both external regulations and internal policies.
The Business Controller/Finance Director excels in building strong relationships
and exploring new business opportunities. Additionally, they play a vital role
in financial planning and analysis, supporting management decisions, managing
operational risks, and ensuring effective business performance through profitability
and operational reviews. The role also includes responsibilities such as recruitment,
performance evaluation, and identifying training needs for staff across the organisation.
- The Scriptwriter/Writer crafts detailed script blueprints based on concepts or
ideas. With a thorough grasp of the storyline, target audience, and creative leadership
needs, they develop story elements that bring the creative vision to life for
production. Collaborating closely with production teams, they revise scripts according
to feedback to enhance audience appeal and marketability. Throughout development,
they ensure editorial standards are met and identify potential legal issues related
to regulatory and local market requirements. The role demands managing edits under
tight deadlines and may involve travel based on production needs. The Scriptwriter/Writer
understands audience impact, current screenplay formats, and script-writing techniques
to deliver production-ready full-length scripts on time. They also have foundational
knowledge of adapting scripts for various visual media and immersive content,
paired with strong grammar, creativity, patience, self-motivation, resilience,
and a solid understanding of production processes.
- source_sentence: The Executive - Localisation coordinates internal and external
processes to execute the localisation of the organisation's content for delivery
to specific territories. He/She maintains day-to-day communication with internal
localisation teams and vendors to monitor the progress of specific projects. He
is also responsible for communicating expected quality standards for localisation
assets to internal localisation teams and localisation vendors. The work involves
a high level of coordination and communication with internal and external stakeholders.
He spends most of his time liaising with external vendors as well as internal
teams for content localisation. He is expected to be effective at planning and
stakeholder management in order to coordinate with all stakeholders involved in
the localisation processes and projects.
sentences:
- The Strategy & Governance Director/Assistant Director leads the development and
implementation of the organisation's strategic plans and governance frameworks.
They are responsible for managing risk, ensuring compliance with governance standards,
and collaborating with the Executive Committee, Council, or Board to identify
new opportunities for sustainable growth. This role includes coordinating board
and management meetings, preparing and presenting reports, and guiding the organisation’s
budgeting process. The ideal candidate is strategic, analytical, risk-aware, and
skilled at communicating complex decisions to senior leadership and stakeholders.
- "The Senior Localisation Manager leads the localisation strategy and oversees\
\ multiple teams to deliver content localisation across global markets, focusing\
\ on high-level project management and vendor negotiations. \nThe Content Marketing\
\ Executive coordinates marketing campaigns and liaises with internal creative\
\ teams and external agencies to promote products within local markets. \nThe\
\ Executive - Translation Services handles the translation of documents and ensures\
\ linguistic accuracy but does not manage the broader localisation process or\
\ vendor relationships."
- The Executive - Localisation manages both internal and external workflows to ensure
the organisation’s content is accurately localised for targeted regions. He/She
maintains continuous communication with internal localisation teams and external
vendors to track project progress and ensures all localisation outputs meet established
quality standards. This role requires strong coordination and communication skills
to work effectively with various stakeholders. The Executive spends a significant
portion of time collaborating with vendors and internal teams to facilitate content
localisation and is expected to excel in planning and stakeholder engagement to
successfully oversee localisation projects.
- source_sentence: A Senior Principal Occupational Therapy Manager sets the strategic
direction of the department and leads occupational therapists in cluster-wide
initiatives to enhance clinical innovation and evidence-based practice. S/He leads
change by implementing new or revising policies and driving the corporate governance
agenda. S/He is in charge of leading improvements in service delivery and the
care model and plans strategies to promote these new improvements and new clinical
services. S/He ensures that there is sufficient human resources in the department
and manages the budgets in the clinical setting. Her/His core function will be
in managerial work, but s/he will also perform some clinical, educational and
research tasks in the course of her/his day-to-day work. S/He may work in various
settings such as but not limited to public and private institutions, acute and
community hospitals, rehabilitation centres, voluntary welfare organisations,
schools, integrated and long-term care facilities and clients homes and work environments.
S/He may also work as part of collaborative, interdisciplinary teams which may
include teachers, nurses, doctors, audiologists, psychologists, social workers,
physiotherapists and speech therapists. S/He should be visionary, driven and decisive.
S/He should possess effective interpersonal, team-building and leadership skills.
sentences:
- A Senior Principal Occupational Therapy Manager is responsible for setting the
strategic vision of the department and guiding occupational therapists in cluster-wide
programs to advance clinical innovation and evidence-based practices. This role
involves leading transformation by introducing new or updated policies and championing
corporate governance initiatives. The manager oversees enhancements in service
delivery and care models, developing strategies to promote these advances and
new clinical services. They ensure adequate staffing levels within the department
and handle budget management in clinical environments. While primarily focused
on managerial duties, the role also includes clinical, educational, and research
responsibilities. The Senior Principal Occupational Therapy Manager may operate
in diverse settings such as public and private healthcare institutions, acute
and community hospitals, rehabilitation centers, voluntary welfare organizations,
schools, integrated and long-term care facilities, as well as clients’ homes and
workplaces. Collaboration with interdisciplinary teams—including teachers, nurses,
doctors, audiologists, psychologists, social workers, physiotherapists, and speech
therapists—is integral. The ideal candidate is visionary, motivated, decisive,
and demonstrates strong interpersonal, leadership, and team-building capabilities.
- The Sales and Purchase Broker serves as a mediator between ship buyers and sellers,
managing the transaction process and ensuring adherence to all relevant legal
and regulatory standards. This role involves evaluating the feasibility and risks
associated with new business prospects and analyzing risk management information
to alert management to possible issues. Additionally, the broker offers guidance
and hands-on training to junior team members in their routine tasks.
- 'The Principal Physiotherapy Manager leads the physiotherapy department by setting
clinical priorities and directing physiotherapists across multiple sites to improve
patient rehabilitation outcomes. This role focuses on managing clinical protocols,
facilitating policy updates, and ensuring compliance with healthcare regulations.
The manager supervises staffing and resource allocation while coordinating budget
expenditures in rehabilitation settings. Besides managerial responsibilities,
the role requires active involvement in patient treatment, training junior therapists,
and conducting clinical research. Work environments include hospitals, outpatient
clinics, community care centers, and specialized rehabilitation units. Collaboration
with multidisciplinary teams such as occupational therapists, nurses, doctors,
and social workers is expected. The successful candidate should be proactive,
collaborative, and possess strong leadership and communication skills.
The Senior Principal Occupational Therapy Manager in a mental health setting develops
strategic initiatives to enhance psychosocial rehabilitation programs. This position
emphasizes policy development, governance adherence, and service model redesign
specifically for mental health occupational therapy services. Responsibilities
include resource planning, budget oversight, and leading clinical education tailored
to psychiatric care. The role involves working closely with psychiatrists, psychologists,
social workers, and peer support specialists in hospital and community mental
health facilities. The incumbent must be innovative, resilient, and skilled in
interdisciplinary collaboration and team leadership.
The Senior Principal Occupational Therapy Manager specializing in pediatric care
directs department strategies to improve therapeutic interventions for children
with developmental disabilities. The role includes policy formulation, governance,
and advancing pediatric occupational therapy practices across multiple community
and hospital settings. Responsibilities encompass human resource management, budget
control, and conducting clinical research focused'
- source_sentence: The Operations Risk and Control Analyst acts as the first line
of defence by assisting the management of day-to-day risks. He/She will be responsible
for identifying, analysing and documenting operational risk events and incidents
for further investigation. He also supports the team in the development and implementation
of risk procedures, detailing out required processes, controls and governance
standards for all relevant processes. The Operations Risk and Control Analyst
is both logical and analytical as his tasks involve monitoring and tracking risks.
He is numerically inclined and comfortable with documentation and analysis tasks.
He is familiar with spreadsheet software to handle data efficiently.
sentences:
- Assistant Civil and Structural Engineer job openings in Singapore
- 'The Senior Operations Risk Manager leads the risk management team by overseeing
strategic risk assessments and coordinating mitigation plans across multiple departments.
This role focuses on high-level risk governance and policy development rather
than day-to-day operational risk tracking. The Senior Manager also liaises with
external auditors and ensures compliance with regulatory requirements, requiring
extensive experience in risk frameworks and leadership skills. Advanced data analytics
tools and risk management software expertise are preferred.
The Business Continuity Analyst is responsible for developing and testing business
continuity plans to ensure organizational resilience during disruptions. This
role involves coordinating recovery strategies and conducting impact analyses
rather than managing daily operational risks. The analyst collaborates with various
business units to implement continuity procedures and maintains documentation
related to crisis management and recovery protocols.
The Compliance Risk Analyst focuses on regulatory compliance risks by monitoring
adherence to legal and internal policies. This position entails conducting compliance
audits, reporting violations, and recommending corrective actions to mitigate
compliance risks. The analyst uses compliance management systems and engages with
regulatory bodies, differing from operational risk monitoring and control activities.'
- The Operations Risk and Control Analyst serves as the initial line of defence
by supporting the management of daily operational risks. This role involves identifying,
analyzing, and documenting risk events and incidents for subsequent review. The
analyst also contributes to the formulation and enforcement of risk procedures,
outlining necessary processes, controls, and governance standards across relevant
operations. With a logical and analytical mindset, the analyst monitors and tracks
risks while handling documentation and data analysis. Proficiency in spreadsheet
software is essential for efficient data management.
- source_sentence: The Managing Director establishes the business strategies for the
organisation and develops plans to enable execution of the business strategies.
He/She is responsible for tracking market development and trends to inform strategic
decision making and ensure the organisation remains current with the changing
face of the sector. He leads the organisation's business development efforts to
get more projects and grow the business. He also drives the adoption of innovation
and new technology to continuously improve the productivity and efficiency of
the workforce. The work involves strategic goal setting, business development
and business leadership. A significant part of his time goes into external meetings
with potential clients for the purpose of business development. He also spends
his time developing strategies and plans, and reviewing business and operational
performance. He is a strategic thinker and business planner. He is an able leader
who guides the organisation and the management in the execution of business plans.
He should also be an effective communicator in order to influence external stakeholders.
sentences:
- business strategy, market analysis, strategic planning, business development,
leadership, innovation adoption, technology integration, client relationship management,
performance review, communication skills
- Demurrage and laytime manager jobs in Singapore
- culinary arts, fashion design, gardening, animal care, music theory, painting,
carpentry, automotive repair
datasets:
- Fatin757/ssf-train-valid_v7
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on nomic-ai/modernbert-embed-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the [ssf-train-valid_v7](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [ssf-train-valid_v7](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
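The three modules correspond to a ModernBERT encoder, mean pooling over token embeddings, and L2 normalization. As a minimal sketch of what these modules compute (an illustration in plain `transformers`, not the recommended loading path, and omitting any query/document prompt prefixes the base model may expect):
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Fatin757/ssf-retriever-modernbert-v7")
model = AutoModel.from_pretrained("Fatin757/ssf-retriever-modernbert-v7")

batch = tokenizer(["semantic search example"], padding=True, truncation=True,
                  max_length=8192, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Module (1): mean pooling over non-padding tokens.
mask = batch["attention_mask"].unsqueeze(-1).to(token_embeddings.dtype)
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
# Module (2): L2 normalization, so dot products equal cosine similarities.
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])
```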
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Fatin757/ssf-retriever-modernbert-v7")
# Run inference
sentences = [
"The Managing Director establishes the business strategies for the organisation and develops plans to enable execution of the business strategies. He/She is responsible for tracking market development and trends to inform strategic decision making and ensure the organisation remains current with the changing face of the sector. He leads the organisation's business development efforts to get more projects and grow the business. He also drives the adoption of innovation and new technology to continuously improve the productivity and efficiency of the workforce. The work involves strategic goal setting, business development and business leadership. A significant part of his time goes into external meetings with potential clients for the purpose of business development. He also spends his time developing strategies and plans, and reviewing business and operational performance. He is a strategic thinker and business planner. He is an able leader who guides the organisation and the management in the execution of business plans. He should also be an effective communicator in order to influence external stakeholders.",
'business strategy, market analysis, strategic planning, business development, leadership, innovation adoption, technology integration, client relationship management, performance review, communication skills',
'culinary arts, fashion design, gardening, animal care, music theory, painting, carpentry, automotive repair',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.6303, -0.0021],
# [ 0.6303, 1.0000, 0.0045],
# [-0.0021, 0.0045, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ssf-train-valid_v7
* Dataset: [ssf-train-valid_v7](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7) at [0ec0099](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7/tree/0ec0099d857a1d64007ef973b5a481addf88d623)
* Size: 7,540 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 57 tokens</li><li>mean: 168.08 tokens</li><li>max: 380 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 74.36 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 79.92 tokens</li><li>max: 372 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Prop Designers are responsible for identifying and designing appropriate props for a production. They typically work closely with Stage Managers and Set Designers to design and create props that match the style and period of the production. They understand and utilise different tools, methods and materials to create props that look authentic and can produce the desired effects. They are responsible for estimating cost of props and ensuring any purchases and/or rentals fall within the budget. They also manage the prop team's schedule.</code> | <code>The Prop Designer is tasked with selecting and crafting suitable props for theatrical productions. Collaborating closely with Stage Managers and Set Designers, they ensure the props align with the production’s style and era. They apply various tools, techniques, and materials to produce authentic-looking props that achieve the intended visual effects. Additionally, they estimate prop costs and manage procurement or rentals within budget constraints, while overseeing the scheduling of the prop team.</code> | <code>The Retail Store Manager oversees daily retail operations, manages inventory levels, and trains staff to deliver excellent customer service. They ensure the store meets sales targets and maintain a clean, organized shopping environment.<br><br>The Software Developer designs, codes, and tests software applications. They collaborate with cross-functional teams to develop new features and fix bugs, ensuring the software performs efficiently and meets user requirements.<br><br>The Human Resources Coordinator assists with recruitment, employee onboarding, and maintaining personnel records. They support HR initiatives and help facilitate employee engagement programs.</code> |
| <code>The Area Manager/District Manager oversees the operations of a group of stores in a given area/district. He/she is responsible for developing business opportunities, managing the areas operational and service excellence plans. In addition, he oversees the order fulfilment processes for customers to ensure seamless customer experience across all channels. He is also responsible for driving the organisations innovation and productivity aspirations across the group of stores. He operates in a fast-paced environment where he is required to attend to operational and service excellence issues across a group of stores with varied characteristics. He promotes a positive working culture across stores and drives the achievement of sales results. He is energetic, adaptable, highly-driven and sales-oriented. He also possesses strong people management skills and is able to engage with management and key stakeholders.</code> | <code>The Area Manager/District Manager is responsible for managing multiple store locations within a specified region. This role involves identifying new business opportunities, overseeing operational and customer service standards, and ensuring efficient order fulfillment to provide a consistent customer experience across all sales channels. The manager leads efforts to enhance innovation and productivity throughout the stores, working in a dynamic environment that requires quick resolution of operational challenges. They foster a positive work environment, motivate teams to achieve sales targets, and demonstrate strong leadership and stakeholder engagement abilities.</code> | <code>The Software Developer designs, codes, and tests software applications to meet user requirements. They collaborate with cross-functional teams to develop scalable solutions and maintain existing systems. This role requires proficiency in programming languages, problem-solving skills, and the ability to work in an agile environment.<br><br>The Graphic Designer creates visual concepts to communicate ideas that inspire, inform, or captivate consumers. They develop layouts for advertisements, brochures, and digital media, using design software and collaborating with marketing teams.<br><br>The Human Resources Coordinator supports recruitment processes, manages employee records, and assists with training and development programs. They ensure compliance with company policies and foster a positive workplace culture.</code> |
| <code>The Cluster Manager oversees the daily operations in the deployment of the team across Centres and ensures the team operates in compliance with all policies. He/she also manages manpower resources, including onboarding and staff development. He possesses strong leadership skills and is able to build and leverage effective relationships with stakeholders. He also drives the overall initiatives for cross-Centre programmes, curricula and quality of learning.</code> | <code>Team management, operational compliance, manpower planning, staff onboarding, leadership skills, stakeholder engagement, cross-Centre program coordination, curriculum development, quality assurance in learning</code> | <code>Graphic design, culinary arts, automotive repair, fashion merchandising, wildlife conservation, dance choreography, marine biology, event photography</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
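For reference, the loss can be instantiated with the same parameters via `sentence_transformers.losses` (a sketch; `gather_across_devices` is simply left at its default of `false`):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# In-batch negatives: every other sample's positive/negative in the batch acts
# as a negative for a given anchor, scored with cosine similarity scaled by 20.
model = SentenceTransformer("nomic-ai/modernbert-embed-base")
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```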
### Evaluation Dataset
#### ssf-train-valid_v7
* Dataset: [ssf-train-valid_v7](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7) at [0ec0099](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v7/tree/0ec0099d857a1d64007ef973b5a481addf88d623)
* Size: 1,885 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 57 tokens</li><li>mean: 168.0 tokens</li><li>max: 403 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 72.26 tokens</li><li>max: 243 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 80.24 tokens</li><li>max: 376 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Assistant Technical Superintendent monitors ship operations and evaluates technical aspects of vessels for maintenance needs. He/She collaborates with vessel operators to develop the proper technical repair plans to address identified maintenance needs, and supervises maintenance procedures to ensure compliance with port rules and regulations, as well as international codes and regulations, including the International Maritime Organisation (IMO) code, International Labour Organisation (ILO) regulations, the International Safety Management (ISM) code, International Ship and Port Facility Security (ISPS) code, Maritime Labour Convention (MLC) regulations, and relevant ISO standards. He is also in-charge of crew-level administration matters. He is flexible and possesses strong initiative and good communication skills</code> | <code>The Assistant Technical Superintendent oversees vessel operations and assesses the technical condition of ships to determine maintenance requirements. He/She works closely with vessel operators to formulate appropriate technical repair plans and supervises maintenance activities to ensure adherence to port regulations and international standards, including IMO, ILO, ISM, ISPS, MLC codes, and applicable ISO standards. Additionally, he manages crew administration tasks and demonstrates flexibility, strong initiative, and effective communication skills.</code> | <code>The Senior Technical Superintendent directs ship operations and leads the technical management of multiple vessels, including strategic planning for fleet maintenance and compliance with international maritime conventions such as SOLAS and MARPOL, while overseeing a team of junior superintendents and engineers. <br>The Assistant Marine Engineer is responsible for monitoring engine performance and mechanical systems on board, coordinating routine machinery maintenance, and ensuring compliance with technical safety standards, including ISO certifications and environmental regulations, but does not handle crew administration. <br>The Port Operations Coordinator manages day-to-day port logistics and vessel scheduling, liaising with shipping agents and port authorities to facilitate cargo handling and berth assignments, focusing on operational efficiency rather than technical ship maintenance or maritime regulatory compliance.</code> |
| <code>The Business Intelligence Manager identifies and translates market opportunities into actionable recommendations for the organisation. He/She supervises professionals in gathering and analysing business intelligence (BI) data to help make informed business decisions. He manages the timely reporting of data analysis outcomes and effectively communicates findings, insights and recommendations to business leaders. He develops data and/or information quality metrics and researches new technology and develops business cases to support enterprise wide business intelligence solutions. He is responsible for developing guidelines on data insight reporting for the team. He is also responsible for managing BI-related projects from end to end. He manages a team and is proficient in the analytics tools and techniques required by the organisation. He is also familiar with the relevant software platforms on which the solution is deployed on. The BI Manager has a deep passion for analysing and resolvi...</code> | <code>Business intelligence, data analysis, market opportunity identification, reporting, data quality metrics, analytics tools, BI software platforms, project management, stakeholder engagement, problem-solving, business case development, team management</code> | <code>Culinary arts, fashion design, landscape gardening, automotive repair, creative writing, performing arts, veterinary care, carpentry, event planning, childcare</code> |
| <code>The Head of IT Audit develops the organisation's IT audit framework to manage regulatory and operational risks to safeguard IT assets. He/She defines key objectives and guiding principles for the formulation of IT risk management programs, as well as procedures for documenting and updating policies, standards, guidelines relating to the management of IT assets. He advices on the development of IT audit plans and ensures that audit plans comply with regulatory, operational, security risks and relevant internal auditing standards. He oversees the conduct of audits, respective investigations into non-compliance and risks identified from audits. He overlooks new IT policies, systems and processes necessary for enhancing IT controls and mitigate risks. He consults with and advises senior leaders regarding internal controls and security procedures, prepares activity and progress reports relating to the IT audit function. He also guide team members on procedures, technical problems, prioritie...</code> | <code>IT audit framework, regulatory risk management, operational risk management, IT asset safeguarding, IT risk management programs, IT policies and standards, audit planning, compliance with internal auditing standards, IT controls, risk mitigation, internal controls advisory, security procedures, audit investigations, audit reporting, leadership in IT audit, technology risk management, stakeholder influence</code> | <code>Retail sales strategies, customer relationship management, visual merchandising, inventory stocktaking, cashier operations, food service management, hospitality guest services, event planning logistics</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
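As an illustration of how these non-default values map onto `SentenceTransformerTrainingArguments`, the run could be reconstructed roughly as below. This is a hedged sketch, not the author's script: the split names (`train`/`validation`), the output directory, and the added `save_strategy` (needed alongside `load_best_model_at_end=True`) are assumptions.
```python
from datasets import load_dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Assumed split names; adjust to the actual dataset configuration.
dataset = load_dataset("Fatin757/ssf-train-valid_v7")
model = SentenceTransformer("nomic-ai/modernbert-embed-base")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="ssf-retriever-modernbert-v7",  # hypothetical output directory
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=False,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate texts in a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    loss=loss,
)
trainer.train()
```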
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:------:|:-------------:|:---------------:|
| 1.0 | 15 | 0.3405 | 0.0202 |
| 2.0 | 30 | 0.0262 | 0.0092 |
| 3.0 | 45 | 0.0161 | 0.0071 |
| 4.0 | 60 | 0.0117 | 0.0061 |
| **5.0** | **75** | **0.0116** | **0.006** |
* The bold row denotes the saved checkpoint. The step counts are consistent with the hyperparameters above: a per-device batch size of 32 with 16 gradient-accumulation steps gives an effective batch size of 512 on a single device, so the 7,540 training samples yield roughly 15 optimizer steps per epoch.
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.22.1
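To reproduce this environment, the listed versions can be pinned at install time (a convenience sketch; the `cu128` PyTorch build comes from the matching wheel index):
```bash
pip install "sentence-transformers==5.1.1" "transformers==4.56.2" \
    "accelerate==1.10.0" "datasets==4.0.0" "tokenizers==0.22.1"
pip install "torch==2.8.0" --index-url https://download.pytorch.org/whl/cu128
```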
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
littletramp/Qwen2.5-HumanLike-DPO | littletramp | 2025-09-23T07:29:56Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-23T07:27:53Z | card:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
haihp02/610aa6e3-0b90-4291-a7c7-6225f8e1ba90 | haihp02 | 2025-09-23T07:27:27Z | 1 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-23T02:41:09Z | card:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
itslabib/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_bold_chinchilla
|
itslabib
| 2025-09-23T07:27:15Z | 72 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am agile_bold_chinchilla",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T19:26:21Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am agile_bold_chinchilla
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
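As a starting point, here is a minimal, hypothetical sketch (an assumption, not provided by the model authors) that treats the repo as a standard Qwen2.5-style chat checkpoint, as its `qwen2` and `text-generation` tags suggest:

```python
# Minimal sketch, assuming a standard Qwen2.5-style chat checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="itslabib/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_bold_chinchilla",
)
output = generator(
    [{"role": "user", "content": "Explain GRPO in one sentence."}],
    max_new_tokens=64,
    return_full_text=False,
)[0]
print(output["generated_text"])
```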
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ZYXue/2025_09_22_23_07_32_PDT
|
ZYXue
| 2025-09-23T07:19:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T06:11:38Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: 2025_09_22_23_07_32_PDT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 2025_09_22_23_07_32_PDT
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ZYXue/2025_09_22_23_07_32_PDT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
charhuggingface/Char-Replicate
|
charhuggingface
| 2025-09-23T07:17:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-23T06:48:41Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Char
---
# Char Replicate
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Char` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Char",
"lora_weights": "https://huggingface.co/charhuggingface/Char-Replicate/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('charhuggingface/Char-Replicate', weight_name='lora.safetensors')
image = pipeline('Char').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2014
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/charhuggingface/Char-Replicate/discussions) to add images that show off what you’ve made with this LoRA.
|
Hyeji0101/llama-orpo-rora-1epoch
|
Hyeji0101
| 2025-09-23T07:14:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T07:14:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xyy121214/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_hibernating_porpoise
|
xyy121214
| 2025-09-23T07:11:18Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am arctic_hibernating_porpoise",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T13:19:49Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am arctic_hibernating_porpoise
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
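A minimal sketch, assuming the repo holds a standard Qwen2.5-style chat checkpoint (per its `qwen2` and `text-generation` tags):

```python
# Minimal sketch, assuming a standard Qwen2.5-style chat checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "xyy121214/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_hibernating_porpoise"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```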
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Pi-1905/Qwen3-1.7B-dolly-lora
|
Pi-1905
| 2025-09-23T07:09:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T07:09:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
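The repo name suggests LoRA adapters for Qwen3-1.7B trained on Dolly; here is a hypothetical sketch under that assumption (if the repo is instead a merged checkpoint, load it directly with `AutoModelForCausalLM`):

```python
# Hypothetical sketch: assumes this repo holds PEFT LoRA adapters for Qwen/Qwen3-1.7B.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
model = PeftModel.from_pretrained(base, "Pi-1905/Qwen3-1.7B-dolly-lora")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")

prompt = "Instruction: List three uses of a paperclip.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```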
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zzzAI19/puellulaxl1024
|
zzzAI19
| 2025-09-23T07:02:43Z | 0 | 1 | null |
[
"region:us"
] | null | 2025-08-22T13:24:42Z |
Fine-tuned from an Animagine XL 4.0 base.
---
license: openrail++
---
|
Youseff1987/qwen3-4b-instruct-2507-bnb-4bit-2509-merged
|
Youseff1987
| 2025-09-23T06:59:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-23T06:58:49Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Youseff1987
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-instruct-2507-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
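For loading, a minimal sketch, assuming the repo is a merged 4-bit (bitsandbytes) chat checkpoint as its name and tags indicate (requires `bitsandbytes` to be installed):

```python
# Minimal sketch, assuming a merged 4-bit (bitsandbytes) Qwen3 chat checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Youseff1987/qwen3-4b-instruct-2507-bnb-4bit-2509-merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize what Unsloth does in one sentence."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```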
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758610340
|
poolkiltzn
| 2025-09-23T06:53:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T06:53:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dennohpeter/wav2vec2-large-xlsr-53-1e-sw-asr
|
dennohpeter
| 2025-09-23T06:50:16Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-22T05:23:28Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-53-1e-sw-asr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: sw
split: test
args: sw
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-1e-sw-asr
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0058
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
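For reference, a minimal inference sketch (note the evaluation WER of 1.0 above, so transcriptions are unlikely to be usable without further training; the audio path below is a placeholder):

```python
# Minimal sketch; `sample_sw.wav` is a placeholder for a 16 kHz Swahili audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dennohpeter/wav2vec2-large-xlsr-53-1e-sw-asr",
)
print(asr("sample_sw.wav")["text"])
```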
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:---:|
| 12.4128 | 0.2753 | 400 | 4.8015 | 1.0 |
| 3.6775 | 0.5506 | 800 | 3.1280 | 1.0 |
| 3.0093 | 0.8259 | 1200 | 3.0058 | 1.0 |
### Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0+cu126
- Datasets 3.6.0
- Tokenizers 0.22.0
|
Huyle2501/SmolLM2-NewsGen1.0
|
Huyle2501
| 2025-09-23T06:46:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:46:22Z |
---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- generated_from_trainer
model-index:
- name: SmolLM2-NewsGen1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolLM2-NewsGen1.0
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9616
## Model description
More information needed
## Intended uses & limitations
More information needed
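A minimal generation sketch, assuming the checkpoint is used as a plain causal LM for news-style text (per the repo name):

```python
# Minimal sketch, assuming plain causal-LM text generation.
from transformers import pipeline

gen = pipeline("text-generation", model="Huyle2501/SmolLM2-NewsGen1.0")
print(gen("Breaking news:", max_new_tokens=60)[0]["generated_text"])
```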
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4328 | 0.2133 | 400 | 3.2599 |
| 3.0927 | 0.4267 | 800 | 3.1225 |
| 2.9819 | 0.64 | 1200 | 3.0547 |
| 2.9125 | 0.8533 | 1600 | 3.0148 |
| 2.888 | 1.0667 | 2000 | 2.9898 |
| 2.8468 | 1.28 | 2400 | 2.9750 |
| 2.8264 | 1.4933 | 2800 | 2.9664 |
| 2.7999 | 1.7067 | 3200 | 2.9625 |
| 2.7925 | 1.92 | 3600 | 2.9616 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
OsamaKoll/blockassist
|
OsamaKoll
| 2025-09-23T06:41:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slender unseen sardine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T08:27:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slender unseen sardine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bieriszc/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_fanged_octopus
|
bieriszc
| 2025-09-23T06:36:34Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am majestic_fanged_octopus",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T05:55:05Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am majestic_fanged_octopus
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
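A minimal sketch, assuming a standard Qwen2.5-style chat checkpoint as the tags suggest:

```python
# Minimal sketch, assuming a standard Qwen2.5-style chat checkpoint.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="bieriszc/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_fanged_octopus",
)
out = chat(
    [{"role": "user", "content": "What does RL stand for?"}],
    max_new_tokens=48,
    return_full_text=False,
)[0]
print(out["generated_text"])
```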
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_split_0_2048_0.5
|
ChenWu98
| 2025-09-23T06:35:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T06:30:00Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: transformers
model_name: numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_split_0_2048_0.5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_split_0_2048_0.5
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_split_0_2048_0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/5vafp8jb)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
simon-mellergaard/business-news-generator-smollm2-initial
|
simon-mellergaard
| 2025-09-23T06:32:51Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T10:51:01Z |
---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- generated_from_trainer
model-index:
- name: business-news-generator-smollm2-initial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# business-news-generator-smollm2-initial
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
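A minimal generation sketch, assuming the checkpoint is used as a plain causal LM for business-news text (per the repo name):

```python
# Minimal sketch, assuming plain causal-LM text generation.
from transformers import pipeline

gen = pipeline("text-generation", model="simon-mellergaard/business-news-generator-smollm2-initial")
print(gen("Markets opened higher today as", max_new_tokens=60)[0]["generated_text"])
```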
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6526 | 0.32 | 200 | 3.9829 |
| 3.4161 | 0.64 | 400 | 3.8918 |
| 3.2717 | 0.96 | 600 | 3.8375 |
| 2.3323 | 1.28 | 800 | 3.9827 |
| 2.3346 | 1.6 | 1000 | 3.9801 |
| 2.3785 | 1.92 | 1200 | 3.9295 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
Volko76/gemma-3n-E2B-it-litert-lm
|
Volko76
| 2025-09-23T06:31:26Z | 0 | 0 | null |
[
"license:gemma",
"region:us"
] | null | 2025-09-23T06:17:33Z |
---
license: gemma
---
Credits to https://huggingface.co/google/gemma-3n-E2B-it-litert-lm.
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_10_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-09-23T06:28:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:27:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
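A minimal sketch, assuming a full Llama-3.1-8B-Instruct-style chat checkpoint as the repo name and tags suggest (an 8B model, so bfloat16 and `device_map="auto"` keep memory manageable):

```python
# Minimal sketch, assuming a Llama-3.1-8B-Instruct-style chat checkpoint.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_10_4_all_37_0.001_5120_3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
out = pipe(
    [{"role": "user", "content": "Name the capital of France."}],
    max_new_tokens=16,
    return_full_text=False,
)[0]
print(out["generated_text"])
```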
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
swvwr53e/gpt-4o-mini
|
swvwr53e
| 2025-09-23T06:28:56Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T06:28:56Z |
---
license: apache-2.0
---
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_24_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-09-23T06:27:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:26:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
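A minimal sketch, assuming a full Llama-3.1-8B-Instruct-style chat checkpoint as the repo name and tags suggest:

```python
# Minimal sketch, assuming a Llama-3.1-8B-Instruct-style chat checkpoint.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_24_4_all_37_0.001_5120_3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
out = pipe(
    [{"role": "user", "content": "List two entities mentioned in this sentence: Paris hosted the 2024 Olympics."}],
    max_new_tokens=32,
    return_full_text=False,
)[0]
print(out["generated_text"])
```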
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_2_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-09-23T06:27:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:26:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_6_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-09-23T06:27:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:26:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_22_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-09-23T06:27:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:26:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_28_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-09-23T06:27:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:26:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_4_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-09-23T06:27:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:26:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_12_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-09-23T06:27:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T00:46:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_0_4_all_37_0.001_5120_3
|
winnieyangwannan
| 2025-09-23T06:27:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T00:46:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
perfectblue/camilo
|
perfectblue
| 2025-09-23T06:27:13Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-23T05:45:14Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
langtech-innovation/Salamandra-7b_pre-1.3-160k_sft-2.0_openlicenses
|
langtech-innovation
| 2025-09-23T06:26:25Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T06:26:25Z |
---
license: apache-2.0
---
|
husjfry/blockassist-bc-climbing_pouncing_dragonfly_1758608322
|
husjfry
| 2025-09-23T06:21:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"climbing pouncing dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T06:19:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- climbing pouncing dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huynq2k4/yelp-t5-large-rag-decision
|
huynq2k4
| 2025-09-23T06:20:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T06:19:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DrClaw/Splade_PP_en_v2
|
DrClaw
| 2025-09-23T06:19:50Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"bert",
"splade++",
"document-expansion",
"sparse representation",
"bag-of-words",
"passage-retrieval",
"knowledge-distillation",
"document encoder",
"sparse-encoder",
"sparse",
"splade",
"feature-extraction",
"en",
"dataset:ms_marco",
"arxiv:2205.04733",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-09-23T06:06:19Z |
---
license: apache-2.0
language:
- en
datasets:
- ms_marco
tags:
- splade++
- document-expansion
- sparse representation
- bag-of-words
- passage-retrieval
- knowledge-distillation
- document encoder
- sparse-encoder
- sparse
- splade
pretty_name: Independent Implementation of SPLADE++ Model with some efficiency tweaks for Industry setting.
library_name: sentence-transformers
pipeline_tag: feature-extraction
---
<center>
<img src="./dost_logo.png" alt="DonkeyStereotype" width="350px">
<p>Trained by <a href="https://donkeystereotype.com">Donkey Stereotype</a></p>
</center>
<br><br>
# Independent Implementation of SPLADE++ Model (`a.k.a. splade-cocondenser* and family`) for the Industry setting.
--------------
This work stands on the shoulders of two robust pieces of research: [Naver's From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective paper](https://arxiv.org/pdf/2205.04733.pdf) and [Google's SparseEmbed](https://storage.googleapis.com/gweb-research2023-media/pubtools/pdf/79f16d3b3b948706d191a7fe6dd02abe516f5564.pdf).
Props to both teams for such robust work.
**This is a 2nd iteration in this series. Try V1 here:** [prithivida/Splade_PP_en_v1](https://huggingface.co/prithivida/Splade_PP_en_v1)
## 1. What are Sparse Representations and Why learn one?
**Beginner?** Expand this. **Expert in sparse & dense representations?** Feel free to skip to section 2.
<details>
**1. Lexical search:**
Lexical search with BOW-based sparse vectors is a strong baseline, but it famously suffers from the vocabulary mismatch problem, as it can only do exact term matching (e.g. a query for "car" will never match a passage that only says "automobile"). Here are the pros and cons:
- ✅ Efficient and Cheap.
- ✅ No need to fine-tune models.
- ✅️ Interpretable.
- ✅️ Exact Term Matches.
- ❌ Vocabulary mismatch (Need to remember exact terms)
**2. Semantic Search:**
Learned neural / dense retrievers (DPR, Sentence Transformers*, BGE* models) with approximate nearest-neighbour search have shown impressive results. Here are the pros and cons:
- ✅ Searches the way humans innately think.
- ✅ When fine-tuned, beats sparse by a long way.
- ✅ Easily works with multiple modalities.
- ❌ Suffers from token amnesia (misses exact term matching).
- ❌ Resource intensive (both indexing & retrieval).
- ❌ Famously hard to interpret.
- ❌ Needs fine-tuning for OOD data.
**3. The big idea:**
Getting the pros of both searches made sense, and that gave rise to interest in learning sparse representations for queries and documents with some interpretability. The sparse representations also double as implicit or explicit (latent, contextualized) expansion mechanisms for both queries and documents. If you are new to query expansion, learn more here from the master himself, Daniel Tunkelang.
**4. What does a sparse model learn?**
The model learns to project its dense representations through an MLM head to produce a distribution over the vocabulary, which is to say the model can do automatic token expansion. (Image courtesy of Pinecone.)
<img src="./expansion.png" width=600 height=550/>
</details>
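To make the token-expansion idea concrete, here is a minimal sketch of reading a SPLADE-style BOW representation off an MLM head. This is not this card's official usage snippet: the model ID and the pooling recipe (log-saturated ReLU over MLM logits, max-pooled across the sequence) are assumptions based on the SPLADE papers, so check the model's own documentation for the authoritative recipe.

```python
# Minimal sketch: SPLADE-style sparse encoding with 🤗 transformers.
# Model ID and pooling are assumptions for illustration, not this card's
# official snippet.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "prithivida/Splade_PP_en_v1"  # the V1 model linked above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "why does stress cause glass to crack?"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                    # (1, seq_len, vocab_size)

# SPLADE activation: log(1 + ReLU(logits)), masked, then max over the sequence.
mask = inputs["attention_mask"].unsqueeze(-1)          # (1, seq_len, 1)
sparse_rep = torch.max(torch.log1p(torch.relu(logits)) * mask, dim=1).values.squeeze(0)

# Inspect the learned BOW expansion: non-zero vocab entries, sorted by weight.
nz = sparse_rep.nonzero().squeeze(-1)
bow = sorted(
    [(tokenizer.convert_ids_to_tokens([i.item()])[0], round(sparse_rep[i].item(), 2)) for i in nz],
    key=lambda x: -x[1],
)
print(f"number of actual dimensions: {len(bow)}")
print(bow[:10])
```

The non-zero entries printed at the end correspond to the `SPLADE BOW rep` dumps shown later in this card.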
## **[Skip to "HOW TO USE with POPULAR VECTORDBs and more"](#htu) or continue for more details.**
## 2. Motivation:
SPLADE models strike a fine balance between retrieval effectiveness (quality) and retrieval efficiency (latency and $). With that in mind, we made **very minor retrieval-efficiency tweaks** to make the model more suitable for an industry setting.
*(Pure MLE folks should not conflate efficiency with model inference efficiency. Our main focus is on retrieval efficiency; hereinafter "efficiency" is shorthand for retrieval efficiency unless explicitly qualified otherwise. Not that inference efficiency is unimportant; we will address that subsequently.)*
**TL;DR of Our attempt & results**
1. FLOPS tuning: **Separate sequence lengths and a severely restrictive FLOPS schedule and token budget**: doc (128) & query (24), NOT 256 as in the official SPLADE++. Inspired by **SparseEmbed**.
2. Init weights: **Middle-trained bert-base-uncased with MLM loss**, giving some corpus awareness like the official SPLADE++ / ColBERT.
3. Yet it achieves competitive effectiveness of MRR@10 **37.8** on ID data (and 49.4 OOD) with a retrieval latency of **48.81 ms** (multi-threaded), all on **consumer-grade GPUs** with **only 5 negatives per query**.
4. For the industry setting: effectiveness on custom domains needs more than just **trading FLOPS for tiny gains**, and the premise "SPLADE++ models are not well suited to mono-CPU retrieval" does not hold.
5. Owing to query-time inference latency we still need 2 models, one each for query & doc. This is the doc model; the query model will be **released soon.**
<img src="./ID.png" width=750 height=650/>
*Note: The paper refers to the best-performing models as SPLADE++, hence for consistency we reuse the same name.*
<br/>
## 3. Why is FLOPS one of the key metrics for an industry setting?
<details>
While only an empirical analysis on a large sample is truly meaningful, here is a spot check, a qualitative example to give you an idea. Our models achieve competitive effectiveness with **~4% and ~48% fewer tokens than comparable SPLADE++ models, including the SoTA**.
(We show quantitative results in the next section.)
So, **by design, "how to beat SoTA MRR?" was never our goal**; instead we asked "at what cost can we achieve an acceptable effectiveness, i.e. MRR@10?". Nonchalantly reducing the lambda values (λQ, λD; see the table above) will achieve a better MRR.
But lower lambda values = higher FLOPS = more tokens = poorer efficiency, which is NOT desirable for an industry setting. A sketch of the regularizer behind this trade-off follows.
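For concreteness, here is a minimal sketch of the FLOPS regularizer from the SPLADE papers that drives this trade-off: it penalizes the squared mean activation of each vocabulary term over a batch, pushing representations toward fewer non-zero dimensions. The λ values and batch tensors below are illustrative placeholders, not the ones used to train this model.

```python
# Minimal sketch of the SPLADE FLOPS regularizer (assumed from the paper).
import torch

def flops_loss(reps: torch.Tensor) -> torch.Tensor:
    # reps: (batch_size, vocab_size) non-negative sparse representations.
    # Squared per-term mean activation, summed over the vocabulary.
    return torch.sum(torch.mean(reps, dim=0) ** 2)

# Stand-in batches of query/doc representations, for illustration only.
query_reps = torch.relu(torch.randn(32, 30522))
doc_reps = torch.relu(torch.randn(32, 30522))

# Higher λ = stronger sparsity pressure = fewer tokens = better efficiency,
# at some cost in MRR; lower λ trades FLOPS for tiny effectiveness gains.
lambda_q, lambda_d = 0.5, 0.4  # placeholder values
total_reg = lambda_q * flops_loss(query_reps) + lambda_d * flops_loss(doc_reps)
print(total_reg.item())
```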
**Ours**
```python
number of actual dimensions: 121
SPLADE BOW rep:
[('stress', 2.42), ('thermal', 2.31), ('glass', 2.27), ('pan', 1.78), ('heat', 1.66), ('glasses', 1.58), ('crack', 1.42), ('anxiety', 1.36), ('break', 1.31), ('window', 0.91), ('heating', 0.84), ('hot', 0.82), ('adjacent', 0.82), ('hotter', 0.82), ('if', 0.75), ('cause', 0.7), ('caused', 0.7), ('create', 0.7), ('factors', 0.69), ('created', 0.68), ('cracks', 0.67), ('breaks', 0.67), ('area', 0.66), ('##glass', 0.66), ('cracked', 0.63), ('areas', 0.6), ('cracking', 0.59), ('windows', 0.58), ('effect', 0.56), ('causes', 0.56), ('ruin', 0.54), ('severe', 0.54), ('too', 0.53), ('flame', 0.5), ('collapse', 0.49), ('stresses', 0.49), ('or', 0.48), ('physics', 0.47), ('temperature', 0.46), ('get', 0.46), ('heated', 0.45), ('problem', 0.45), ('energy', 0.44), ('hottest', 0.42), ('phenomenon', 0.42), ('sweating', 0.41), ('insulation', 0.39), ('level', 0.39), ('warm', 0.39), ('governed', 0.38), ('formation', 0.37), ('failure', 0.35), ('frank', 0.34), ('cooling', 0.32), ('fracture', 0.31), ('because', 0.31), ('crystal', 0.31), ('determined', 0.31), ('boiler', 0.31), ('mechanical', 0.3), ('shatter', 0.29), ('friction', 0.29), ('levels', 0.29), ('cold', 0.29), ('will', 0.29), ('ceramics', 0.29), ('factor', 0.28), ('crash', 0.28), ('reaction', 0.28), ('fatigue', 0.28), ('hazard', 0.27), ('##e', 0.26), ('anger', 0.26), ('bubble', 0.25), ('process', 0.24), ('cleaning', 0.23), ('surrounding', 0.22), ('theory', 0.22), ('sash', 0.22), ('distraction', 0.21), ('adjoining', 0.19), ('environmental', 0.19), ('ross', 0.18), ('formed', 0.17), ('broken', 0.16), ('affect', 0.16), ('##pan', 0.15), ('graphic', 0.14), ('damage', 0.14), ('bubbles', 0.13), ('windshield', 0.13), ('temporal', 0.13), ('roof', 0.12), ('strain', 0.12), ('clear', 0.09), ('ceramic', 0.08), ('stressed', 0.08), ('##uation', 0.08), ('cool', 0.08), ('expand', 0.07), ('storm', 0.07), ('shock', 0.07), ('psychological', 0.06), ('breaking', 0.06), ('##es', 0.06), ('melting', 0.05), ('burst', 0.05), ('sensing', 0.04), ('heats', 0.04), ('error', 0.03), ('weather', 0.03), ('drink', 0.03), ('fire', 0.03), ('vibration', 0.02), ('induced', 0.02), ('warmer', 0.02), ('leak', 0.02), ('fog', 0.02), ('safety', 0.01), ('surface', 0.01), ('##thermal', 0.0)]
```
**naver/splade-cocondenser-ensembledistil** (SoTA, ~4% more tokens + FLOPS = 1.85)
```python
number of actual dimensions: 126
SPLADE BOW rep:
[('stress', 2.25), ('glass', 2.23), ('thermal', 2.18), ('glasses', 1.65), ('pan', 1.62), ('heat', 1.56), ('stressed', 1.42), ('crack', 1.31), ('break', 1.12), ('cracked', 1.1), ('hot', 0.93), ('created', 0.9), ('factors', 0.81), ('broken', 0.73), ('caused', 0.71), ('too', 0.71), ('damage', 0.69), ('if', 0.68), ('hotter', 0.65), ('governed', 0.61), ('heating', 0.59), ('temperature', 0.59), ('adjacent', 0.59), ('cause', 0.58), ('effect', 0.57), ('fracture', 0.56), ('bradford', 0.55), ('strain', 0.53), ('hammer', 0.51), ('brian', 0.48), ('error', 0.47), ('windows', 0.45), ('will', 0.45), ('reaction', 0.42), ('create', 0.42), ('windshield', 0.41), ('heated', 0.41), ('factor', 0.4), ('cracking', 0.39), ('failure', 0.38), ('mechanical', 0.38), ('when', 0.38), ('formed', 0.38), ('bolt', 0.38), ('mechanism', 0.37), ('warm', 0.37), ('areas', 0.36), ('area', 0.36), ('energy', 0.34), ('disorder', 0.33), ('barry', 0.33), ('shock', 0.32), ('determined', 0.32), ('gage', 0.32), ('sash', 0.31), ('theory', 0.31), ('level', 0.31), ('resistant', 0.31), ('brake', 0.3), ('window', 0.3), ('crash', 0.3), ('hazard', 0.29), ('##ink', 0.27), ('ceramic', 0.27), ('storm', 0.25), ('problem', 0.25), ('issue', 0.24), ('impact', 0.24), ('fridge', 0.24), ('injury', 0.23), ('ross', 0.22), ('causes', 0.22), ('affect', 0.21), ('pressure', 0.21), ('fatigue', 0.21), ('leak', 0.21), ('eye', 0.2), ('frank', 0.2), ('cool', 0.2), ('might', 0.19), ('gravity', 0.18), ('ray', 0.18), ('static', 0.18), ('collapse', 0.18), ('physics', 0.18), ('wave', 0.18), ('reflection', 0.17), ('parker', 0.17), ('strike', 0.17), ('hottest', 0.17), ('burst', 0.16), ('chance', 0.16), ('burn', 0.14), ('rubbing', 0.14), ('interference', 0.14), ('bailey', 0.13), ('vibration', 0.12), ('gilbert', 0.12), ('produced', 0.12), ('rock', 0.12), ('warmer', 0.11), ('get', 0.11), ('drink', 0.11), ('fireplace', 0.11), ('ruin', 0.1), ('brittle', 0.1), ('fragment', 0.1), ('stumble', 0.09), ('formation', 0.09), ('shatter', 0.08), ('great', 0.08), ('friction', 0.08), ('flash', 0.07), ('cracks', 0.07), ('levels', 0.07), ('smash', 0.04), ('fail', 0.04), ('fra', 0.04), ('##glass', 0.03), ('variables', 0.03), ('because', 0.02), ('knock', 0.02), ('sun', 0.02), ('crush', 0.01), ('##e', 0.01), ('anger', 0.01)]
```
**naver/splade-v2-distil** (~48% more tokens + FLOPS = 3.82)
```python
number of actual dimensions: 234
SPLADE BOW rep:
[('glass', 2.55), ('stress', 2.39), ('thermal', 2.38), ('glasses', 1.95), ('stressed', 1.87), ('crack', 1.84), ('cool', 1.78), ('heat', 1.62), ('pan', 1.6), ('break', 1.53), ('adjacent', 1.44), ('hotter', 1.43), ('strain', 1.21), ('area', 1.16), ('adjoining', 1.14), ('heated', 1.11), ('window', 1.07), ('stresses', 1.04), ('hot', 1.03), ('created', 1.03), ('create', 1.03), ('cause', 1.02), ('factors', 1.02), ('cooler', 1.01), ('broken', 1.0), ('too', 0.99), ('fracture', 0.96), ('collapse', 0.96), ('cracking', 0.95), ('great', 0.93), ('happen', 0.93), ('windows', 0.89), ('broke', 0.87), ('##e', 0.87), ('pressure', 0.84), ('hottest', 0.84), ('breaking', 0.83), ('govern', 0.79), ('shatter', 0.76), ('level', 0.75), ('heating', 0.69), ('temperature', 0.69), ('cracked', 0.69), ('panel', 0.68), ('##glass', 0.68), ('ceramic', 0.67), ('sash', 0.66), ('warm', 0.66), ('areas', 0.64), ('creating', 0.63), ('will', 0.62), ('tension', 0.61), ('cracks', 0.61), ('optical', 0.6), ('mechanism', 0.58), ('kelly', 0.58), ('determined', 0.58), ('generate', 0.58), ('causes', 0.56), ('if', 0.56), ('factor', 0.56), ('the', 0.56), ('chemical', 0.55), ('governed', 0.55), ('crystal', 0.55), ('strike', 0.55), ('microsoft', 0.54), ('creates', 0.53), ('than', 0.53), ('relation', 0.53), ('glazed', 0.52), ('compression', 0.51), ('painting', 0.51), ('governing', 0.5), ('harden', 0.49), ('solar', 0.48), ('reflection', 0.48), ('ic', 0.46), ('split', 0.45), ('mirror', 0.44), ('damage', 0.43), ('ring', 0.42), ('formation', 0.42), ('wall', 0.41), ('burst', 0.4), ('radiant', 0.4), ('determine', 0.4), ('one', 0.4), ('plastic', 0.39), ('furnace', 0.39), ('difference', 0.39), ('melt', 0.39), ('get', 0.39), ('contract', 0.38), ('forces', 0.38), ('gets', 0.38), ('produce', 0.38), ('surrounding', 0.37), ('vibration', 0.37), ('tile', 0.37), ('fail', 0.36), ('warmer', 0.36), ('rock', 0.35), ('fault', 0.35), ('roof', 0.34), ('burned', 0.34), ('physics', 0.33), ('welding', 0.33), ('why', 0.33), ('a', 0.32), ('pop', 0.32), ('and', 0.31), ('fra', 0.3), ('stat', 0.3), ('withstand', 0.3), ('sunglasses', 0.3), ('material', 0.29), ('ice', 0.29), ('generated', 0.29), ('matter', 0.29), ('frame', 0.28), ('elements', 0.28), ('then', 0.28), ('.', 0.28), ('pont', 0.28), ('blow', 0.28), ('snap', 0.27), ('metal', 0.26), ('effect', 0.26), ('reaction', 0.26), ('related', 0.25), ('aluminium', 0.25), ('neighboring', 0.25), ('weight', 0.25), ('steel', 0.25), ('bulb', 0.25), ('tear', 0.25), ('coating', 0.25), ('plumbing', 0.25), ('co', 0.25), ('microwave', 0.24), ('formed', 0.24), ('pipe', 0.23), ('drink', 0.23), ('chemistry', 0.23), ('energy', 0.22), ('reflect', 0.22), ('dynamic', 0.22), ('leak', 0.22), ('is', 0.22), ('lens', 0.21), ('frost', 0.21), ('lenses', 0.21), ('produced', 0.21), ('induced', 0.2), ('arise', 0.2), ('plate', 0.2), ('equations', 0.19), ('affect', 0.19), ('tired', 0.19), ('mirrors', 0.18), ('thickness', 0.18), ('bending', 0.18), ('cabinet', 0.17), ('apart', 0.17), ('##thermal', 0.17), ('gas', 0.17), ('equation', 0.17), ('relationship', 0.17), ('composition', 0.17), ('engineering', 0.17), ('block', 0.16), ('breaks', 0.16), ('when', 0.16), ('definition', 0.16), ('collapsed', 0.16), ('generation', 0.16), (',', 0.16), ('philips', 0.16), ('later', 0.15), ('wood', 0.15), ('neighbouring', 0.15), ('structural', 0.14), ('regulate', 0.14), ('neighbors', 0.13), ('lighting', 0.13), ('happens', 0.13), ('more', 0.13), ('property', 0.13), ('cooling', 0.12), ('shattering', 0.12), ('melting', 0.12), ('how', 0.11), ('cloud', 0.11), ('barriers', 0.11), 
('lam', 0.11), ('conditions', 0.11), ('rule', 0.1), ('insulation', 0.1), ('bathroom', 0.09), ('convection', 0.09), ('cavity', 0.09), ('source', 0.08), ('properties', 0.08), ('bend', 0.08), ('bottles', 0.08), ('ceramics', 0.07), ('temper', 0.07), ('tense', 0.07), ('keller', 0.07), ('breakdown', 0.07), ('concrete', 0.07), ('simon', 0.07), ('solids', 0.06), ('windshield', 0.05), ('eye', 0.05), ('sunlight', 0.05), ('brittle', 0.03), ('caused', 0.03), ('suns', 0.03), ('floor', 0.02), ('components', 0.02), ('photo', 0.02), ('change', 0.02), ('sun', 0.01), ('crystals', 0.01), ('problem', 0.01), ('##proof', 0.01), ('parameters', 0.01), ('gases', 0.0), ('prism', 0.0), ('doing', 0.0), ('lattice', 0.0), ('ground', 0.0)]
```
- *Note 1: This specific passage was used as an example for [ease of comparison](https://github.com/naver/splade/blob/main/inference_splade.ipynb)*
</details>
## 4. How does it translate into Empirical metrics?
Our models are token sparse and yet effective, which translates to faster retrieval (user experience) and a smaller index size ($). The respective metrics, mean retrieval time on the standard MS-MARCO small dev set and scaled total FLOPS loss, are shown below.
This is why Google's SparseEmbed is interesting: it also achieves SPLADE-quality retrieval effectiveness with much lower FLOPs. Compared to ColBERT, SPLADE and SparseEmbed match query and
document terms with linear complexity, whereas ColBERT's late interaction scores all query-document term pairs at quadratic complexity. The challenge with SparseEmbed is that it uses a hyperparameter called **top-k to restrict the number of tokens used to learn contextual dense representations**, say 64 and 256 tokens for query and passage encoding respectively.
But it is unclear how well these hyperparameters transfer to other domains or languages (where the notion of tokens changes a lot, as in our mother tongue Tamil, which is agglutinative in nature).
<img src="./Metrics.png" width=1000/>
<details>
**Note: Why Anserini and not PISA?** *Anserini is a production-ready Lucene-based library. Common industry search deployments use Solr or Elasticsearch, which are Lucene-based, hence the performance is comparable. PISA latency is irrelevant for industry as it is a research-only system.*
The full [Anserini evaluation log will be updated soon]() with encoding, indexing and querying details.
- **BEIR ZST OOD performance**: will be added at the end of the page.
**Our model is different in a few more aspects**
- **CoCondenser Weights**: Unlike the best official SPLADE++ or SparseEmbed, we do NOT initialise weights from Luyu/co-condenser* models, yet we achieve CoCondenser-SPLADE-level performance. More on this later.
- **Same-size models:** Official SPLADE++, SparseEmbed and ours all finetune a base model of the same size as `bert-base-uncased`.
</details>
## 5. Roadmap and future directions for Industry Suitability.
- **Improve efficiency**: This is a bottomless pit; we will continue to improve serving and retrieval efficiency.
- **Custom/Domain Finetuning**: OOD zero-shot performance of SPLADE models is great but unimportant in an industry setting, as we need the ability to finetune on custom datasets or domains. Finetuning SPLADE on a new dataset is not cheap and needs labelling of queries and passages.
So we will continue to explore how to make finetuning our recipe on custom datasets economical, without expensive labelling.
- **Multilingual SPLADE**: The training cost of SPLADE (i.e. GPU budget) is directly proportional to the vocabulary size of the base model, so multilingual SPLADE using either mBERT or XLM-R can be expensive: their vocabularies are
120K and 250K respectively, as opposed to the 30K of bert-base-uncased, so the MLM head alone grows roughly 4x to 8x. We will continue to research how best we can extend our recipe to the multilingual world.
## 6. Usage
To enable a lightweight inference solution with **no heavy Torch dependency**, we will also release a library - **SPLADERunner**.
Of course, if that doesn't matter, you can always use these models with the Hugging Face transformers library.
<h1 id="htu">How to use? </h1>
## 6a. With Popular VectorDBs
| VectorDB | Colab Link |
|----------|------------|
| Pinecone | [](https://colab.research.google.com/drive/1fB6LheD9wYG0G-nBHiz0z2juvljrsBum?usp=sharing) |
| Qdrant | TBD (see the minimal sketch below) |
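Until the Qdrant notebook is ready, here is a minimal sketch of indexing and querying SPLADE vectors with Qdrant. This assumes the sparse-vector API introduced in `qdrant-client` 1.7; the collection name, the sparse-vector field name `splade`, and the toy indices/values (non-zero token ids and their weights, as produced by the snippets below) are illustrative.
```python
# pip install qdrant-client
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")  # or QdrantClient(url="http://localhost:6333")

# Collection with a single named sparse-vector field.
client.create_collection(
    collection_name="splade-demo",
    vectors_config={},
    sparse_vectors_config={"splade": models.SparseVectorParams()},
)

# indices = non-zero vocab dimensions, values = their SPLADE weights.
client.upsert(
    collection_name="splade-demo",
    points=[
        models.PointStruct(
            id=1,
            vector={"splade": models.SparseVector(indices=[2156, 2928], values=[2.59, 2.1])},
            payload={"text": "The Manhattan Project ..."},
        )
    ],
)

hits = client.search(
    collection_name="splade-demo",
    query_vector=models.NamedSparseVector(
        name="splade",
        vector=models.SparseVector(indices=[2156], values=[1.2]),
    ),
    limit=3,
)
print(hits)
```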
## 6b. With SPLADERunner Library
[SPLADERunner Library](https://github.com/PrithivirajDamodaran/SPLADERunner)
```python
# One-time install (shell): pip install spladerunner

# One-time init
from spladerunner import Expander

# Default model is the document expander.
expander = Expander()

# Sample document expansion
sparse_rep = expander.expand(
["The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science."])
```
## 6c. With Sentence Transformers
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("prithivida/Splade_PP_en_v2")
# Run inference
sentence = [
"The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science."
]
embeddings = model.encode(sentence)
print(embeddings.shape)
# [1, 30522]
decoded_sentence = model.decode(embeddings[0])
print(f"Number of actual dimensions: {len(decoded_sentence)}")
decoded_sentence_rounded = [(token, round(score, 2)) for token, score in decoded_sentence]
print("SPLADE BOW rep:\n", decoded_sentence_rounded)
# Number of actual dimensions: 103
# SPLADE BOW rep:
# [('manhattan', 2.59), ('project', 2.1), ('atomic', 1.65), ('legacy', 1.62), ('bomb', 1.5), ('peaceful', 1.47), ('end', 1.42), ('helped', 1.37), ('wwii', 1.36), ('energy', 1.36), ('war', 1.29), ('1942', 1.29), ('bring', 1.21), ('impact', 1.14),
# ('help', 1.09), ('bombs', 1.05), ('ny', 0.93), ('scientist', 0.91), ('nuclear', 0.89), ('history', 0.87), ('projects', 0.87), ('mission', 0.83), ('stop', 0.77), ('wars', 0.76), ('peace', 0.76), ('ii', 0.76), ('affect', 0.76), ('power', 0.73),
# ('science', 0.72), ('bombing', 0.72), ('atom', 0.72), ('use', 0.7), ('did', 0.69), ('brought', 0.67), ('still', 0.66), ('purpose', 0.65), ('was', 0.65), ('effect', 0.59), ('scientists', 0.59), ('uses', 0.57), ('because', 0.53), ('historical', 0.48),
# ('experiment', 0.47), ('scientific', 0.47), ('safe', 0.46), ('w', 0.45), ('message', 0.44), ('##w', 0.42), ('ended', 0.41), ('hudson', 0.39), ('roosevelt', 0.38), ('were', 0.36), ('##nik', 0.35), ('continue', 0.34), ('hiroshima', 0.33), ('important', 0.33),
# ('benefit', 0.32), ('destruction', 0.31), ('used', 0.3), ('nazi', 0.3), ('destroyed', 0.29), ('story', 0.29), ('assisted', 0.27), ('close', 0.27), ('influenced', 0.25), ('world', 0.25), ('invented', 0.24), ('contribution', 0.24), ('military', 0.24), ('conflict', 0.22),
# ('1939', 0.22), ('success', 0.22), ('1940s', 0.21), ('nasa', 0.2), ('harry', 0.2), ('revolution', 0.2), ('today', 0.18), ('rescue', 0.17), ('radiation', 0.16), ('destiny', 0.16), ('last', 0.15), ('allies', 0.14), ('the', 0.14), ('created', 0.13), ('hess', 0.13), ('weapon', 0.13),
# ('started', 0.11), ('us', 0.1), ('secret', 0.1), ('campaign', 0.09), ('2', 0.08), ('cause', 0.08), ('and', 0.07), ('propaganda', 0.06), ('noah', 0.05), ('theory', 0.05), ('significance', 0.02), ('berlin', 0.01), ('fuel', 0.01), ('columbia', 0.01), ('strategy', 0.01), ('usage', 0.01), ('symbol', 0.0)]
```
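To rank documents against a query with the same model, you can use the encoder's built-in similarity (a sketch; with SPLADE vectors this is an unnormalized dot product, and higher means more relevant):
```python
from sentence_transformers import SparseEncoder

model = SparseEncoder("prithivida/Splade_PP_en_v2")

query = ["What helped end World War II?"]
docs = [
    "The Manhattan Project and its atomic bomb helped bring an end to World War II.",
    "The Eiffel Tower was completed in 1889 for the World's Fair in Paris.",
]

query_emb = model.encode(query)
doc_embs = model.encode(docs)

# Dot-product similarity between sparse vectors; the first document should score higher.
scores = model.similarity(query_emb, doc_embs)
print(scores)  # tensor of shape [1, 2]
```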
## 6d. With HuggingFace
**NOTEBOOK user? Login first**
```
!huggingface-cli login
```
**Integrating in your code?**
[How to use HF tokens in code](https://huggingface.co/docs/hub/en/security-tokens)
Make these changes
```python
tokenizer = AutoTokenizer.from_pretrained('prithivida/Splade_PP_en_v2', token=<Your token>)
model = AutoModelForMaskedLM.from_pretrained('prithivida/Splade_PP_en_v2', token=<Your token>)
```
**Full code**
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained('prithivida/Splade_PP_en_v2')
reverse_voc = {v: k for k, v in tokenizer.vocab.items()}
model = AutoModelForMaskedLM.from_pretrained('prithivida/Splade_PP_en_v2')
model.to(device)
sentence = """The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science."""
inputs = tokenizer(sentence, return_tensors='pt')
inputs = {key: val.to(device) for key, val in inputs.items()}
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
outputs = model(**inputs)
logits = outputs.logits
relu_log = torch.log(1 + torch.relu(logits))
weighted_log = relu_log * attention_mask.unsqueeze(-1)
max_val, _ = torch.max(weighted_log, dim=1)
vector = max_val.squeeze()
cols = vector.nonzero().squeeze().cpu().tolist()
print("number of actual dimensions: ", len(cols))
weights = vector[cols].cpu().tolist()
d = {k: v for k, v in zip(cols, weights)}
sorted_d = {k: v for k, v in sorted(d.items(), key=lambda item: item[1], reverse=True)}
bow_rep = []
for k, v in sorted_d.items():
bow_rep.append((reverse_voc[k], round(v,2)))
print("SPLADE BOW rep:\n", bow_rep)
```
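For reference, the pooling in the snippet above is the standard SPLADE formulation: each vocabulary dimension $j$ receives the weight

$$ w_j = \max_{i} \log\left(1 + \mathrm{ReLU}(\mathrm{logit}_{i,j})\right) $$

where $i$ ranges over the attention-masked input positions, so the representation stays non-negative and sparse.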
## BEIR Zeroshot OOD performance:
<img src="./splade_v2.png" width=100%/>
## Training details:
T.B.D
## Acknowledgements
- Thanks to Nils Reimers for all the inputs.
- Thanks to authors of the Anserini library.
## Limitations and bias
All limitations and biases of the BERT model apply to this finetuning effort.
## Citation
Please cite if you use our models or libraries. Citation info below.
```
Damodaran, P. (2024). Splade_PP_en_v2: Independent Implementation of SPLADE++ Model (`a.k.a splade-cocondenser* and family`) for the Industry setting. (Version 2.0.0) [Computer software].
```
|
HenryHYH/wine_v10_other_model
|
HenryHYH
| 2025-09-23T06:19:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T06:19:27Z |
---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HenryHYH
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-1.7b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pandoradox/qwen2.5-1.5b-instruct_stressstrain_350
|
pandoradox
| 2025-09-23T06:16:35Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"text-generation",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"grpo",
"lora",
"transformers",
"trl",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T03:46:09Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: peft
tags:
- base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct
- grpo
- lora
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
pandoradox/qwen2.5-1.5b-instruct_oscillator1_300
|
pandoradox
| 2025-09-23T06:16:16Z | 12 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"text-generation",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"grpo",
"lora",
"transformers",
"trl",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T03:43:55Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: peft
tags:
- base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct
- grpo
- lora
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
superagent-ai/superagent-lm-20b
|
superagent-ai
| 2025-09-23T06:16:14Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"dataset:superagent-ai/superagent-lm",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T19:35:10Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
datasets:
- superagent-ai/superagent-lm
---
# superagent-lm-20b
A 20B parameter GPT-OSS model that has been instruction-tuned by Superagent using Unsloth's fast finetuning stack. The repository ships both the Transformer configuration and an 8-bit GGUF export for efficient CPU/GPU inference.
- **Developed by:** superagent-ai
- **License:** apache-2.0
- **Finetuned from:** `unsloth/gpt-oss-20b-unsloth-bnb-4bit`
This model inherits GPT-OSS' long-context MoE architecture (131k context, sliding attention) and keeps the chat-style template shown below.
## Repository Contents
- `config.json` – Transformer configuration compatible with 🤗 Transformers.
- `superagent_lm_finetue.Q8_0.gguf` – Q8_0 quantized weights for llama.cpp / llama-cpp-python.
- `template` – Chat template used when formatting conversations.
- `params` – Recommended default generation parameters.
## Quickstart
### 🤗 Transformers (PyTorch)
```bash
pip install --upgrade transformers accelerate bitsandbytes sentencepiece
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "superagent-ai/superagent-lm-20b"
# trust_remote_code is required because GPT-OSS registers a custom architecture
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype="auto",
trust_remote_code=True,
)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "List three tips for deploying open-source LLMs."},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=256,
temperature=0.7,
top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### 🦙 llama.cpp / GGUF
```bash
pip install --upgrade huggingface-hub
huggingface-cli download superagent-ai/superagent-lm-20b-gguf \
superagent_lm_finetue.Q8_0.gguf \
--local-dir .
# Build llama.cpp as usual
cmake -S . -B build
cmake --build build -j
# Run inference with the quantized GGUF
./build/bin/main \
-m ./superagent_lm_finetue.Q8_0.gguf \
-n 256 \
--temp 0.7 \
--top-p 0.9 \
--prompt "You are a helpful AI assistant.\n\nUser: Explain the advantages of GGUF quantization.\nAssistant:"
```
### 🐍 llama-cpp-python
```bash
pip install --upgrade llama-cpp-python huggingface-hub
```
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
model_path = hf_hub_download(
repo_id="superagent-ai/superagent-lm-20b-gguf",
filename="superagent_lm_finetue.Q8_0.gguf",
)
llm = Llama(
model_path=model_path,
n_ctx=32768,
n_gpu_layers=-1, # set to 0 for CPU-only
)
prompt = (
"You are a helpful AI assistant.\n\n"
"User: Give me two creative taglines for an open-source LLM platform.\n"
"Assistant:"
)
output = llm(
prompt,
max_tokens=256,
temperature=0.7,
top_p=0.9,
stop=["<|endoftext|>", "<|return|>"]
)
print(output["choices"][0]["text"].strip())
```
## Chat Template
When building prompts manually, follow the template stored in `template` (ChatML-style with optional reasoning tags). With 🤗 Transformers the template is applied automatically via `tokenizer.apply_chat_template`. For llama.cpp include the stop sequences shown in `params`.
## Evaluation
The model has not yet been benchmarked post-finetune. We recommend validating on your downstream task before deploying.
## Responsible Use
This model can hallucinate facts, reproduce biases from its training data, and should not be used for high-stakes decisions without human oversight. Always review the output in production environments.
## License
Released under the Apache 2.0 license. Downstream users must comply with the base model license and any applicable data usage restrictions.
|
rayonlabs/benchmark-2f9f3371-6ee9-4251-a114-b4386d462056-tourn_c78d225c003e6293_20250920-5GU4Xkd3
|
rayonlabs
| 2025-09-23T06:15:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/gpt-oss-20b",
"base_model:adapter:openai/gpt-oss-20b",
"region:us"
] | null | 2025-09-23T06:15:53Z |
---
base_model: openai/gpt-oss-20b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
husjfry/blockassist-bc-climbing_pouncing_dragonfly_1758607964
|
husjfry
| 2025-09-23T06:15:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"climbing pouncing dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T06:13:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- climbing pouncing dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
inclusionAI/Ring-flash-2.0
|
inclusionAI
| 2025-09-23T06:14:50Z | 114 | 60 |
transformers
|
[
"transformers",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"base_model:inclusionAI/Ling-flash-base-2.0",
"base_model:finetune:inclusionAI/Ling-flash-base-2.0",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-09-19T07:11:36Z |
---
license: mit
base_model:
- inclusionAI/Ling-flash-base-2.0
pipeline_tag: text-generation
library_name: transformers
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
<p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
## Introduction
Today, we are officially open-sourcing Ring-flash-2.0.
This is a __high-performance thinking model, deeply optimized__ based on Ling-flash-2.0-base. Like Ling-flash-2.0, Ring-flash-2.0 has a total of 100B parameters, with only 6.1B activated per inference. Our independently developed __icepop algorithm__ has successfully addressed the challenge of training instability in reinforcement learning (RL) for MoE LLMs after cold-start Long-CoT SFT, enabling the model’s complex reasoning capabilities to continuously improve throughout extended RL training cycles.
Ring-flash-2.0 demonstrates significant breakthroughs across multiple challenging benchmarks, including __math competitions__, __code generation__, and __logical reasoning__. Its performance not only surpasses that of SOTA dense models under 40B parameters but also rivals larger open-weight MoE models and closed-source high-performance thinking model APIs.
### Leading-Level Performance in Complex Reasoning
We selected __representative open-source thinking models__ and __closed-source APIs__ for comparison, including GPT-OSS-120B(medium), Qwen3-32B-Thinking, Seed-OSS-36B-Instruct, and Gemini-2.5-Flash.
The benchmarking results demonstrate that Ring-flash-2.0 exhibits leading performance across multiple challenging general reasoning tasks, including:
- __Math competitions__ (AIME 25, Omni-MATH),
- __Code generation__ (LiveCodeBench, CodeForce-Elo),
- __Logical reasoning__ (ARC-Prize).
It also shows strong competitiveness in specialized domains such as:
- __Scientific and medical reasoning__ (GPQA-Diamond, HealthBench).
More surprisingly, although Ring-flash-2.0 is primarily designed for complex reasoning, it outperforms all other compared models in __creative writing__ (Creative Writing v3) and matches the creative capability of its "twin brother"—the non-thinking model Ling-flash-2.0.
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*jLbeS74JqB8AAAAAWmAAAAgAemJ7AQ/original"/>
<p>
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*_AG2T62ZWNsAAAAAWKAAAAgAemJ7AQ/original"/>
<p>
### Efficient Architecture, High-Speed Inference
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*awCaS4yTD9UAAAAAUdAAAAgAemJ7AQ/original"/>
<p>
Building on the highly efficient MoE architecture of the Ling 2.0 series, and through structural optimizations such as a __1/32 expert activation ratio__ and __MTP layers__, Ring-flash-2.0 activates only 6.1B (4.8B non-embedding) parameters while delivering performance comparable to a ∼40B dense model.
Thanks to its low activation and high sparsity design, Ring-flash-2.0 achieves a high generation speed of __200+ tokens/sec__ when deployed on just four H20 GPUs, significantly reducing inference costs for thinking models in high-concurrency scenarios.
## IcePop: Cooling Down Training-Inference Gaps in RL for MoE Models
During RL training for MoE models, the precision discrepancy between the training and inference engines is more pronounced than for dense models. This gap widens progressively as sequence length and training steps increase, particularly during long-sequence generation and extended training cycles. A more critical issue is that the original GRPO algorithm begins to break down within a limited number of training steps. Specifically, the probability discrepancy for the same token between the training and inference phases gradually increases; when this relative difference exceeds 5%, training effectively fails, posing a significant challenge for long-horizon reinforcement learning over lengthy sequences.
To address this issue, we introduced a key solution: __distribution calibration via masked bidirectional truncation, which effectively narrows the gap between training and inference__.
- Bidirectional Truncation: We truncate not only tokens where the training probability is significantly higher than the inference probability but also the reverse scenario where the training probability is much lower.
- Masking: Tokens with excessively large discrepancies are excluded from gradient computation.
For a detailed introduction to the algorithm, please refer to our technical blog: https://ringtech.notion.site/icepop
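As a rough illustration of the two mechanisms above, here is a minimal sketch based on the description in this card; the function name, thresholds, and loss wiring are ours, not the official icepop implementation:
```python
import torch

def icepop_token_mask(train_logprobs: torch.Tensor,
                      infer_logprobs: torch.Tensor,
                      low: float = 0.5, high: float = 2.0) -> torch.Tensor:
    """Keep-mask over tokens, from the training/inference probability ratio."""
    ratio = torch.exp(train_logprobs - infer_logprobs)
    # Bidirectional truncation: drop tokens whose training probability is
    # much higher OR much lower than the inference probability.
    return (ratio >= low) & (ratio <= high)

# Masking: excluded tokens contribute no gradient to the RL objective, e.g.
# loss = (per_token_loss * keep_mask).sum() / keep_mask.sum().clamp(min=1)
```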
## SFT + RLVR + RLHF Multi-Stage Training
To comprehensively enhance the capabilities of Ring-flash-2.0, we designed a two-stage RL pipeline. First, lightweight Long-CoT SFT equips the Ling-flash-2.0-base model with diverse thinking patterns. This is followed by RL training with Verifiable Rewards (RLVR) to continually stimulate the model's reasoning potential. Finally, an RLHF phase is incorporated to improve the model's general abilities.
During RL training, we compared directly combining RLVR and RLHF into joint training against the two-stage RL pipeline we ultimately adopted. Both approaches showed relatively similar effectiveness in our experiments. However, due to the differing difficulty levels of RLVR and RLHF tasks, with RLHF involving relatively shorter model rollouts, joint training resulted in more long-tail generations. From an engineering-efficiency perspective, we ultimately adopted the two-stage RL approach.
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4Q_4SbSv73YAAAAAQ6AAAAgAemJ7AQ/original"/>
<p>
## Quickstart
### 🤗 Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/Ring-flash-2.0"
model = AutoModelForCausalLM.from_pretrained(
model_name,
dtype="auto",
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### 🤖 ModelScope
If you're in mainland China, we strongly recommend you to use our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.
## Deployment
### vLLM
vLLM supports offline batched inference or launching an OpenAI-Compatible API Service for online inference.
#### Environment Preparation
Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```
#### Offline Inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-flash-2.0")
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)
llm = LLM(model="inclusionAI/Ring-flash-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
print(outputs[0].outputs[0].text)
```
#### Online Inference:
```bash
vllm serve inclusionAI/Ring-flash-2.0 \
--tensor-parallel-size 2 \
--pipeline-parallel-size 1 \
--use-v2-block-manager \
--gpu-memory-utilization 0.90
```
To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
### SGLang
#### Environment Preparation
We will submit our model to the official SGLang release later; for now, prepare the environment with the following steps:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can use docker image as well:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then you should apply patch to sglang installation:
```shell
# patch command is needed, run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```
#### Run Inference
SGLang now supports both BF16 and FP8 models; which one runs depends on the dtype of the model in ${MODEL_PATH}. Both share the same command:
- Start server:
```shell
python -m sglang.launch_server \
--model-path $MODEL_PATH \
--host 0.0.0.0 --port $PORT \
--trust-remote-code \
--attention-backend fa3
```
MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN`
to the start command.
- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```
More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html)
### Finetuning
We recommend you to use [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ring](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md).
## License
This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ring-V2/blob/master/LICENSE).
|
melsiddieg/qwen3-4b-arud-full-880-v1
|
melsiddieg
| 2025-09-23T06:13:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:11:14Z |
---
base_model: unsloth/Qwen3-4B-Instruct-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** melsiddieg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Instruct-2507
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mavewield/blockassist
|
mavewield
| 2025-09-23T06:11:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive agile yak",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-19T16:41:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive agile yak
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zai-org/GLM-4.5V
|
zai-org
| 2025-09-23T06:10:31Z | 34,795 | 652 |
transformers
|
[
"transformers",
"safetensors",
"glm4v_moe",
"image-text-to-text",
"conversational",
"zh",
"en",
"arxiv:2507.01006",
"base_model:zai-org/GLM-4.5-Air-Base",
"base_model:finetune:zai-org/GLM-4.5-Air-Base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-10T13:55:30Z |
---
base_model:
- zai-org/GLM-4.5-Air-Base
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
---
# GLM-4.5V
<div align="center">
<img src=https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg width="40%"/>
</div>
This model is part of the GLM-V family of models, introduced in the paper [GLM-4.1V-Thinking and GLM-4.5V: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning](https://huggingface.co/papers/2507.01006).
- **Paper**: [https://huggingface.co/papers/2507.01006](https://huggingface.co/papers/2507.01006)
- **GitHub Repository**: [https://github.com/zai-org/GLM-V/](https://github.com/zai-org/GLM-V/)
- **Online Demo**: [https://chat.z.ai/](https://chat.z.ai/)
- **API Access**: [ZhipuAI Open Platform](https://docs.z.ai/guides/vlm/glm-4.5v)
- **Desktop Assistant App**: [https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App](https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App)
- **Discord Community**: [https://discord.com/invite/8cnQKdAprg](https://discord.com/invite/8cnQKdAprg)
## Introduction & Model Overview
Vision-language models (VLMs) have become a key cornerstone of intelligent systems. As real-world AI tasks grow increasingly complex, VLMs urgently need to enhance reasoning capabilities beyond basic multimodal perception — improving accuracy, comprehensiveness, and intelligence — to enable complex problem solving, long-context understanding, and multimodal agents.
Through our open-source work, we aim to explore the technological frontier together with the community while empowering more developers to create exciting and innovative applications.
**This Hugging Face repository hosts the `GLM-4.5V` model, part of the `GLM-V` series.**
### GLM-4.5V
GLM-4.5V is based on ZhipuAI’s next-generation flagship text foundation model GLM-4.5-Air (106B parameters, 12B active). It continues the technical approach of GLM-4.1V-Thinking, achieving SOTA performance among models of the same scale on 42 public vision-language benchmarks. It covers common tasks such as image, video, and document understanding, as well as GUI agent operations.

Beyond benchmark performance, GLM-4.5V focuses on real-world usability. Through efficient hybrid training, it can handle diverse types of visual content, enabling full-spectrum vision reasoning, including:
- **Image reasoning** (scene understanding, complex multi-image analysis, spatial recognition)
- **Video understanding** (long video segmentation and event recognition)
- **GUI tasks** (screen reading, icon recognition, desktop operation assistance)
- **Complex chart & long document parsing** (research report analysis, information extraction)
- **Grounding** (precise visual element localization)
The model also introduces a **Thinking Mode** switch, allowing users to balance between quick responses and deep reasoning. This switch works the same as in the `GLM-4.5` language model.
### GLM-4.1V-9B
*Contextual information about GLM-4.1V-9B is provided for completeness, as it is part of the GLM-V series and foundational to GLM-4.5V's development.*
Built on the [GLM-4-9B-0414](https://github.com/zai-org/GLM-4) foundation model, the **GLM-4.1V-9B-Thinking** model introduces a reasoning paradigm and uses RLCS (Reinforcement Learning with Curriculum Sampling) to comprehensively enhance model capabilities. It achieves the strongest performance among 10B-level VLMs and matches or surpasses the much larger Qwen-2.5-VL-72B in 18 benchmark tasks.
We also open-sourced the base model **GLM-4.1V-9B-Base** to support researchers in exploring the limits of vision-language model capabilities.

Compared with the previous generation CogVLM2 and GLM-4V series, **GLM-4.1V-Thinking** brings:
1. The series’ first reasoning-focused model, excelling in multiple domains beyond mathematics.
2. **64k** context length support.
3. Support for **any aspect ratio** and up to **4k** image resolution.
4. A bilingual (Chinese/English) open-source version.
GLM-4.1V-9B-Thinking integrates the **Chain-of-Thought** reasoning mechanism, improving accuracy, richness, and interpretability. It leads on 23 out of 28 benchmark tasks at the 10B parameter scale, and outperforms Qwen-2.5-VL-72B on 18 tasks despite its smaller size.

## Project Updates
- 🔥 **News**: `2025/08/11`: We released **GLM-4.5V** with significant improvements across multiple benchmarks. We also open-sourced our handcrafted **desktop assistant app** for debugging. Once connected to GLM-4.5V, it can capture visual information from your PC screen via screenshots or screen recordings. Feel free to try it out or customize it into your own multimodal assistant. Click [here](https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App) to download the installer or [build from source](https://github.com/zai-org/GLM-V/blob/main/examples/vllm-chat-helper/README.md)!
- **News**: `2025/07/16`: We have open-sourced the **VLM Reward System** used to train GLM-4.1V-Thinking. View the [code repository](https://github.com/zai-org/GLM-V/tree/main/glmv_reward) and run locally: `python examples/reward_system_demo.py`.
- **News**: `2025/07/01`: We released **GLM-4.1V-9B-Thinking** and its [technical report](https://arxiv.org/abs/2507.01006).
## Model Implementation Code
* GLM-4.5V model algorithm: see the full implementation in [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4v_moe).
* GLM-4.1V-9B-Thinking model algorithm: see the full implementation in [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4v).
* Both models share identical multimodal preprocessing, but use different conversation templates — please distinguish carefully.
## Usage
### Environment Installation
For `SGLang` and `transformers`:
```bash
pip install -r https://raw.githubusercontent.com/zai-org/GLM-V/main/requirements.txt
```
For `vLLM`:
```bash
pip install -U vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
pip install transformers-v4.55.0-GLM-4.5V-preview
```
### Quick Start with Transformers
```python
from transformers import AutoProcessor, Glm4vMoeForConditionalGeneration
import torch
MODEL_PATH = "zai-org/GLM-4.5V"
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
},
{
"type": "text",
"text": "describe this image"
}
],
}
]
processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = Glm4vMoeForConditionalGeneration.from_pretrained(
pretrained_model_name_or_path=MODEL_PATH,
torch_dtype="auto",
device_map="auto",
)
inputs = processor.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_dict=True,
return_tensors="pt"
).to(model.device)
inputs.pop("token_type_ids", None)
generated_ids = model.generate(**inputs, max_new_tokens=8192)
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(output_text)
```
The special tokens `<|begin_of_box|>` and `<|end_of_box|>` in the response mark the answer’s bounding box in the image. The bounding box is given as four numbers, for example `[x1, y1, x2, y2]`, where `(x1, y1)` is the top-left corner and `(x2, y2)` is the bottom-right corner. The bracket style may vary ([], [[]], (), <>, etc.), but the meaning is the same: it encloses the coordinates of the box. These coordinates are relative values between 0 and 1000, normalized to the image size.
For more code information, please visit our [GitHub](https://github.com/zai-org/GLM-V/).
### Grounding Example
GLM-4.5V is equipped with precise grounding capabilities. Given a prompt that requests the location of a specific object, GLM-4.5V is able to reason step by step and identify the bounding boxes of the target object. The query prompt supports complex descriptions of the target object as well as specified output formats, for example:
> - Help me to locate <expr> in the image and give me its bounding boxes.
> - Please pinpoint the bounding box [[x1,y1,x2,y2], …] in the image as per the given description. <expr>
Here, `<expr>` is the description of the target object. The output bounding box is a quadruple $$[x_1,y_1,x_2,y_2]$$ composed of the coordinates of the top-left and bottom-right corners, where each value is normalized by the image width (for x) or height (for y) and scaled by 1000.
In the response, the special tokens `<|begin_of_box|>` and `<|end_of_box|>` are used to mark the image bounding box in the answer. The bracket style may vary ([], [[]], (), <>, etc.), but the meaning is the same: to enclose the coordinates of the box.
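Since the coordinates are normalized to the 0-1000 range, converting a predicted box back to pixel coordinates is straightforward (an illustrative helper, not part of the official tooling):
```python
def box_to_pixels(box, width, height):
    """Convert a [x1, y1, x2, y2] box normalized to 0-1000 into pixel coordinates."""
    x1, y1, x2, y2 = box
    return (
        x1 / 1000 * width,
        y1 / 1000 * height,
        x2 / 1000 * width,
        y2 / 1000 * height,
    )

# Example: a box predicted on a 1920x1080 image.
print(box_to_pixels([103, 558, 996, 1000], width=1920, height=1080))
```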
### GUI Agent Example
- `examples/gui-agent`: Demonstrates prompt construction and output handling for GUI Agents, including strategies for mobile, PC, and web. Prompt templates differ between GLM-4.1V and GLM-4.5V.
### Quick Demo Application
- `examples/vlm-helper`: A desktop assistant for GLM multimodal models (mainly GLM-4.5V, compatible with GLM-4.1V), supporting text, images, videos, PDFs, PPTs, and more. Connects to the GLM multimodal API for intelligent services across scenarios. Download the [installer](https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App) or [build from source](https://github.com/zai-org/GLM-V/blob/main/examples/vlm-helper/README.md).
### vLLM
```bash
vllm serve zai-org/GLM-4.5V \
--tensor-parallel-size 4 \
--tool-call-parser glm45 \
--reasoning-parser glm45 \
--enable-auto-tool-choice \
--served-model-name glm-4.5v \
--allowed-local-media-path / \
--media-io-kwargs '{"video": {"num_frames": -1}}'
```
### SGLang
```shell
python3 -m sglang.launch_server --model-path zai-org/GLM-4.5V \
--tp-size 4 \
--tool-call-parser glm45 \
--reasoning-parser glm45 \
--served-model-name glm-4.5v \
--port 8000 \
--host 0.0.0.0
```
Notes:
- We recommend using the `FA3` attention backend in SGLang for higher inference performance and lower memory usage:
`--attention-backend fa3 --mm-attention-backend fa3 --enable-torch-compile`
Without `FA3`, large video inference may cause out-of-memory (OOM) errors.
We also recommend increasing `SGLANG_VLM_CACHE_SIZE_MB` (e.g., `1024`) to provide sufficient cache space for video understanding.
- When using `vLLM` and `SGLang`, thinking mode is enabled by default. To disable the thinking switch, add:
`extra_body={"chat_template_kwargs": {"enable_thinking": False}}`
## Model Fine-tuning
[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) already supports fine-tuning for the GLM-4.5V & GLM-4.1V-9B-Thinking models. Below is an example of dataset construction using two images. You should organize your dataset into `finetune.json` in the following format; this example is for fine-tuning GLM-4.1V-9B.
```json
[
{
"messages": [
{
"content": "<image>Who are they?",
"role": "user"
},
{
"content": "<think>
User asked me to observe the image and find the answer. I know they are Kane and Goretzka from Bayern Munich.</think>
<answer>They're Kane and Goretzka from Bayern Munich.</answer>",
"role": "assistant"
},
{
"content": "<image>What are they doing?",
"role": "user"
},
{
"content": "<think>
I need to observe what these people are doing. Oh, they are celebrating on the soccer field.</think>
<answer>They are celebrating on the soccer field.</answer>",
"role": "assistant"
}
],
"images": [
"mllm_demo_data/1.jpg",
"mllm_demo_data/2.jpg"
]
}
]
```
1. The content inside `<think> ... </think>` will **not** be stored as conversation history or in fine-tuning data.
2. The `<image>` tag will be replaced with the corresponding image information.
3. For the GLM-4.5V model, the `<answer>` and `</answer>` tags should be removed.
Then, you can fine-tune following the standard LLaMA-Factory procedure.
## Fixed and Remaining Issues
Since the release of GLM-4.1V, we have addressed many community-reported issues. In GLM-4.5V, common issues such as repetitive thinking and incorrect output formatting are alleviated. However, some limitations remain:
1. In frontend code reproduction cases, the model may output raw HTML without proper markdown wrapping. There may also be character escaping issues, potentially causing rendering errors. We provide a [patch](https://github.com/zai-org/GLM-V/blob/main/inference/html_detector.py) to fix most cases.
2. Pure text Q&A capabilities still have room for improvement, as this release focused primarily on multimodal scenarios.
3. In some cases, the model may overthink or repeat content, especially for complex prompts.
4. Occasionally, the model may restate the answer at the end.
5. There are some perception issues, with room for improvement in tasks such as counting and identifying specific individuals.
We welcome feedback in the issue section and will address problems as quickly as possible.
## Citation
If you use this model, please cite the following paper:
```bibtex
@misc{vteam2025glm45vglm41vthinkingversatilemultimodal,
title={GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning},
author={V Team and Wenyi Hong and Wenmeng Yu and Xiaotao Gu and Guo Wang and Guobing Gan and Haomiao Tang and Jiale Cheng and Ji Qi and Junhui Ji and Lihang Pan and Shuaiqi Duan and Weihan Wang and Yan Wang and Yean Cheng and Zehai He and Zhe Su and Zhen Yang and Ziyang Pan and Aohan Zeng and Baoxu Wang and Bin Chen and Boyan Shi and Changyu Pang and Chenhui Zhang and Da Yin and Fan Yang and Guoqing Chen and Jiazheng Xu and Jiale Zhu and Jiali Chen and Jing Chen and Jinhao Chen and Jinghao Lin and Jinjiang Wang and Junjie Chen and Leqi Lei and Letian Gong and Leyi Pan and Mingdao Liu and Mingde Xu and Mingzhi Zhang and Qinkai Zheng and Sheng Yang and Shi Zhong and Shiyu Huang and Shuyuan Zhao and Siyan Xue and Shangqin Tu and Shengbiao Meng and Tianshu Zhang and Tianwei Luo and Tianxiang Hao and Tianyu Tong and Wenkai Li and Wei Jia and Xiao Liu and Xiaohan Zhang and Xin Lyu and Xinyue Fan and Xuancheng Huang and Yanling Wang and Yadong Xue and Yanfeng Wang and Yanzi Wang and Yifan An and Yifan Du and Yiming Shi and Yiheng Huang and Yilin Niu and Yuan Wang and Yuanchang Yue and Yuchen Li and Yutao Zhang and Yuting Wang and Yu Wang and Yuxuan Zhang and Zhao Xue and Zhenyu Hou and Zhengxiao Du and Zihan Wang and Peng Zhang and Debing Liu and Bin Xu and Juanzi Li and Minlie Huang and Yuxiao Dong and Jie Tang},
year={2025},
eprint={2507.01006},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2507.01006},
}
```
|
shubhamprshr/Llama-3.2-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
|
shubhamprshr
| 2025-09-23T06:06:58Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:blocksworld-dataset",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T02:07:51Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Llama-3.2-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Llama-3.2-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Llama-3.2-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/auto/runs/li3aqg9k)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.1
- Pytorch: 2.7.0
- Datasets: 4.1.1
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Rezaq234r3/banglaLLM_1.3b
|
Rezaq234r3
| 2025-09-23T06:06:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T06:06:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lien-an/gpt2_finetuned_medical
|
Lien-an
| 2025-09-23T06:04:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T06:04:37Z |
---
license: apache-2.0
---
|
WinnerXu777/Finetuned_Qwen3_for_Disease_Diagnosis
|
WinnerXu777
| 2025-09-23T06:03:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-4B-Base",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-4B-Base",
"region:us"
] |
text-generation
| 2025-09-23T04:57:20Z |
---
base_model: Qwen/Qwen3-4B-Base
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen3-4B-Base
- lora
- transformers
---
# Model Card for Model ID
<!-- finetuned Qwen3 4B 4bit for Disease Diagnosis -->
Fine-tuned Qwen3-4B (4-bit) for disease diagnosis.
## Model Details
### Model Description
<!-- the model was trained with medical reports and the disease labels to predict the disease labels from given medical reports -->
The model was trained on medical reports paired with disease labels, so that it can predict disease labels for new medical reports.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758607265
|
poolkiltzn
| 2025-09-23T06:02:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T06:02:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yushangru/uuu_fine_tune_gpt2
|
yushangru
| 2025-09-23T06:00:48Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:32Z |
---
license: apache-2.0
---
|
JasonHsu0704/uuu_fine_tune_gpt2
|
JasonHsu0704
| 2025-09-23T05:57:50Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:40:37Z |
---
license: apache-2.0
---
|
yuanlinwen/llama2_uuu_news_qlora
|
yuanlinwen
| 2025-09-23T05:57:04Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:57:04Z |
---
license: apache-2.0
---
|
katachang/uuu_fine_tune_gpt2
|
katachang
| 2025-09-23T05:57:02Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:22:34Z |
---
license: apache-2.0
---
|
Eskender/products-ranker-preprod-bge-v9_64_corrected_data
|
Eskender
| 2025-09-23T05:56:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T05:56:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CHIHAO-LIN/uuu_fine_tune_gpt2
|
CHIHAO-LIN
| 2025-09-23T05:53:49Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:47:00Z |
---
license: apache-2.0
---
|
elec1204/uuu_fine_tune_gpt2
|
elec1204
| 2025-09-23T05:52:52Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:35:02Z |
---
license: apache-2.0
---
|
ccwendy/uuu_fine_tune_gpt2
|
ccwendy
| 2025-09-23T05:52:40Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:33:07Z |
---
license: apache-2.0
---
|
arindamSRM15/autotrain-efate-i71g2
|
arindamSRM15
| 2025-09-23T05:51:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T05:50:33Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 1.5455595254898071
- f1_macro: 0.3816326530612245
- f1_micro: 0.5
- f1_weighted: 0.3816326530612245
- precision_macro: 0.3190476190476191
- precision_micro: 0.5
- precision_weighted: 0.3190476190476191
- recall_macro: 0.5
- recall_micro: 0.5
- recall_weighted: 0.5
- accuracy: 0.5
|
chia-lung/tcp2023
|
chia-lung
| 2025-09-23T05:51:11Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:24:45Z |
---
license: apache-2.0
---
|
chengtaoyang/uuu_glora
|
chengtaoyang
| 2025-09-23T05:48:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:48:48Z |
---
license: apache-2.0
---
|
kb24ysh/uuu_fine_tune_taipower
|
kb24ysh
| 2025-09-23T05:47:52Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:47:15Z |
---
license: apache-2.0
---
|
Lien-an/uuu_fine_tune_taipower
|
Lien-an
| 2025-09-23T05:47:36Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:21:13Z |
---
license: apache-2.0
---
|
DavidLanz/uuu_fine_tune_gpt2
|
DavidLanz
| 2025-09-23T05:45:50Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"gpt",
"en",
"license:gpl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-17T01:53:39Z |
---
license: gpl
model_name: GPT2
model_type: GPT2
language: en
pipeline_tag: text-generation
tags:
- pytorch
- gpt
- gpt2
---
# Fine-tuning GPT-2 with an energy-plus-medical dataset
Fine-tuning a pre-trained language model for text generation.
A GPT-2 model pretrained on Chinese text with a causal language modeling head objective.
## Model description
Transfer learning from DavidLanz/uuu_fine_tune_taipower, further fine-tuned on a medical dataset with the GPT-2 architecture.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import GPT2LMHeadModel, BertTokenizer, TextGenerationPipeline, set_seed
>>> set_seed(42)
>>> model_path = "DavidLanz/uuu_fine_tune_gpt2"
>>> model = GPT2LMHeadModel.from_pretrained(model_path)
>>> tokenizer = BertTokenizer.from_pretrained(model_path)
>>> max_length = 200
>>> prompt = "歐洲能源政策"
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generated = text_generator(prompt, max_length=max_length, do_sample=True)
>>> print(text_generated[0]["generated_text"].replace(" ",""))
```
```python
>>> from transformers import GPT2LMHeadModel, BertTokenizer, TextGenerationPipeline, set_seed
>>> set_seed(42)
>>> model_path = "DavidLanz/uuu_fine_tune_gpt2"
>>> model = GPT2LMHeadModel.from_pretrained(model_path)
>>> tokenizer = BertTokenizer.from_pretrained(model_path)
>>> max_length = 200
>>> prompt = "蕁麻疹過敏"
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generated = text_generator(prompt, max_length=max_length, do_sample=True)
>>> print(text_generated[0]["generated_text"].replace(" ",""))
```
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922195515-epoch-4
|
vectorzhou
| 2025-09-23T05:44:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:28:51Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922195515-epoch-4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/6kinw4fn)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
cyburn/restore_photo_v4.1-lora
|
cyburn
| 2025-09-23T05:42:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"image-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:creativeml-openrail-m",
"region:us"
] |
image-to-image
| 2025-09-23T05:41:01Z |
---
tags:
- image-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-Kontext-dev
license: creativeml-openrail-m
inference:
parameters:
width: 800
height: 1200
---
# restore_photo_v4.1-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
No trigger words defined.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/cyburn/restore_photo_v4.1-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-Kontext-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cyburn/restore_photo_v4.1-lora', weight_name='restore_photo_v4.1_000002000.safetensors')
image = pipeline('a beautiful landscape').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
William718/uuu_fine_tune_taipoweruuu_fine_tune_taipower
|
William718
| 2025-09-23T05:41:00Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:35:01Z |
---
license: apache-2.0
---
|
JasonHsu0704/llama2_uuu_news_qlora
|
JasonHsu0704
| 2025-09-23T05:40:49Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:40:49Z |
---
license: apache-2.0
---
|
EllenLin/llama2_uuu_news_qlora
|
EllenLin
| 2025-09-23T05:40:38Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:40:38Z |
---
license: apache-2.0
---
|
finfinder/uuu_fine_tune_taipower
|
finfinder
| 2025-09-23T05:38:25Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:16:08Z |
---
license: apache-2.0
---
|
CHIHAO-LIN/uuu_fine_tune_taipower
|
CHIHAO-LIN
| 2025-09-23T05:38:05Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:35:22Z |
---
license: apache-2.0
---
|
rickrock-art/nsfwmasterflux
|
rickrock-art
| 2025-09-23T05:37:16Z | 646 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-07-25T14:27:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/Schermafbeelding 2025-07-23 om 22.40.14.png
text: Screenshot
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: NSFWMASTERFLUX
---
# NSFWMASTERFLUX
<Gallery />
## Model description
NSFWMASTERFLUX
## Trigger words
You should use `NSFWMASTERFLUX` to trigger the image generation.
## Download model
[Download](/rickrock-art/nsfwmasterflux/tree/main) them in the Files & versions tab.
|
sdagsadgd/blockassist
|
sdagsadgd
| 2025-09-23T05:36:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate squeaky salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T13:26:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate squeaky salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EllenLin/tcp2023
|
EllenLin
| 2025-09-23T05:35:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:35:17Z |
---
license: apache-2.0
---
|
21et/uuu_fine_tune_taipower
|
21et
| 2025-09-23T05:35:02Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:12Z |
---
license: apache-2.0
---
|
uujjdd/uuu_fine_tune_gpt2
|
uujjdd
| 2025-09-23T05:34:15Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:34:15Z |
---
license: apache-2.0
---
|
harry56183/uuu_fine_tune_taipower
|
harry56183
| 2025-09-23T05:34:12Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:15:55Z |
---
license: apache-2.0
---
|
keystats/whisper-swahili-finetuned
|
keystats
| 2025-09-23T05:33:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T04:39:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomal66/qwen2.5-1.5b-emotion-fpt-sft
|
tomal66
| 2025-09-23T05:33:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T05:32:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758605397
|
poolkiltzn
| 2025-09-23T05:31:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T05:30:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
junyoung-00/Phi-3.5-vision-instruct-ChartCap
|
junyoung-00
| 2025-09-23T05:24:34Z | 37 | 4 |
transformers
|
[
"transformers",
"safetensors",
"phi3_v",
"text-generation",
"chart-captioning",
"multimodal",
"vision-language-model",
"image-to-text",
"custom_code",
"arxiv:2508.03164",
"license:mit",
"autotrain_compatible",
"region:us"
] |
image-to-text
| 2025-07-31T14:39:29Z |
---
license: mit
pipeline_tag: image-to-text
library_name: transformers
tags:
- chart-captioning
- multimodal
- vision-language-model
---
# ChartCap: Mitigating Hallucination of Dense Chart Captioning
This repository contains the model presented in the paper [**ChartCap: Mitigating Hallucination of Dense Chart Captioning**](https://huggingface.co/papers/2508.03164).
**Project Page:** [https://junyoung-00.github.io/ChartCap/](https://junyoung-00.github.io/ChartCap/)\
**Code:** (WIP) [https://github.com/junyoung-00/ChartCap](https://github.com/junyoung-00/ChartCap)
## Model Description
`Phi-3.5-vision-instruct-ChartCap` is a ChartCap-fine-tuned version of [microsoft/Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct).
The model aims to generate high-quality, dense captions for charts, ensuring that the generated text accurately captures structural elements and key insights discernible from the charts, while mitigating the inclusion of extraneous or hallucinated information.
## Required Packages
```bash
flash_attn==2.5.8
numpy==1.24.4
Pillow==10.3.0
Requests==2.31.0
torch==2.3.0
torchvision==0.18.0
transformers==4.43.0
accelerate==0.30.0
```
## How to Use
```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image
import requests
import torch
model_id = "junyoung-00/Phi-3.5-vision-instruct-ChartCap"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
# Load an example chart image (URL or local path)
image_url = "https://your-server.com/example_chart.png"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
# Define the prompt for dense chart captioning
prompt = "Please provide a detailed caption for the chart."
messages = [
    {"role": "user", "content": f"<|image|>\n{prompt}"}
]
# Apply chat template and prepare inputs
# Build the prompt string via the chat template, then tokenize it together with the image
prompt_text = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# The image token handling for Phi3V can sometimes be specific; ensure correct placeholder handling if <|image|> is mapped.
inputs = processor(text=prompt_text, images=image, return_tensors="pt").to(model.device)
# Generate response
generated_ids = model.generate(**inputs, max_new_tokens=512)
# Decode and print the output
response = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response.strip())
```
## Citation
If you find this model or the associated research helpful, please cite:
```bibtex
@inproceedings{lim2025chartcap,
title = {ChartCap: Mitigating Hallucination of Dense Chart Captioning},
author = {Junyoung Lim and Jaewoo Ahn and Gunhee Kim},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
year = {2025}
}
```
|
aamijar/llm-streamline-Llama-2-4.7B-lora-r8-sst2-epochs2
|
aamijar
| 2025-09-23T05:22:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T05:22:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yuanlinwen/test
|
yuanlinwen
| 2025-09-23T05:21:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:21:32Z |
---
license: apache-2.0
---
|
yushangru/llama2_uuu_news_qlora
|
yushangru
| 2025-09-23T05:20:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:40Z |
---
license: apache-2.0
---
|
yushangru/tcp2023
|
yushangru
| 2025-09-23T05:20:10Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:10Z |
---
license: apache-2.0
---
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-4
|
vectorzhou
| 2025-09-23T05:18:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:07:44Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/09vdah42)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
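For intuition only, here is a toy sketch of the classic extragradient update that EGPO builds on; this is not the TRL training code, and the vector field is purely illustrative.
```python
import torch

def extragradient_step(x, field, lr=0.3):
    # Classic extragradient: probe with a half-step, then update the
    # original point using the field evaluated at the probe.
    x_half = x - lr * field(x)
    return x - lr * field(x_half)

# Rotational vector field: plain gradient steps orbit the equilibrium,
# while extragradient steps spiral into it.
field = lambda v: torch.stack([v[1], -v[0]])
x = torch.tensor([1.0, 1.0])
for _ in range(200):
    x = extragradient_step(x, field)
print(x)  # close to [0., 0.]
```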
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Lien-an/tcp2023
|
Lien-an
| 2025-09-23T05:18:20Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:18:20Z |
---
license: apache-2.0
---
|
tomal66/qwen2.5-1.5b-emotion-sft
|
tomal66
| 2025-09-23T05:17:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T05:17:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kb24ysh/tcp2023
|
kb24ysh
| 2025-09-23T05:17:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:17:31Z |
---
license: apache-2.0
---
|
tim3828/tcp2023
|
tim3828
| 2025-09-23T05:16:46Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:16:46Z |
---
license: apache-2.0
---
|
katachang/tcp2023
|
katachang
| 2025-09-23T05:16:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:16:06Z |
---
license: apache-2.0
---
|
f0857057/tcp2023
|
f0857057
| 2025-09-23T05:16:04Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:16:04Z |
---
license: apache-2.0
---
|
timwu520/tcp2023
|
timwu520
| 2025-09-23T05:15:44Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:15:44Z |
---
license: apache-2.0
---
|
samhitmantrala/smish_fin
|
samhitmantrala
| 2025-09-23T05:13:23Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T04:44:24Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smish_fin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smish_fin
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0658
- Accuracy: 0.9943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: fused AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 25
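A hedged reconstruction of these settings as 🤗 `TrainingArguments` (the original training script is not included in this card; `output_dir` is illustrative):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; everything else stays at defaults.
args = TrainingArguments(
    output_dir="smish_fin",            # illustrative
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",         # fused AdamW; betas/epsilon at their defaults
    lr_scheduler_type="linear",
    num_train_epochs=25,
)
```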
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 442 | 0.0621 | 0.9870 |
| 0.069 | 2.0 | 884 | 0.0490 | 0.9921 |
| 0.0133 | 3.0 | 1326 | 0.0429 | 0.9932 |
| 0.0045 | 4.0 | 1768 | 0.0585 | 0.9926 |
| 0.0011 | 5.0 | 2210 | 0.0591 | 0.9926 |
| 0.0015 | 6.0 | 2652 | 0.0644 | 0.9926 |
| 0.0001 | 7.0 | 3094 | 0.0665 | 0.9926 |
| 0.0001 | 8.0 | 3536 | 0.0695 | 0.9921 |
| 0.0001 | 9.0 | 3978 | 0.0807 | 0.9915 |
| 0.0001 | 10.0 | 4420 | 0.0602 | 0.9938 |
| 0.0047 | 11.0 | 4862 | 0.0831 | 0.9915 |
| 0.0013 | 12.0 | 5304 | 0.0719 | 0.9921 |
| 0.0028 | 13.0 | 5746 | 0.0592 | 0.9932 |
| 0.0002 | 14.0 | 6188 | 0.0622 | 0.9938 |
| 0.0 | 15.0 | 6630 | 0.0635 | 0.9943 |
| 0.0 | 16.0 | 7072 | 0.0646 | 0.9943 |
| 0.0 | 17.0 | 7514 | 0.0655 | 0.9943 |
| 0.0 | 18.0 | 7956 | 0.0664 | 0.9943 |
| 0.0 | 19.0 | 8398 | 0.0615 | 0.9943 |
| 0.0017 | 20.0 | 8840 | 0.0625 | 0.9943 |
| 0.0 | 21.0 | 9282 | 0.0634 | 0.9943 |
| 0.0 | 22.0 | 9724 | 0.0640 | 0.9943 |
| 0.0 | 23.0 | 10166 | 0.0653 | 0.9943 |
| 0.0 | 24.0 | 10608 | 0.0656 | 0.9943 |
| 0.0 | 25.0 | 11050 | 0.0658 | 0.9943 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Alicia22/23SAT_KY10_l11
|
Alicia22
| 2025-09-23T05:11:47Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T05:05:25Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
GANGodfather/Affine-5HMkezj9CE9X1LpCybtCfvRUNYP1e3x8PBnG4WF1yBr64n8N
|
GANGodfather
| 2025-09-23T05:07:56Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"8-bit",
"mxfp4",
"region:us"
] | null | 2025-09-22T13:23:01Z |
GANGodfather/Affine-5HMkezj9CE9X1LpCybtCfvRUNYP1e3x8PBnG4WF1yBr64n8N
|
GANGodfather/Affine-PESR01
|
GANGodfather
| 2025-09-23T05:07:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"conversational",
"arxiv:2508.10925",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"mxfp4",
"region:us"
] |
text-generation
| 2025-09-22T12:55:47Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format, as they will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to set up your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-120b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-120b
ollama pull gpt-oss:120b
ollama run gpt-oss:120b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
#### LM Studio
If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.
```bash
# gpt-oss-120b
lms get openai/gpt-oss-120b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI:
```shell
# gpt-oss-120b
huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
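For example, following the Transformers pipeline pattern from above (illustrative only):
```py
from transformers import pipeline

pipe = pipeline("text-generation", model="openai/gpt-oss-120b", torch_dtype="auto", device_map="auto")
messages = [
    {"role": "system", "content": "Reasoning: high"},  # low / medium / high
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
print(pipe(messages, max_new_tokens=256)[0]["generated_text"][-1])
```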
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware.
# Citation
```bibtex
@misc{openai2025gptoss120bgptoss20bmodel,
title={gpt-oss-120b & gpt-oss-20b Model Card},
author={OpenAI},
year={2025},
eprint={2508.10925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.10925},
}
```
|
thewh1teagle/whisper-heb-ipa-large-v3-turbo-ct2
|
thewh1teagle
| 2025-09-23T05:05:03Z | 0 | 1 | null |
[
"he",
"region:us"
] | null | 2025-09-23T04:56:48Z |
---
language:
- he
---
CTranslate2 version of fine-tuned Whisper small that transcribes Hebrew into IPA.
For training and inference code, see https://github.com/thewh1teagle/whisper-heb-ipa
For the original model weights, see https://huggingface.co/thewh1teagle/whisper-heb-ipa
This model is part of the Phonikud project. See https://phonikud.github.io
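A minimal usage sketch, assuming the `faster-whisper` runtime (a common way to run CTranslate2 Whisper checkpoints); the audio path is illustrative:
```python
from faster_whisper import WhisperModel

# Load the CTranslate2 checkpoint directly from the Hub.
model = WhisperModel("thewh1teagle/whisper-heb-ipa-large-v3-turbo-ct2")
segments, _info = model.transcribe("hebrew_sample.wav", language="he")
for segment in segments:
    print(segment.text)  # IPA transcription
```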
|
mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF
|
mradermacher
| 2025-09-23T05:00:09Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"mixture-of-experts",
"compression",
"top-k-reduction",
"qwen3",
"30b",
"en",
"base_model:kyne0127/Qwen3-30B-A3B-TopK4-Compressed",
"base_model:quantized:kyne0127/Qwen3-30B-A3B-TopK4-Compressed",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T20:25:53Z |
---
base_model: kyne0127/Qwen3-30B-A3B-TopK4-Compressed
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- moe
- mixture-of-experts
- compression
- top-k-reduction
- qwen3
- 30b
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/kyne0127/Qwen3-30B-A3B-TopK4-Compressed
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
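As a concrete, hedged example with llama.cpp's `llama-cli` (any quant file from the table below works; the prompt is illustrative):
```bash
llama-cli -m Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_K_M.gguf \
  -p "Write a haiku about compression." -n 128
```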
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.imatrix.gguf) | imatrix | 0.2 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ1_S.gguf) | i1-IQ1_S | 6.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ1_M.gguf) | i1-IQ1_M | 7.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ2_S.gguf) | i1-IQ2_S | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ2_M.gguf) | i1-IQ2_M | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q2_K_S.gguf) | i1-Q2_K_S | 10.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q2_K.gguf) | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ3_S.gguf) | i1-IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ3_M.gguf) | i1-IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q3_K_M.gguf) | i1-Q3_K_M | 14.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q3_K_L.gguf) | i1-Q3_K_L | 16.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_0.gguf) | i1-Q4_0 | 17.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_K_M.gguf) | i1-Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_1.gguf) | i1-Q4_1 | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q5_K_S.gguf) | i1-Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q5_K_M.gguf) | i1-Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q6_K.gguf) | i1-Q6_K | 25.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
caphe/paa15
|
caphe
| 2025-09-23T05:00:00Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T04:57:38Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
reaperdoesntknow/MoA-150M
|
reaperdoesntknow
| 2025-09-23T04:51:44Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"moa_metric",
"trl",
"sft",
"text-generation",
"conversational",
"en",
"dataset:WeMake/Intelligent-Content-Understanding",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:HuggingFaceH4/MATH-500",
"dataset:zai-org/LongWriter-6k",
"dataset:m-a-p/DeepWriting-20K",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T07:31:06Z |
---
library_name: transformers
license: apache-2.0
datasets:
- WeMake/Intelligent-Content-Understanding
- QingyiSi/Alpaca-CoT
- HuggingFaceH4/MATH-500
- zai-org/LongWriter-6k
- m-a-p/DeepWriting-20K
language:
- en
pipeline_tag: text-generation
tags:
- trl
- sft
---
# MoA-Metric-LM-150M (Convergent)
A compact-but-capable ≈150M parameter causal LM that replaces dot-product attention with metric-native attention and augments sequence geometry with BlackHoleRoPE (a learnable, stable RoPE variant). Designed to train and run on modest hardware (CPU-first friendly) while staying fully compatible with 🤗 Transformers.
---
# Why this model?
• Distance scores, not dot products. Heads score with L2, cosine, or diag-Mahalanobis distances. This gives direct control over geometry, often stabilizes training, and can be more sample-efficient.
• BlackHoleRoPE positional encoding.
• Q/K: pure unit-modulus rotation (unitary → numerically stable).
• V: bounded-energy gating (Penrose-inspired), optionally modulated by a discrepancy signal.
• Parameters synthesized from a tiny Fourier basis → extrapolable and cache-friendly, with low memory.
• MoA (Mixture-of-Architectures) block. Token-wise router softly blends four heads per block:
1. LocalConv (depthwise token-local conv)
2. MetricMHAttention (multi-head metric attention)
3. ChannelMix (MLP)
4. MetricMQA (multi-query, shared K/V)
• Triangle-Inequality (TI) regularizer. Keeps metric heads honest by penalizing violations over random triples.
• Runs on CPUs. Implemented to behave well in FP32 on AVX2/AVX-512 machines.
⸻
## Model at a glance
| Property | Value |
|:---------|:------|
| Parameters | ~150M (exact count depends on vocab; see `config.json`) |
| Layers | 12–24 depending on variant (MoA blocks) |
| Hidden size | ≥ 1024 in the 400M variant (head dim divisible by #heads) |
| Attention | Metric-native (L2 / cosine / diag-Mahalanobis), plus MetricMQA |
| Positional | BlackHoleRoPE per head (`rope_global` for MH-Attn, `rope_mqa` for MQA) |
| Router | Token-wise soft mixture across the four heads (+ optional bias gate) |
| FFN | HyperFFN = SwiGLU MLP + SepConv1d + low-rank path (router-mixed) |
| Context | Trained primarily at 512–1024 tokens; config allows up to 2048 |
| Precision | Training FP32 (CPU-friendly); inference FP32/BF16/FP16 supported |
| License | Apache-2.0 |
Note on context: training emphasized 512–1024; BlackHoleRoPE is extrapolable, but throughput and quality beyond training lengths depend on your hardware and data.
⸻
## Intended use & limitations
**Intended:** compact assistants, long-context reading/QA, math-style step reasoning, research on distance-based attention and geometric inductive biases.
**Not intended:** safety-critical use, heavy factual QA at web scale, or domains requiring guaranteed accuracy. Evaluate carefully before deployment.
⸻
## Datasets
- WeMake/Intelligent-Content-Understanding ~256k Tokens, [8, 256] [4, 512]
- QingyiSi/Alpaca-CoT ~128K Tokens [2, 1024], [1, 2048] [4, 512]
- HuggingFaceH4/MATH-500 ~256k Tokens, [8, 256] [4, 512]
- zai-org/LongWriter-6k ~128k Tokens [2, 1024] [1, 2048]
- SFT: prithivMLmods/Deepthink-Reasoning [8, 256] ~ Final Loss 0.3200/ Total Tokens 128512.0
Training used modest token budgets (hundreds of thousands). Reported training logs showed healthy loss descent on both 512 and 1024 sequence lengths on CPU runs. Exact metrics will vary with tokenizer, preprocessing, and optimizer settings.
⸻
## Installation
```bash
pip install transformers accelerate sentencepiece
```
## Quick start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

repo = "reaperdoesntknow/MoA-150M"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float32, device_map="cpu"
).eval()

prompt = "Read and answer: If 3x + 2 = 17, what is x?\nReasoning:"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_length=256,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
        pad_token_id=tok.eos_token_id,
    )
print(tok.decode(out[0], skip_special_tokens=True))
```
## Pipeline usage
```python
from transformers import pipeline

repo = "reaperdoesntknow/MoA-400M"
pipe = pipeline("text-generation", model=repo, device_map="cpu")
print(
    pipe(
        "Question: Who wrote 'The Selfish Gene'?\nAnswer:",
        max_length=128,
        do_sample=False,
    )[0]["generated_text"]
)
```
⸻
## Architecture details
### Metric attention (MH)
• Scores:
• L2: -||q-k||² / sqrt(d) (see the sketch after this list)
• Cosine: normalized dot → scaled
• diag-Mahalanobis: per-head diagonal scale on dimensions
• Stability: logits scaled by a learnable α; optional radius-based pruning mask for efficiency.
• Value path: post-attention Up/Down projector (gated) for expressive value mixing.
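A minimal PyTorch sketch of the L2 scoring rule above (shapes and names are illustrative, not the model's actual implementation):
```python
import torch

def l2_attention_scores(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    # q, k: (batch, heads, seq, head_dim) -> scores (batch, heads, q_len, k_len)
    d = q.size(-1)
    # Pairwise squared L2 distance between every query and key position.
    dist2 = (q.unsqueeze(-2) - k.unsqueeze(-3)).pow(2).sum(-1)
    return -dist2 / (d ** 0.5)  # -||q - k||^2 / sqrt(d)
```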
### Metric MQA (shared K/V)
• K and V are shared (single projection) and broadcast; queries remain multi-head. Useful for throughput and memory.
### BlackHoleRoPE
• Q/K rotation only (unit modulus) → preserves norms; avoids value blow-ups.
• V receives bounded-energy amplification (energy_min..energy_max) with optional discrepancy modulation.
• Parameters synthesized from a small Fourier basis; reduces cache size and improves length generalization.
### Routing & gates
• TokenRouter: per-token weights over {LocalConv, MetricMH, ChannelMix, MetricMQA} (see the sketch after this list).
• Feature gates: per-head multiplicative scales in (0, 2) around 1.0.
• Optional router bias adds signed offsets before softmax.
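A hedged sketch of that token-wise soft mixture (layer names and sizes are illustrative):
```python
import torch
import torch.nn as nn

class TokenRouter(nn.Module):
    # Softly blends the outputs of the four paths per token.
    def __init__(self, hidden: int, n_paths: int = 4):
        super().__init__()
        self.gate = nn.Linear(hidden, n_paths)

    def forward(self, x, path_outputs):
        # x: (batch, seq, hidden); path_outputs: n_paths tensors shaped like x
        weights = self.gate(x).softmax(dim=-1)       # (batch, seq, n_paths)
        stacked = torch.stack(path_outputs, dim=-1)  # (batch, seq, hidden, n_paths)
        return (stacked * weights.unsqueeze(-2)).sum(dim=-1)
```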
### Triangle-Inequality regularizer
• Lightweight penalty on random triples to discourage degenerate metric geometry (toy sketch below).
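A toy version of such a penalty, assuming a precomputed pairwise distance matrix (illustrative only):
```python
import torch

def ti_penalty(dist: torch.Tensor, n_triples: int = 256) -> torch.Tensor:
    # dist: (n, n) pairwise distances from a metric head.
    # Penalize triples (i, j, k) violating d(i,k) <= d(i,j) + d(j,k).
    n = dist.size(0)
    i = torch.randint(n, (n_triples,))
    j = torch.randint(n, (n_triples,))
    k = torch.randint(n, (n_triples,))
    return torch.relu(dist[i, k] - dist[i, j] - dist[j, k]).mean()
```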
⸻
## Training recipe (reference)
• Device: CPU (AVX2/AVX-512 recommended).
• Precision: FP32.
• Optimizer: AdamW or Adam (β₁=0.9, β₂=0.95–0.999 work); cosine LR or linear warmup.
• Batch/seq: [batch, seq] = [2–4, 512–1024].
• Regularization: modest dropout in attention/value paths; optional TI penalty.
If you see NaN/Inf during sampling, ensure masks are additive 0/-inf, clamp logits when rows are fully masked, and set a pad_token_id in .generate().
⸻
## Evaluation notes
The model targets behavioral quality per FLOP rather than leaderboard chasing. On held-out long-context QA and small math checks, it shows:
• Robust token-to-token coherence at 512–1024.
• Stable generation on CPU with FP32.
• Competitive loss trends versus dot-product baselines trained under the same compute.
Please share issues/benchmarks via the repo so results can be tracked.
⸻
## How to fine-tune
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer
from datasets import load_dataset
repo = "reaperdoesntknow/MoA-150M"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
ds = load_dataset("yzhuang/Agentic-Long-Context-Understanding-QA", split="train[:2%]")
def tok_fn(ex):
x = tok(
ex["question"] + "\n" + ex["context"] + "\nAnswer:",
truncation=True,
max_length=512,
)
x["labels"] = x["input_ids"].copy()
return x
tds = ds.map(tok_fn, remove_columns=ds.column_names)
args = TrainingArguments(
output_dir="./moa400m-finetune",
per_device_train_batch_size=2,
gradient_accumulation_steps=1,
num_train_epochs=1,
learning_rate=5e-4,
weight_decay=0.0,
warmup_steps=100,
logging_steps=10,
save_steps=200,
fp16=False,
bf16=False,
)
trainer = Trainer(model=model, args=args, train_dataset=tds)
trainer.train()
```
## Known behaviors / tips
• Context > 1024: works, but CPU throughput drops; BlackHoleRoPE helps stability, not throughput.
• Sampling: always pass pad_token_id (often eos_token_id) to .generate(); avoid temperature > 1.2 on small models.
• KV cache: supported; for CPU you may prefer smaller beams and greedy/small-temperature sampling.
---
## Safety & responsibility
This is a research model. It was trained on public datasets and may produce incorrect or biased content. Do not rely on it for advice or sensitive decisions.
---
## Citation
```bibtex
@software{moa_metric_lm_400m,
  title  = {MoA-Metric-LM-400M: Distance-based attention with BlackHoleRoPE},
  author = {reaperdoesntknow},
  year   = {2025},
  url    = {https://huggingface.co/reaperdoesntknow/MoA-400M}
}
```
---
## Acknowledgements
Built with 🤗 Transformers and a metric-first rethinking of attention. BlackHoleRoPE draws inspiration from symplectic/rotational encodings and bounded-energy dynamics.
|