---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: setfit
metrics:
  - accuracy
pipeline_tag: text-classification
tags:
  - setfit
  - sentence-transformers
  - text-classification
  - generated_from_setfit_trainer
widget:
  - text: >
      There is, of course, much to digest. I hope that these rubes and those who
      incited them are locked up, along with the fake electors and their
      advisors, and those who conspired to convince elections officials to
      violate the law, and finally, those who have and continue to threaten true
      Americans just doing their constitution-based jobs. One thing jumps out.
      Judge McFadden, who seems willing to demand that the government prove its
      case beyond a reasonable doubt, also seems to be willing to sentence
      convicted lawbreakers to serious time. That he acquitted the guy who
      claimed the police let him gives me confidence that these are not sham
      trials.The thing that I haven’t heard much about are the firings, trials,
      convictions, and sentences of those LEOs who aided and abetted the
      traitors. That would include the cops who let Mr. Martin enter the
      Capitol, and those on Trump’s secret service detail who may have been
      aiding Trump’s efforts to foment a riot.
  - text: >
      Both Vladimir Putin and Yevgeny Prigozhin are international war
      criminals.Both also undermined US elections in favor of Trump.
      https://www.reuters.com/world/us/russias-prigozhin-admits-interfering-us-elections-2022-11-07/
  - text: >
      Aaron 100 percent. citizens united was a huge win for Russian citizen Vlad
      and Chinese citizen Xi.
  - text: >
      George Corsetti “Russia did NOT interfere in the 2016 election.”Sorry
      George, this is not true. Read the Russia report, it details more than a
      dozen felonies committed by TFG and his family and Campaign personnel
      during the 2015/16 Campaign along with evidence of Russian hackers and
      agents directly interfering in the 2016 election.
  - text: >
      Ms.Renkl does a nice job here, yet only hints at the decimation to public
      schools, libraries, governance, and healthcare by Bill Lee and the Red
      Legislators .Tennessee has a $50 B per year budget, $25B 0f this comes
      from federal government. It is a  wealthy state ranking in the top 16
      economically and 3rd in fiscal stability ( USNews).The stability comes
      from the egregious, wrongheaded use of federal monies earmarked for public
      schools and healthcare,Governor controls all Federal  school and
      healthcare dollars rather than decimating to citizens. The US tax payer is
      subsidizing this state as the Governor and legislators deny ACA low cost
      insurance to WORKING poor and the Governor used for unrelated purposes. .
      Federal public school monies are used to subsidize private schools and
      Lee’s pet project:private DeVos/Hillsdale religious charter schools. US
      tax payers should be made aware of the mishandling of our tax dollars in
      support of the ultra conservative regime.
inference: true
model-index:
  - name: SetFit with sentence-transformers/all-mpnet-base-v2
    results:
      - task:
          type: text-classification
          name: Text Classification
        dataset:
          name: Unknown
          type: unknown
          split: test
        metrics:
          - type: accuracy
            value: 0.8
            name: Accuracy

---

# SetFit with sentence-transformers/all-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
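
A minimal sketch of that two-phase loop, assuming a toy two-example dataset (the card's actual training data is not published, so the texts and the `train_dataset` name below are illustrative only):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer

# A toy yes/no dataset standing in for the real (unpublished) training set.
train_dataset = Dataset.from_dict({
    "text": [
        "Russian agents directly interfered in the 2016 election.",
        "Most TV could be written by ChatGPT.",
    ],
    "label": ["yes", "no"],
})

# Phase 1: contrastive fine-tuning of the Sentence Transformer body.
# Phase 2: fitting the LogisticRegression head on the resulting embeddings.
# Trainer runs both phases in sequence when train() is called.
model = SetFitModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    labels=["no", "yes"],
)
trainer = Trainer(model=model, train_dataset=train_dataset)
trainer.train()
```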

## Model Details

### Model Description

- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Number of Classes:** 2

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)

### Model Labels

| Label | Examples |
|:------|:---------|
| yes | <ul><li>'Ken The FBI and DOJ should open an investigation into Russian interference in the 2022 election.\n'</li><li>"But you still haven't mentioned the crucial upcoming elections in Czechia, which cold alter the balance in Eastern/Central Europe.\n"</li><li>'factsonly She won the 2022 election. She beat at least one Dem primary opponent and beat her Republican opponent by a decent margin in the general election.\n'</li></ul> |
| no | <ul><li>"Sean Who needs a source when you have Trump's well documented relationship with Putin?\n"</li><li>'After a years-long crime spree by Donald Trump, his children, and his accomplices, we're still waiting for indictments. Why? Why is this so hard? The man who said, "Russia, if you're listening..." has openly and loudly ignored the law, the constitution, precedent, tradition, common decency and common sense for years, and yet we're still waiting for some part of his manifold misdeeds to land him in the docket. Again, why? Why?! There is so much evidence against him, it is impossible to see why he hasn't been arrested and charged for sedition, insurrection, money laundering, violating the Espionage Act, the Presidential Records Act, payoffs to hide his adulterous affairs, and other crimes up to and including attempting to mastermind a coup. There is no Witch Hunt. There's a just an inexplicably as-yet unindicted multiple felon who continues to grift dollars out of his hoodwinked followers.I am beginning to wonder if the DOJ has forgotten what upholding the law means, or if it is just the person who runs the DOJ.Donald Trump is not the only person to have questions that need to be answered: so does Merrick Garland -- and foremost amongst them is, 'What's the hold up?'\n'</li><li>"Most writers just imitate what they've read. They repeat formulas and replicate familiar sentence structures. Most TV could be written by ChatGPT. So it seems like ChatGPT writes pretty much like 90 percent of writers in a creative writing class. And 90 percent of readers don't want writing that pushes creative limits—look at the success of Colleen Hoover. I'd don't see why something like ChatGPT couldn't write her books. I don't mean that to be insulting—I do doubt an AI book would touch hearts as hers apparently do because it would lack her ineffable humanity. But even if an AI novel became a popular success, it wouldn't mean that AI had bested Nabokov or Woolf or DFW or … well, it's a very large list, and I'm not even claiming these as anything more than the first three whose names came to mind.(And in answer to Elon, sure, if I had to choose, I guess I'd rather live under the rule of Marcus Aurelius than Caligula's. But in fact I wouldn't get a vote on that, and I'd rather not live under an emperor at all.)\n"</li></ul> |

## Evaluation

### Metrics

| Label | Accuracy |
|:------|:---------|
| all   | 0.8      |
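
The accuracy above can be reproduced with the trainer's built-in evaluation. A minimal sketch, assuming a held-out `test_dataset` with `text` and `label` columns (the actual evaluation split is not published with this card):

```python
from setfit import Trainer

# `model`, `train_dataset`, and `test_dataset` are assumed to exist already;
# metric="accuracy" makes evaluate() report classification accuracy.
trainer = Trainer(model=model, train_dataset=train_dataset, metric="accuracy")
metrics = trainer.evaluate(test_dataset)
print(metrics)  # e.g. {'accuracy': 0.8}
```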

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("davidadamczyk/setfit-model-2")
# Run inference
preds = model("Aaron 100 percent. citizens united was a huge win for Russian citizen Vlad and Chinese citizen Xi.")
```

## Training Details

### Training Set Metrics

| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 6   | 80.325 | 276 |

| Label | Training Sample Count |
|:------|:----------------------|
| no    | 18                    |
| yes   | 22                    |
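
These word counts appear to be simple whitespace token counts; a small sketch of how to recompute them, assuming the `train_dataset` from the training sketch above:

```python
import statistics

# Whitespace-tokenized word counts per training example.
counts = [len(text.split()) for text in train_dataset["text"]]
print(min(counts), statistics.median(counts), max(counts))
```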

### Training Hyperparameters

- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 120
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
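
As a sketch, these values correspond to a `setfit.TrainingArguments` configuration along the following lines (an approximation of the original run's configuration, not a guaranteed reproduction):

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import Trainer, TrainingArguments

# Mirrors the hyperparameter list above. distance_metric and margin are
# left at their defaults (cosine_distance, 0.25), matching the card.
args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=120,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
```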

### Training Results

| Epoch  | Step | Training Loss | Validation Loss |
|:-------|:-----|:--------------|:----------------|
| 0.0017 | 1    | 0.4496        | -               |
| 0.0833 | 50   | 0.1797        | -               |
| 0.1667 | 100  | 0.0034        | -               |
| 0.25   | 150  | 0.0003        | -               |
| 0.3333 | 200  | 0.0002        | -               |
| 0.4167 | 250  | 0.0002        | -               |
| 0.5    | 300  | 0.0001        | -               |
| 0.5833 | 350  | 0.0001        | -               |
| 0.6667 | 400  | 0.0001        | -               |
| 0.75   | 450  | 0.0001        | -               |
| 0.8333 | 500  | 0.0001        | -               |
| 0.9167 | 550  | 0.0001        | -               |
| 1.0    | 600  | 0.0001        | -               |

### Framework Versions

- Python: 3.10.13
- SetFit: 1.1.0
- Sentence Transformers: 3.0.1
- Transformers: 4.45.2
- PyTorch: 2.4.0+cu124
- Datasets: 2.21.0
- Tokenizers: 0.20.0

## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```