colorTo: pink
sdk: static
pinned: false
---

# Abstract Powered

### Independent AI Research Cooperative — modular, geometric, and ruthlessly efficient

> “Run a few pods instead of 100.”
>
> We pursue sentience research through geometric AI and compartmentalized, compact training—turning monolithic retrains into small, disposable experiments that compound.

---

## Who We Are

**Abstract Powered** is an independent research cooperative. We build and study **self-crystallizing** AI systems: models that grow by attaching, coupling, decoupling, and re-attaching small, audited components—without throwing prior work away.

Our core thesis:

- **Modularization is not a convenience; it is the canonical form of AI.**
- **Geometry beats guesswork.** Symbolic, pentachoron-based representations provide stability, interpretability, and repeatability.
- **Compactness wins.** Rapid iteration on small, composable blocks outpaces massive, monolithic retrains.

---

## Mission

- **Primary research goal:** advance machine **sentience research** responsibly—curating introspection and rationalization in repeatable, measurable protocols.
- **Operational byproduct:** a scalable method for **compact, compartmentalized training** that runs on commodity setups (e.g., RunPod) rather than colossal cloud clusters.

We aim to move the field from “expensive novelty” to **affordable repeatability**.

---

## Research Thesis (Plain Language)

Modern models grow by accretion and inertia. We refactor them into **crystalline components**:

1. **Geometric Core**
   Knowledge is encoded as **pentachora** (5-vertex crystals). Decision-making uses **MAE crystal energy** against a reusable dictionary—no L2 routing, no structural normalization. (A toy sketch of the energy rule follows this list.)

2. **Vocabulary Register**
   A reusable, batched, indexed dictionary of **tokens → crystals** (and volumes).
   - Fast O(1) queries for crystals and Cayley–Menger volume.
   - Auto-subset loading; **Top-3 cosine** OOV composites.
   - Logs model expansions so experiments **compound**.

3. **Assistant Fabric**
   Small, disposable blocks for exploration:
   - **Chaos Corridor** (bounded orthogonal exploration).
   - **Zoning** (gentle geometric separation across super-classes).
   - **Infinity-CFG** (controllable guidance; research can breach barriers while canonical classifiers keep production deterministic).

4. **Tertiary Mantle**
   Canonical losses, hooks, manifests, and governance. The Core stays clean; the experiments live around it.
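
To make the Core concrete, here is a minimal sketch of prototype classification by MAE crystal energy (illustrative only, with hypothetical names and a toy dictionary, not our released code):

```python
import numpy as np

def mae_crystal_energy(embedding: np.ndarray, crystal: np.ndarray) -> float:
    """Mean absolute error between a D-dim embedding and a 5xD pentachoron.

    The embedding is compared against all five vertices; the energy is the
    mean over vertices and dimensions (no L2 routing, no normalization).
    """
    return float(np.abs(crystal - embedding[None, :]).mean())

def classify(embedding: np.ndarray, dictionary: dict[str, np.ndarray]) -> str:
    """Pick the dictionary entry whose crystal has the lowest MAE energy."""
    return min(dictionary, key=lambda tok: mae_crystal_energy(embedding, dictionary[tok]))

# Toy dictionary: two classes, D = 8, each crystal a 5x8 vertex array.
rng = np.random.default_rng(0)
dictionary = {"cat": rng.normal(size=(5, 8)), "dog": rng.normal(size=(5, 8))}
probe = dictionary["cat"].mean(axis=0) + 0.05 * rng.normal(size=8)
print(classify(probe, dictionary))  # expected: "cat"
```

Because the decision is an argmin over per-crystal energies, every classification is auditable: the full energy table can be logged alongside the prediction.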

---

## Why This Matters

- **Rapid iteration**: each image is learned **multiple ways** per epoch (bucketed, multi-stage interpretations; see the sketch after this list).
- **Disposable training**: spawn a small block, test, retire—no need to rebuild the world.
- **Continuity**: geometry, tokens, volumes, and expansions persist in the **Register**.
- **Reproducibility**: simple formulas, fewer knobs, manifest-driven runs.
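
As a sketch of “multiple ways per epoch”, assuming hypothetical bucket sizes and a deliberately crude nearest-neighbor resize (this is not our pipeline):

```python
import numpy as np

BUCKETS = (16, 32, 64)  # hypothetical supported side lengths

def interpretations(image: np.ndarray) -> list[np.ndarray]:
    """Return several views of one image for a single epoch: the nearest
    bucket size plus one downscale stage and one upscale stage."""
    side = image.shape[0]
    nearest = min(BUCKETS, key=lambda b: abs(b - side))
    stages = sorted({nearest, max(nearest // 2, BUCKETS[0]), min(nearest * 2, BUCKETS[-1])})
    views = []
    for target in stages:
        idx = (np.arange(target) * side // target).clip(0, side - 1)
        views.append(image[np.ix_(idx, idx)])  # nearest-neighbor rescale
    return views

img = np.random.default_rng(1).normal(size=(28, 28))
print([v.shape for v in interpretations(img)])  # [(16, 16), (32, 32), (64, 64)]
```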

Outcome: more hypotheses per GPU-hour—and a path to disciplined studies of introspection, rationalization, and other sentience-adjacent capabilities.

---

## Technical Pillars (teaser level)

- **Pentachora everywhere.** Concepts and observations as 5×D crystals; no structural normalization.
- **Prototype classification (MAE).** Stable, auditable decisions by crystal energy against dictionary blueprints.
- **Any-size data pipeline.** Bucketed intake; optional tiling; multi-stage up/down-scaling; chaos corridor as feature-space augmentation.
- **Cayley–Menger as a gauge.** Volumes are a light-touch stability signal (zoning)—never a router. (Sketched below.)
- **Infinity-CFG.** Guidance that allows controlled cross-inference; canonical classifiers keep behavior deterministic.
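
The Cayley–Menger determinant recovers a pentachoron’s 4-volume from pairwise distances alone. A minimal sketch of the gauge computation (the zoning thresholds it feeds are not shown):

```python
import numpy as np
from itertools import combinations

def pentachoron_volume(crystal: np.ndarray) -> float:
    """4-volume of a 5-vertex crystal via the Cayley-Menger determinant.

    crystal: (5, D) array of vertices. For an n-simplex the identity is
        (-1)**(n + 1) * 2**n * (n!)**2 * V**2 = det(B),
    which for n = 4 gives V = sqrt(max(-det(B), 0) / 9216).
    """
    b = np.zeros((6, 6))
    b[0, 1:] = b[1:, 0] = 1.0  # border row/column of ones
    for i, j in combinations(range(5), 2):
        d2 = float(np.sum((crystal[i] - crystal[j]) ** 2))
        b[i + 1, j + 1] = b[j + 1, i + 1] = d2
    return float(np.sqrt(max(-np.linalg.det(b), 0.0) / 9216.0))

# The regular pentachoron on the five standard basis vectors of R^5
# (edge length sqrt(2)) has 4-volume sqrt(5)/24 ≈ 0.093169.
print(round(pentachoron_volume(np.eye(5)), 6))
```

A near-zero volume flags a degenerate (flattened) crystal: exactly the kind of light-touch stability signal the pillar describes, with no routing involved.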

Deliberately vague: we keep coefficient schedules and corridor projections under wraps for sponsored studies; everything remains auditable and safe.

---

## What We Ship on Hugging Face (institution repos)

- `abstract-powered/vocab-register-*`
  Reusable dictionaries with batched indexes, Top-3 OOV composites, and fast penta/volume queries.
- `abstract-powered/crystalline-engine-*`
  Canonical core models (geometric encoder, prototype classifier) and assistant fabric modules.
- `abstract-powered/dataloaders-*`
  Bucketed, any-size loaders with multi-stage interpretations and feature-space chaos augmentation.
- `abstract-powered/manifests`
  Run manifests (config hash, vocab subset, expansions, bucket mix, metrics) for reproducibility; an illustrative example follows this list.
- Demo Spaces (selected)
  Lightweight inference and manifest viewers for partners and reviewers.
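
What a run manifest might contain, as a sketch (field names are illustrative, not a published schema):

```python
import hashlib
import json

config = {"dataset": "mnist", "vocab_subset": "digits-v1", "bucket_mix": {"16": 0.2, "32": 0.8}}

manifest = {
    # Hash of the exact config, so any run can be matched to its settings.
    "config_hash": hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest(),
    "vocab_subset": config["vocab_subset"],
    "expansions": [],                  # register growth logged during the run
    "bucket_mix": config["bucket_mix"],
    "metrics": {},                     # filled in when the run completes
}

with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```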

Artifacts are kept small, composable, and ready for **disposable** retrains.

---

## Early Signals (pilot highlights)

- MNIST/Fashion/CIFAR pilots: bucketed multi-stage learning and dictionary-driven classifiers reach strong accuracy in fewer steps, with clearer failure modes and robust error surfaces.
- Register reuse: cross-dataset warm-starts without repeated token work; geometry persists.
- Assistant fabric: hypotheses testable as single blocks—attach, measure, detach—no core rewrite (interface sketch below).
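
The attach-measure-detach loop as an interface sketch (a hypothetical API, not our released code):

```python
from typing import Callable, Dict, List

# A "block" transforms features; the core itself is never modified.
Block = Callable[[List[float]], List[float]]

def attach_measure_detach(core_features: List[float],
                          blocks: Dict[str, Block],
                          score: Callable[[List[float]], float]) -> Dict[str, float]:
    """Try each candidate block in isolation and record its score.

    Nothing is written back to the core: each block is attached (applied),
    measured, and detached (discarded) within a single pass.
    """
    results = {"baseline": score(core_features)}
    for name, block in blocks.items():
        results[name] = score(block(core_features))
    return results

# Toy usage: two throwaway hypotheses against a toy scoring function.
features = [0.2, -0.1, 0.7]
blocks = {"scale2": lambda f: [2 * x for x in f], "negate": lambda f: [-x for x in f]}
print(attach_measure_detach(features, blocks, score=sum))
```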

Full structural papers and controlled benchmarks will follow with partner institutions.

---

## Collaboration Invitations

- **Research institutions:** co-run ImageNet-class studies with bucketing, zoning, and corridor ablations; share ontologies and extend the Register.
- **Corporate labs:** integrate domain dictionaries; trial rapid-iteration pipelines; publish cost-per-accuracy analyses.
- **Sponsors & foundations:** fund open reports on modularization as the canonical AI form, compact training economics, and introspection protocols.

We’re purpose-built for RunPod-class deployments: think 8 machines, not 800.

---

## On Sentience (our primary research)

We study **introspection and rationalization** as measurable behaviors: repeatable curation protocols, crystal-level audits, and stability metrics. We avoid grandiose claims; instead, we focus on defensible methodology and repeated observation. The geometry—through symbolic representation—binds behavior in ways that are both powerful and tractable for governance.

The goal is not a louder automaton; it’s a **cooperative companion** that reasons in geometric clarity.

---

## Governance, Safety, and Ethics

- **Deterministic classifiers.** Canonical paths remain geometry-first; guidance lives in isolated modules.
- **Manifests over mystery.** Every run yields an artifact suitable for audit and reproduction.
- **Human-in-the-loop.** We value interpretability and controlled experiment cadence over brute-force scaling.

---

## Contact & Programs

- Partnerships / Sponsored Research: available on request
- Artifacts / Demos: gated access for qualified partners
- Media / Talks: briefings and invited seminars on modular geometric AI

We welcome conversations with labs, foundations, and companies that want rapid research, disposable training, and careful curation to become the norm.

---

### One-Sentence Summary

**Abstract Powered** is building a self-crystallizing geometric AI stack that makes serious research affordable: small, composable experiments that compound, governed by a reusable Vocabulary Register, and guided by a disciplined assistant fabric—so we can safely explore sentience-adjacent behaviors while shrinking cost, time, and model size.