bweng committed 4a630f0 (verified) · Parent(s): d3a50ce

Update README.md
Files changed (1): README.md +2 -2
README.md CHANGED
@@ -9,8 +9,8 @@ thumbnail: >-
   https://cdn-uploads.huggingface.co/production/uploads/6722a3d5150ed6c830d8f0cd/8Npu9X2Ilmo7-0A5Bg204.png
 ---
 
-We're a team that was trying to build a local-first consumer AI app but quickly came to the realization that the hardware and software haven't caught up yet. Running near-realtime workloads on CPU/GPU for a consumer app just isn't feasible. It's slow and it drains your battery. There are some solutions out there for local AI models on AI accelerators, but most were closed source or partially open, and that was frustrating.
+We're a team that set out to build a local-first consumer AI app across Apple, Windows and Android, but after thousands of users and 6 months, we realized the hardware and software aren't there yet. Running near-realtime workloads on consumer CPUs and GPUs isn't feasible yet; it's too slow and drains battery life.
 
-Instead of waiting for others, we decided to get into this ourselves and share the models + SDKs with everyone. We believe that most inference will be run locally in the near future and want to accelerate that by committing to be fully open source.
+While some solutions exist for running local AI models on AI accelerators, most are closed source or only partially open, which we found frustrating. Rather than wait for others to solve this problem, we decided to tackle it ourselves and share our models and SDKs with everyone.
 
 Join our [discord](https://discord.gg/8FbwRaDFJR) or visit our [Github](https://github.com/FluidInference)