julep-x-kupid

company

AI & ML interests

None defined yet.

julep-x-kupid's activity

diwank posted an update 8 days ago
Excited to announce *Open Responses*, a self-hosted alternative to OpenAI's new _Responses API_ that you can run locally and use with any LLM model or provider, not just OpenAI. What's more, it's also compatible with their agents-sdk, so everything just works out of the box!

To try it out, just run `npx -y open-responses init` (or `uvx`) and that's it! :)
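Once the server is up, any OpenAI-compatible client can talk to it. Here's a minimal sketch using the official Python SDK (the localhost URL, port, and model name below are assumptions for illustration; check the quickstart docs for the values your setup actually uses):

```python
# Minimal sketch: point the standard OpenAI client at a self-hosted
# Open Responses server. The base_url below is hypothetical; use whatever
# endpoint your `open-responses` setup exposes (see the quickstart docs).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # hypothetical local endpoint
    api_key="not-needed-locally",         # self-hosted, so a real key may be unnecessary
)

resp = client.responses.create(
    model="gpt-4o-mini",  # or any model your configured provider serves
    input="Say hello from a self-hosted Responses API!",
)
print(resp.output_text)
```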

Would love feedback, and support for adding local HF models: @akhaliq @bartowski @prithivMLmods @julien-c @clefourrier @philschmid

We’d love to hear from the Hugging Face community how it integrates with your pipelines (support for Hugging Face models is landing soon!). Let’s push open-source AI forward together!

Docs:
https://docs.julep.ai/responses/quickstart

Repo:
https://github.com/julep-ai/open-responses

agents-sdk:
https://platform.openai.com/docs/guides/agents
diwank posted an update 10 months ago
Just published "CryptGPT: A Simple Approach to Privacy-Preserving Language Models Using the Vigenere Cipher".

https://huggingface.co/blog/diwank/cryptgpt-part1

tl;dr - we trained a GPT-2 tokenizer and pretrained a GPT-2 model from scratch on a dataset encrypted with the Vigenère cipher, and it performs as well as regular GPT-2. Except that in order to use it, you need to know the encryption key.
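For intuition, a Vigenère cipher just shifts each character by a key-dependent amount. A toy version over lowercase letters (illustrative only; the repo's actual preprocessing may differ):

```python
# Toy Vigenère cipher over a-z (illustrative; not the cryptgpt repo's exact code).
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        if not ch.isalpha():
            out.append(ch)  # pass non-letters through unchanged
            continue
        shift = ord(key[i % len(key)]) - ord("a")
        out.append(chr((ord(ch) - ord("a") + sign * shift) % 26 + ord("a")))
    return "".join(out)

cipher = vigenere("attack at dawn", "lemon")
assert vigenere(cipher, "lemon", decrypt=True) == "attack at dawn"
```

Since the model only ever sees ciphertext, prompting it usefully requires encrypting your input with the same key, which is where the privacy-preserving property comes from.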

links:
https://github.com/creatorrr/cryptgpt
diwank/cryptgpt
diwank/cryptgpt-large
diwank posted an update 11 months ago
Really excited to read about Kolmogorov-Arnold Networks as a novel alternative to Multi-Layer Perceptrons.

Excerpt:
> Kolmogorov-Arnold Networks (KANs) are promising alternatives of Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the universal approximation theorem, while KANs are based on Kolmogorov-Arnold representation theorem. KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability.
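For context, the Kolmogorov-Arnold representation theorem the excerpt refers to says that any continuous function of n variables on the unit cube can be written as a finite composition of sums of univariate functions:

$$
f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
$$

KANs essentially make the inner functions φ and the outer functions Φ learnable, which is why the activations sit on edges rather than nodes.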

https://github.com/KindXiaoming/pykan
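To make "activation functions on edges" concrete, here is a toy layer where each input-output edge gets its own learnable 1-D function built from a small fixed set of Gaussian bumps. This is a sketch of the idea only; pykan's actual implementation uses splines and differs in detail:

```python
# Toy KAN-style layer (illustrative sketch, not pykan's implementation):
# each edge (input i -> output j) carries its own learnable 1-D function
# phi_ij, parameterized here as a weighted sum of fixed Gaussian bumps.
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, n_basis: int = 8):
        super().__init__()
        # one coefficient vector per edge (i -> j)
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))
        # fixed bump centers spread over the expected input range
        self.register_buffer("centers", torch.linspace(-1.0, 1.0, n_basis))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, in_dim)
        d = x.unsqueeze(-1) - self.centers        # (batch, in_dim, n_basis)
        basis = torch.exp(-(4.0 * d) ** 2)        # evaluate bumps per scalar input
        # phi_ij(x_i) = sum_k coef[j, i, k] * basis_k(x_i), then sum over i per node j
        return torch.einsum("bik,jik->bj", basis, self.coef)

model = nn.Sequential(ToyKANLayer(2, 5), ToyKANLayer(5, 1))
x = 2 * torch.rand(4, 2) - 1                      # inputs in [-1, 1]
print(model(x).shape)                             # torch.Size([4, 1])
```

Contrast with an MLP, where the edges carry plain scalar weights and a single fixed nonlinearity is applied at each node.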