DeepGHS

community
Verified

AI & ML interests

Computer Vision Technology and Data Collection for Anime Waifu

Recent Activity

narugo updated a model 43 minutes ago
deepghs/yolo-face
narugo updated a model 43 minutes ago
deepghs/yolos

deepghs's activity

not-lain
posted an update 1 day ago
Tonic
posted an update 3 days ago
πŸ™‹πŸ»β€β™‚οΈ Hey there folks ,

our team made a game during the @mistral-game-jam and we're trying to win the community award !

try our game out and drop us a ❀️ like basically to vote for us !

Mistral-AI-Game-Jam/TextToSurvive

hope you like it !
Delta-Vector
posted an update 4 days ago
not-lain
posted an update 14 days ago
we now have more than 2000 public AI models using ModelHubMixinπŸ€—
Tonic
posted an update 15 days ago
πŸ™‹πŸ»β€β™‚οΈ Hey there folks ,

Facebook AI just released JASCO models that make music stems .

you can try it out here : Tonic/audiocraft

hope you like it
Tonic
posted an update 17 days ago
πŸ™‹πŸ»β€β™‚οΈHey there folks , Open LLM Europe just released Lucie 7B-Instruct model , a billingual instruct model trained on open data ! You can check out my unofficial demo here while we wait for the official inference api from the group : Tonic/Lucie-7B hope you like it πŸš€
not-lain
posted an update 19 days ago
Published a new blogpost 📖
In this blogpost I go through the transformer architecture, emphasizing how shapes propagate through each layer.
🔗 https://huggingface.co/blog/not-lain/tensor-dims
some interesting takeaways:
Tonic
posted an update 22 days ago
Microsoft just released Phi-4, check it out here: Tonic/Phi-4

hope you like it :-)
DamarJati
posted an update about 1 month ago
Happy New Year 2025 🤗
To the Hugging Face community.
eienmojiki
posted an update about 2 months ago
πŸ‘€ Introducing 2048 Game API: A RESTful API for the Classic Puzzle Game 🧩

I'm excited to share my latest project, 2048 Game API, a RESTful API that allows you to create, manage, and play games of 2048, a popular puzzle game where players slide numbered tiles to combine them and reach the goal of getting a tile with the value of 2048.

⭐ Features
Create new games with customizable board sizes (3-8)
Make moves (up, down, left, right) and get the updated game state
Get the current game state, including the board, score, and game over status
Delete games
Generate images of the game board with customizable themes (light and dark)

πŸ”— API Endpoints
POST /api/games - Create a new game
GET /api/games/:gameId - Get the current game state
POST /api/games/:gameId/move - Make a move (up, down, left, right)
DELETE /api/games/:gameId - Delete a game
GET /api/games/:gameId/image - Generate an image of the game board

🧩 Example Use Cases
- Create a new game with a 4x4 board:
curl -X POST -H "Content-Type: application/json" -d '{"size": 4}' http://localhost:3000/api/games

- Make a move up:
curl -X POST -H "Content-Type: application/json" -d '{"direction": "up"}' http://localhost:3000/api/games/:gameId/move

- Get the current game state:
curl -X GET http://localhost:3000/api/games/:gameId

πŸ’• Try it out!
- Demo: eienmojiki/2048
- Source: https://github.com/kogakisaki/koga-2048
- You can try out the API by running the server locally or using a tool like Postman to send requests to the API. I hope you enjoy playing 2048 with this API!

Let me know if you have any questions or feedback!

🐧 Mouse1 is our friend🐧
ImranzamanML
posted an update about 2 months ago
Deep understanding of the Concordance Index (C-index) evaluation measure for better models
Let's start with three patient groups:

Group A
Group B
Group C

For each patient, we will predict a risk score (a higher score means a higher risk of an early event).

Step 1: Understanding Concordance Index
The Concordance Index (C-index) evaluates how well the model ranks survival times.

Understand with sample data:
Group A has 3 patients with actual survival times and predicted risk scores:

Patient | Actual Survival Time | Predicted Risk Score
P1      | 5 months             | 0.8
P2      | 3 months             | 0.9
P3      | 10 months            | 0.2
Comparable pairs:

(P1, P2): P2 has a shorter survival time and a higher risk score β†’ Concordant βœ…
(P1, P3): P3 has a longer survival time and a lower risk score β†’ Concordant βœ…
(P2, P3): P3 has a longer survival time and a lower risk score β†’ Concordant βœ…
Total pairs = 3
Total concordant pairs = 3

C-index for Group A = Concordant pairs / Total pairs = 3/3 = 1.0
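The pair counting above can be sketched in a few lines of Python (a minimal illustration of the idea; ties and censoring are ignored here, unlike production implementations such as the one in the lifelines library):

```python
from itertools import combinations

def c_index(survival_times, risk_scores):
    """Fraction of comparable pairs where the patient with the
    shorter survival time also has the higher predicted risk."""
    concordant = 0
    total = 0
    for i, j in combinations(range(len(survival_times)), 2):
        if survival_times[i] == survival_times[j]:
            continue  # tied survival times are not comparable here
        total += 1
        # identify which patient of the pair survived for less time
        shorter, longer = (i, j) if survival_times[i] < survival_times[j] else (j, i)
        if risk_scores[shorter] > risk_scores[longer]:
            concordant += 1
    return concordant / total

# Group A from the table above: P1, P2, P3
times = [5, 3, 10]        # survival in months
risks = [0.8, 0.9, 0.2]   # predicted risk scores
print(c_index(times, risks))  # 1.0: all 3 pairs are concordant
```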

Step 2: Calculate C-index for All Groups
Repeat the process for all groups. For now we can assume:

Group A: C-index = 1.0
Group B: C-index = 0.8
Group C: C-index = 0.6
Step 3: Stratified Concordance Index
The Stratified Concordance Index combines the C-index scores of all groups, focusing on the following:

Average performance across groups (mean of C-indices).
Consistency across groups (low standard deviation of C-indices).
Formula:
Stratified C-index = Mean(C-index scores) - Standard Deviation(C-index scores)

Calculate the mean:
Mean = (1.0 + 0.8 + 0.6) / 3 = 0.8

Calculate the standard deviation:
Standard Deviation = sqrt(((1.0 - 0.8)^2 + (0.8 - 0.8)^2 + (0.6 - 0.8)^2) / 3) ≈ 0.16

Stratified C-index:
Stratified C-index = 0.8 - 0.16 = 0.64
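The same arithmetic as a quick sanity check in Python (using the population standard deviation, which matches the hand calculation):

```python
import math

def stratified_c_index(c_indices):
    """Mean of per-group C-indices minus their population standard deviation."""
    mean = sum(c_indices) / len(c_indices)
    variance = sum((c - mean) ** 2 for c in c_indices) / len(c_indices)
    return mean - math.sqrt(variance)

scores = [1.0, 0.8, 0.6]  # Groups A, B, C
print(round(stratified_c_index(scores), 2))  # 0.64
```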

Step 4: Interpret the Results
A high Stratified C-index means:

The model predicts well overall (high mean C-index).
The model performs consistently across groups (low standard deviation of C-indices).
lunarflu
posted an update about 2 months ago
not-lain
posted an update 3 months ago
ever wondered how you can make an API call to a visual-question-answering model without sending an image URL 👀

you can do that by converting your local image to base64 and sending it to the API.

recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
πŸ”— https://github.com/not-lain/loadimg

API request example πŸ› οΈ:
from loadimg import load_img
from huggingface_hub import InferenceClient

# convert a local image (path, URL, PIL image, or numpy array) to base64
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
Tonic
posted an update 3 months ago
πŸ™‹πŸ»β€β™‚οΈhey there folks,

periodic reminder : if you are experiencing ⚠️500 errors ⚠️ or ⚠️ abnormal spaces behavior on load or launch ⚠️

we have a thread πŸ‘‰πŸ» https://discord.com/channels/879548962464493619/1295847667515129877

if you can record the problem and share it there , or on the forums in your own post , please dont be shy because i'm not sure but i do think it helps πŸ€—πŸ€—πŸ€—
Tonic
posted an update 3 months ago
boomers still pick zenodo.org instead of huggingface??? absolutely clownish nonsense, my random datasets have 30x more downloads and views than front-page zenodos... gonna write a comparison blog, but yeah... cringe.
ImranzamanML
posted an update 3 months ago
Easy steps for an effective RAG pipeline with LLM models!
1. Document Embedding & Indexing
We can start by using embedding models to vectorize documents and store them in vector databases (Elasticsearch, Pinecone, Weaviate) for efficient retrieval.

2. Smart Querying
Then we can generate query embeddings, retrieve the top-K relevant chunks, and apply hybrid search if needed for better precision.

3. Context Management
We can concatenate the retrieved chunks, optimize chunk order, and stay within token limits to preserve response coherence.

4. Prompt Engineering
Then we can instruct the LLM to leverage the retrieved context, using clear instructions to prioritize the provided information.

5. Post-Processing
Finally we can implement response verification and fact-checking, and integrate feedback loops to refine the responses.
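The first four steps can be sketched end to end with a toy retriever. This is only an illustration: bag-of-words cosine similarity stands in for a real embedding model and vector database, and the corpus, query, and prompt wording are all made up:

```python
from collections import Counter
import math

# hypothetical corpus; a real pipeline would embed with a model and
# store the vectors in Elasticsearch, Pinecone, or Weaviate (step 1)
docs = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
]

def embed(text):
    # stand-in for an embedding model: bag-of-words term counts
    return Counter(text.lower().replace(".", "").replace("?", "").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

index = [(doc, embed(doc)) for doc in docs]

# step 2: embed the query and retrieve the top-K chunks
query = "What is the capital of France?"
q_vec = embed(query)
top_k = sorted(index, key=lambda item: cosine(q_vec, item[1]), reverse=True)[:2]

# step 3: concatenate the retrieved chunks into a context window
context = "\n".join(doc for doc, _ in top_k)

# step 4: instruct the LLM to prioritize the provided context
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}"
)
print(prompt)
```

The prompt would then be sent to the LLM, and step 5 (verification, feedback loops) applied to its response.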

Happy to connect :)