Jean Louis
JLouisBiz
AI & ML interests
- LLMs for sales, marketing, and promotion
- LLM for a Website Revision System
- Increasing the quality of communication with customers
- Helping clients access information faster
- Saving people from financial trouble
Recent Activity
reacted to hypothetical's post about 4 hours ago
We have updated our transcription model: https://huggingface.co/TheStageAI/thewhisper-large-v3-turbo
- 6.00 WER on the English Open ASR Leaderboard
- 4.74 WER on the Multilingual Open ASR Leaderboard
- Beats NVIDIA Parakeet (6.34 WER) and Whisper-large-v3-turbo (7.8 WER)
- Strong improvements in Arabic, Hindi, and Chinese
- Maintains quality with background and environmental noise
- Optimized inference engines for NVIDIA and Apple
- Hugging Face Transformers interface for easy use (see the sketch after this list)
- Best-in-class speed on NVIDIA GPUs and power efficiency on Apple devices
- NVIDIA Jetson Thor support
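As a rough illustration of the Transformers interface mentioned in the list, here is a minimal sketch (not taken from the original post) that loads the model through the standard automatic-speech-recognition pipeline; the model ID comes from the URL above, and the audio file path is a placeholder.

```python
# Minimal sketch: transcribe one audio file with the Transformers ASR pipeline.
# The model ID is taken from the post's URL; "sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="TheStageAI/thewhisper-large-v3-turbo",
)

result = asr("sample.wav")   # returns a dict containing the transcribed text
print(result["text"])
```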
replied to Ujjwal-Tyagi's post about 4 hours ago
We are sleepwalking into a crisis. I am deeply concerned about AI model safety right now because, as the community rushes to roll out increasingly powerful open-source models, we are completely neglecting the most critical aspect: safety. It seems that nobody is seriously thinking about the potential consequences of unregulated model outputs or the necessity of robust guardrails. We are essentially planting the seeds for our own destruction if we prioritize raw performance over security.
This negligence is terrifyingly evident when you look at the current landscape. Take Qwen Image 2512, for example; while it delivers undeniably strong performance, it has incredibly weak guardrails that make it dangerous to deploy. In stark contrast, Z Image might not get as much hype for its power, but it has much better safety guardrails than Qwen Image 2512.
It is imperative that the open-source community and developers recognize that capability without responsibility is a liability. We must actively work on protecting these models from bad actors who seek to exploit them for malicious purposes, such as generating disinformation, creating non-consensual imagery, or automating cyberattacks. It is no longer enough to simply release a powerful model; we must build layers of defense that make it resistant to jailbreaking and adversarial attacks. Developers need to prioritize alignment and robust filtering techniques just as much as they prioritize benchmark scores. We cannot hand such potent tools to the world without ensuring they have the safety mechanisms to prevent them from being turned against us.