ginigen-ai PRO

AI & ML interests

None yet

Recent Activity

liked a Space about 12 hours ago
FINAL-Bench/Gemma-4-Multi
reacted to SeaWolf-AI's post with 🔥 about 12 hours ago
💎 Gemma 4 Playground: Dual Model Demo on ZeroGPU

We just launched a Gemma 4 Playground that lets you chat with Google DeepMind's latest open models, directly on Hugging Face Spaces with ZeroGPU.

👉 Try it now: [FINAL-Bench/Gemma-4-Multi](https://huggingface.co/spaces/FINAL-Bench/Gemma-4-Multi)

Two Models, One Space. Switch between both Gemma 4 variants in a single interface:
- ⚡ Gemma 4 26B-A4B: MoE with 128 experts and only 3.8B active params. 95% of the 31B's quality at ~8x faster inference. AIME 88.3%, GPQA 82.3%.
- 🏆 Gemma 4 31B: dense 30.7B, the highest quality in the Gemma 4 family. AIME 89.2%, GPQA 84.3%, Codeforces 2150. Top 3 open model on Arena.

Features
- Vision: upload images for analysis, OCR, chart reading, and document parsing
- Thinking Mode: toggle chain-of-thought reasoning with Gemma 4's native <|channel> thinking tokens
- System Prompts: 6 presets (General, Code, Math, Creative, Translate, Research), or write your own
- Streaming: real-time token-by-token responses via ZeroGPU
- Apache 2.0: fully open, no restrictions

Technical Details
Built with the dev build of transformers (5.5.0.dev0) for full Gemma 4 support, including multimodal apply_chat_template, variable-resolution image processing, and native thinking mode. Runs on HF ZeroGPU with @spaces.GPU, so no dedicated GPU is needed. Both models support a 256K context window and 140+ languages out of the box.

Links
- 🤗 Space: [FINAL-Bench/Gemma-4-Multi](https://huggingface.co/spaces/FINAL-Bench/Gemma-4-Multi)
- 📄 Gemma 4 26B-A4B: [google/gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it)
- 📄 Gemma 4 31B: [google/gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it)
- 🔬 DeepMind Blog: [Gemma 4 Launch](https://deepmind.google/blog/gemma-4-byte-for-byte-the-most-capable-open-models/)
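The streaming setup the post describes follows a common ZeroGPU pattern: load the model once at startup, then decorate the generation handler so a GPU is attached only while it runs. Below is a minimal sketch, not the Space's actual code; the model id and transformers version come from the post, while the handler name, prompt handling, and generation settings are assumptions.

```python
# Minimal ZeroGPU chat sketch (assumptions: model id from the post,
# transformers 5.5.0.dev0 as stated there; the rest is illustrative).
import threading

import gradio as gr
import spaces
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

MODEL_ID = "google/gemma-4-26B-A4B-it"  # the 31B variant is a drop-in swap

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

@spaces.GPU  # ZeroGPU attaches a GPU only for the duration of this call
def chat(message, history, system_prompt):
    # history arrives as [user, assistant] pairs in Gradio's default tuple format
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, bot_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": bot_turn})
    messages.append({"role": "user", "content": message})

    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Token-by-token streaming: generate() runs in a thread while the
    # streamer yields decoded tokens back to the Gradio frontend.
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    thread = threading.Thread(
        target=model.generate,
        kwargs=dict(input_ids=input_ids, streamer=streamer, max_new_tokens=1024),
    )
    thread.start()

    partial = ""
    for token in streamer:
        partial += token
        yield partial  # ChatInterface renders each partial string as it arrives

demo = gr.ChatInterface(
    chat,
    additional_inputs=[gr.Textbox("You are a helpful assistant.", label="System prompt")],
)
demo.launch()
```

The post's six system-prompt presets would map onto that `system_prompt` input, e.g. as a dropdown of prepared strings instead of a free-form textbox.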
reacted to SeaWolf-AI's post with 👍 about 16 hours ago
🧬 Darwin-35B-A3B-Opus: The Child That Surpassed Both Parents

What if a merged model could beat both its parents? We proved it can. Darwin-35B-A3B-Opus is a 35B MoE model (3B active) built with our Darwin V5 engine, the first evolution system that CT-scans parent models before merging them.

🤗 Model: https://huggingface.co/FINAL-Bench/Darwin-35B-A3B-Opus

The result speaks for itself: GPQA Diamond 90.0%, versus Father (Qwen3.5-35B-A3B) at 84.2% and Mother (Claude 4.6 Opus Distilled) at 85.0%. That is a relative gain of 6.9% over Father and 5.9% over Mother (5.8 and 5.0 points, respectively). Not a tradeoff; a genuine leap. Meanwhile, MMMLU sits at 85.0% (Father: 85.2%), multimodal capability is fully intact, and all 201 languages are preserved.

How? Model MRI changed everything.

Traditional merging is guesswork. Darwin V4 added evolution; Darwin V5 added X-ray vision. Model MRI scans each parent layer by layer and discovers:
- Mother's L34–L38 block is the reasoning engine (peak cosine distance between the parents),
- 50–65% of Mother's experts are dead (killed by text-only distillation),
- Father is a healthy generalist with every expert alive.

The prescription: transplant Mother's reasoning brain at L38 (90% weight), replace her dead experts with Father's living ones, and let Father's router handle the output layer. Reasoning went up, versatility stayed intact. No tradeoff, just evolution.

35B total, 3B active (MoE) · GPQA Diamond 90.0% · MMMLU 85.0% (201 languages) · multimodal image & video · 262K native context · 147.8 tok/s on H100 · runs on a single RTX 4090 (Q4) · Apache 2.0

Darwin V5's full algorithm and technical details will be released alongside an upcoming paper.

🚀 Live Demo: https://huggingface.co/spaces/FINAL-Bench/Darwin-35B-A3B-Opus
🏆 FINAL Bench Leaderboard: https://huggingface.co/spaces/FINAL-Bench/Leaderboard
📊 ALL Bench Leaderboard: https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard

Built by VIDRAFT · Supported by the Korean Government GPU Support Program
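The scan-and-transplant recipe in the post above can be illustrated with a short sketch. Darwin V5's actual algorithm is unpublished until the paper, so everything here is an assumption: per-layer cosine distance as the "MRI" signal, a weighted layer transplant matching the post's 90% figure, and hypothetical state-dict key patterns and function names (matching key names and tensor shapes between parents are assumed).

```python
# Hypothetical "Model MRI" sketch: score layer divergence between two parents,
# then blend donor layers into a child. Illustrative only; not Darwin V5's code.
import torch
import torch.nn.functional as F

def layer_cosine_distance(state_a, state_b, layer_idx, prefix="model.layers"):
    """Mean cosine distance over all weight tensors in one transformer layer."""
    pat = f"{prefix}.{layer_idx}."
    dists = []
    for name, wa in state_a.items():
        if name.startswith(pat) and name in state_b:
            wb = state_b[name]
            d = 1.0 - F.cosine_similarity(
                wa.flatten().float(), wb.flatten().float(), dim=0
            )
            dists.append(d.item())
    return sum(dists) / max(len(dists), 1)

def mri_scan(state_a, state_b, num_layers):
    """Scan every layer; peaks mark where the parents diverge most."""
    return [layer_cosine_distance(state_a, state_b, i) for i in range(num_layers)]

def transplant(child, donor, layer_ids, alpha=0.9, prefix="model.layers"):
    """Blend donor layers into the child at weight alpha (the post cites 90%)."""
    for i in layer_ids:
        pat = f"{prefix}.{i}."
        for name, w in child.items():
            if name.startswith(pat) and name in donor:
                child[name] = alpha * donor[name] + (1 - alpha) * w
    return child
```

In this reading, peaks in the `mri_scan` profile would mark the divergent block the post attributes to Mother (its L34–L38), which `transplant` then blends in at `alpha=0.9`; expert replacement and router handoff would need MoE-aware key handling beyond this sketch.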
Organizations

None yet