
US attorney general defends press subpoenas over classified leaks

Acting US Attorney General Todd Blanche on Tuesday defended issuing subpoenas to journalists as part of investigations into leaks of classified information. Blanche did not name any outlet in a post on X, but his remarks came a day after The Wall Street Journal reported receiving grand jury subpoenas tied to Iran war coverage.

Mike Novogratz’s Galaxy and Sharplink Launch $125M Ethereum-Powered DeFi Yield Fund

Mike Novogratz’s digital asset firm Galaxy Digital and ETH treasury company Sharplink announced a non-binding memorandum of understanding to form the Galaxy Sharplink Onchain Yield Fund.

This new private investment vehicle will focus on DeFi liquidity protocols and other on-chain yield-generating strategies.

$125M Institutional Yield Fund

According to the official press release, Galaxy will act as the fund’s investment manager. The fund is expected to launch in the coming weeks with total commitments of $125 million. This includes $100 million from Sharplink’s staked Ethereum treasury and $25 million from Galaxy.

The strategy will focus on identifying high-yield opportunities across blockchain-based financial markets by allocating capital to selected on-chain applications. The structure is intended to allow Sharplink to maintain its Ethereum exposure while also generating returns from actively managed on-chain strategies.

Galaxy revealed that protocol selection, exposure sizing, and ongoing monitoring will be handled under its institutional research and risk management framework, which is also used across its lending, trading, and asset management operations. The company added that it has been deploying hundreds of millions of dollars into on-chain strategies since 2020 and is among the largest publicly traded firms actively allocating capital to decentralized finance and other blockchain-based investment opportunities.

Novogratz, Founder and CEO of Galaxy, stated,

“Institutional capital is moving onchain, and the infrastructure to support it has matured to a point where allocators can access yield, liquidity, and risk management with the same rigor they expect in traditional markets. Sharplink has built one of the most significant Ethereum treasuries among public companies, and we’re proud to partner with them to put that capital to work in a strategy designed to compound their core position.”

Meanwhile, Matthew Sheffield, Sharplink’s Chief Investment Officer, said that the latest move is an “extension of its treasury strategy into more active strategies.”

Q1 Financial Results

Sharplink currently ranks as the second-largest Ethereum treasury company, holding roughly 868,700 ETH, behind Bitmine, which holds about 5.21 million ETH. Alongside the fund announcement, it also reported a major jump in revenue to $12.1 million in Q1 2026 from just $0.7 million a year earlier, mainly due to its Ethereum treasury strategy. However, the company also posted a large net loss of $685.6 million, mostly because falling ETH prices created unrealized accounting losses and impairment charges on its holdings.

Sharplink said these were paper losses under accounting rules and did not mean it actually sold ETH at a loss or reduced its Ethereum holdings.

The post Mike Novogratz’s Galaxy and Sharplink Launch $125M Ethereum-Powered DeFi Yield Fund appeared first on CryptoPotato.

Threads finally gets a logo worthy of its ambitions

When Threads launched in 2023, it was almost entirely defined in relation to other platforms: It was an offshoot of Instagram, an alternative to Twitter, and a competitor to Bluesky. Three years later, the platform is finally ready to strike out on its own, starting with a few subtle but meaningful changes to its brand identity. 

This week, Threads quietly debuted a refreshed logo and wordmark, which officially rolled out to users on May 11. After some eagle-eyed fans noticed the small changes, Threads’ head of design Christopher Clare posted an explanation to the platform: “It’s been almost 3 years since Threads launched—essentially as a side project of Instagram—so we were due for an update that better reflects the brand and where it’s headed: a new, standalone era,” he wrote. 

[Image: Meta]

When Threads first joined the internet ecosystem, it made sense for the platform’s logo and wordmark to echo Instagram’s design. The look leveraged users’ familiarity with Instagram to boost sign-ups, which require an existing Instagram account. In the long term, though, it set Threads up with a kind of younger sibling identity that lived under Instagram’s shadow rather than outside it. 

The updated look is not a design revelation—but it is a signal that Meta Platforms (Threads’ parent company) thinks Threads is ready to establish a brand name of its own.

Threads’ moment of clarity

Threads launched on July 6, 2023, in the midst of a user firestorm over a slew of unpopular changes made to X (then Twitter) by its new owner Elon Musk. The fortuitous timing saw immediate results: The platform notched a record-breaking 100 million sign-ups in its first few days. At the time, Meta CEO Mark Zuckerberg wrote that his moonshot goal for the platform was an eventual one billion users.

After the initial frenzy of Musk-hate-fueled downloads, Threads sign-ups cooled off a little. On its first birthday, the platform had 175 million monthly active users. As of August 2025, though, that number had jumped up to 400 million. Meta is clearly investing in the platform’s development, testing new features like Snapchat-esque “ghost-posts” (introduced in October) and an algorithm adjuster called “Dear Algo” (introduced in February). 

[Image: Meta]

In just a few years, Threads has managed to cultivate its own audience and carve out a unique niche for itself. And, according to Clare, it was time that the platform’s look matched its size.

“Instagram was the on-ramp,” Clare says. “But as Threads has grown and developed its own community and product identity, the visual connection started working against us. Users weren’t always distinguishing Threads from Instagram content, and the brand wasn’t doing enough to communicate what Threads is for—public conversation. The refresh is a clarity move: making Threads instantly recognizable on its own, wherever it shows up.”

Designing for online dialogue

The changes to Threads’ look center around one key goal: excising a bit of the “Instagram” out of Threads.

The original Threads wordmark, Clare says, used a similar “weight, geometry, and upright posture to Instagram’s logotype—round, neutral, clean.” For this update, his team gave the wordmark an italic forward lean and reworked its angled terminals, giving them a chiseled effect that makes the whole word look like it’s zooming forward.

The previous Threads logo (left) and the updated version (right) [Images: Meta]

Meanwhile, the logo has undergone a more obvious treatment in collaboration with the design team Studio Nari. It’s still a stylized “@” symbol, but it’s now a bit more curvy and cocked to the side. The square-ish shape of the original looked like a close relative of the Instagram logo, whereas this new version is more of an acquaintance. 

“The new logo is drawn in one continuous line—no breaks, no separate strokes,” Clare says. “It’s a single path. That was an intentional choice: it reflects how conversation on Threads flows continuously. Like the wordmark, it leans forward. The overall shape is a simplification that’s designed to read cleanly at small sizes (app icon, notification badge) while carrying more energy than the previous version.”

In some ways, the most important element of Threads’ new look is not the actual visual change, but the obvious work that the team dedicated to understanding how Threads’ brand should look and feel outside of Instagram. Compared to Instagram’s visuals-based, design-forward feed, Threads is all about daily, fast-moving discourse. “It’s meant to feel like movement,” Clare explains of the new wordmark, “like conversation already in progress.”

Meet AntAngelMed: A 103B-Parameter Open-Source Medical Language Model Built on a 1/32 Activation-Ratio MoE Architecture

A team of researchers from China has released AntAngelMed, a large open-source medical language model that they describe as the largest and most capable of its kind currently available.

What Is AntAngelMed?

AntAngelMed is a medical-domain language model with 103 billion total parameters, but it does not activate all of those parameters during inference. Instead, it uses a Mixture-of-Experts (MoE) architecture with a 1/32 activation ratio, meaning only 6.1 billion parameters are active at any given time when processing a query.

It helps to know how MoE architectures work. In a standard dense model, every parameter participates in processing every token. In an MoE model, the network is divided into many ‘expert’ sub-networks, and a routing mechanism selects only a small subset of them to handle each input. This allows you to have a very large total parameter count — which typically correlates with strong knowledge capacity — while keeping the actual compute cost of inference proportional to the smaller active parameter count.
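The routing idea can be sketched in a few lines of plain Python. This is a toy illustration, not AntAngelMed's actual router: the expert count, top-k value, and scoring function here are made up for clarity, and real experts are feed-forward sub-networks rather than scalar multipliers.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 32   # hypothetical expert count
TOP_K = 1          # a 1/32 activation ratio ~= routing each token to 1 of 32 experts

# Each "expert" here is a tiny function; in a real MoE layer it is a
# feed-forward sub-network with its own weights.
experts = [lambda x, w=w: [xi * w for xi in x] for w in range(1, NUM_EXPERTS + 1)]

def route(token_vec, router_weights):
    """Score every expert, keep only the top-k, and mix their outputs.

    Only the selected experts actually run, so compute scales with TOP_K
    rather than NUM_EXPERTS -- the core MoE efficiency argument.
    """
    scores = [sum(t * w for t, w in zip(token_vec, row)) for row in router_weights]
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    # Softmax over the selected scores only, to get mixing gates.
    exp_s = [math.exp(scores[i]) for i in top]
    gates = [e / sum(exp_s) for e in exp_s]
    out = [0.0] * len(token_vec)
    for g, i in zip(gates, top):
        for d, v in enumerate(experts[i](token_vec)):
            out[d] += g * v
    return out, top

router = [[random.gauss(0, 1) for _ in range(4)] for _ in range(NUM_EXPERTS)]
y, chosen = route([0.1, -0.2, 0.3, 0.05], router)
print(f"routed to experts {chosen}; only {TOP_K}/{NUM_EXPERTS} ran")
```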

AntAngelMed inherits this design from Ling-flash-2.0, a base model developed by inclusionAI and guided by what the team calls Ling Scaling Laws. The specific optimizations layered on top include refined expert granularity, a tuned shared-expert ratio, attention balance mechanisms, sigmoid routing without an auxiliary loss, an MTP (Multi-Token Prediction) layer, QK-Norm, and Partial-RoPE (Rotary Position Embedding applied to a subset of attention heads rather than all of them). According to the research team, these design choices together allow small-activation MoE models to deliver up to 7× the efficiency of similarly sized dense architectures: with only 6.1B activated parameters, AntAngelMed can roughly match the performance of a 40B dense model. Separately, as output length grows during inference, the relative speed advantage can also reach 7× or more over dense models of comparable size.
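The dense-equivalence claim is essentially back-of-the-envelope arithmetic: forward-pass compute per token scales roughly with the number of parameters that participate, so matching a ~40B dense model with 6.1B active parameters implies roughly a 7× compute saving. A sketch, assuming the common FLOPs ≈ 2 × params approximation and ignoring attention and routing overhead:

```python
# Rough per-token compute comparison. The FLOPs ~ 2 * params rule of thumb
# is an assumption for illustration, not a figure from the AntAngelMed paper.
active_params = 6.1e9    # AntAngelMed parameters active per token
dense_params = 40e9      # dense model it reportedly matches

flops_moe = 2 * active_params
flops_dense = 2 * dense_params
ratio = flops_dense / flops_moe
print(f"~{ratio:.1f}x less compute per token")  # ~6.6x, in line with "up to 7x"
```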

Training Pipeline

AntAngelMed uses a three-stage training process designed to layer general language understanding on top of deep medical domain adaptation.

The first stage is continual pre-training on large-scale medical corpora, including encyclopedias, web text, and academic publications. This phase is built on top of the Ling-flash-2.0 checkpoint, giving the model a strong general reasoning foundation before medical specialization begins.

The second stage is Supervised Fine-Tuning (SFT), where the model is trained on a multi-source instruction dataset. This dataset mixes general reasoning tasks — math, programming, logic — to preserve chain-of-thought capabilities, alongside medical scenarios such as doctor–patient Q&A, diagnostic reasoning, and safety and ethics cases.
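A multi-source instruction mix like this is typically implemented as weighted sampling across datasets. The sketch below illustrates the idea only: the source names and mixture weights are hypothetical, since the paper's actual proportions are not given here.

```python
import random

random.seed(42)

# Hypothetical mixture weights; the real SFT proportions are not disclosed here.
sources = {
    "general_math": 0.20,
    "programming": 0.15,
    "logic": 0.10,
    "doctor_patient_qa": 0.30,
    "diagnostic_reasoning": 0.15,
    "safety_ethics": 0.10,
}

def sample_batch(n):
    """Draw a training batch whose composition follows the mixture weights."""
    names = list(sources)
    weights = [sources[s] for s in names]
    return [random.choices(names, weights)[0] for _ in range(n)]

batch = sample_batch(1000)
medical = sum(s in ("doctor_patient_qa", "diagnostic_reasoning", "safety_ethics")
              for s in batch)
print(f"{medical / len(batch):.0%} medical examples in this batch")
```

Keeping a substantial share of general reasoning data in the mix is what preserves the chain-of-thought ability the article describes while the medical sources drive domain adaptation.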

The third stage is Reinforcement Learning using the GRPO (Group Relative Policy Optimization) algorithm, combined with task-specific reward models. GRPO, originally introduced in the DeepSeekMath paper, is a variant of PPO that estimates baselines from group scores rather than a separate critic model, making it computationally lighter. Here, reward signals are designed to shape model behavior toward empathy, structured clinical responses, safety boundaries, and evidence-based reasoning — all with the goal of reducing hallucinations on medical questions.
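The group-relative baseline at the heart of GRPO is simple to state: sample a group of completions per prompt, score each with the reward model, and normalize each score against the group's mean and standard deviation to get an advantage, with no learned critic. A minimal sketch of just that normalization step (the reward values are made-up numbers):

```python
import statistics

def grpo_advantages(rewards):
    """Normalize a group of rewards to zero mean / unit std.

    Each completion's advantage is measured relative to its own group,
    which replaces the separate value (critic) network used in PPO.
    """
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mu) / sigma for r in rewards]

# Rewards for, say, 4 sampled answers to one medical prompt (illustrative only).
advs = grpo_advantages([0.9, 0.2, 0.5, 0.4])
print([round(a, 2) for a in advs])
```

Completions scoring above their group's mean get positive advantages and are reinforced; those below are pushed down, which is how the reward models steer the policy toward empathetic, safe, evidence-based answers.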

Inference Performance

On H20 hardware, AntAngelMed exceeds 200 tokens per second, which the research team reports is approximately 3× faster than a 36 billion parameter dense model. With YaRN (Yet Another RoPE extensioN) extrapolation, it supports a 128K context length — long enough to handle full clinical documents, extended patient histories, or multi-turn medical dialogues.

The research team has also released an FP8 quantized version of the model. When this quantization is combined with EAGLE3 speculative decoding optimization, inference throughput at a concurrency of 32 improves significantly over FP8 alone: 71% on HumanEval, 45% on GSM8K, and 94% on Math-500. These benchmarks measure coding and math reasoning tasks — not medical tasks directly — but serve as proxies for the model’s general throughput stability across output types.

Benchmark Results

On HealthBench, the open-source medical evaluation benchmark from OpenAI that uses simulated multi-turn medical dialogues to measure real-world clinical performance, AntAngelMed ranks first among all open-source models and surpasses a range of top proprietary models as well, with a particularly significant advantage on the HealthBench-Hard subset.

On MedAIBench, an evaluation system maintained by China’s National Artificial Intelligence Medical Industry Pilot Facility, AntAngelMed ranks at the top level, with particularly strong scores in medical knowledge Q&A and medical ethics and safety categories.

On MedBench, a benchmark for Chinese healthcare LLMs covering 36 independently curated datasets and approximately 700,000 samples across five dimensions — medical knowledge question answering, medical language understanding, medical language generation, complex medical reasoning, and safety and ethics — AntAngelMed ranks first overall.

Marktechpost’s Visual Explainer


Technical Guide
AntAngelMed


01 — Overview
What Is AntAngelMed?
Jointly developed by Health Information Center of Zhejiang Province, Ant Healthcare, and Zhejiang Anzhen’er Medical AI Technology Co., Ltd.

103B total parameters · 6.1B active at inference · 128K context length

AntAngelMed is a medical-domain LLM built on a 1/32 activation-ratio MoE architecture. With 103B total parameters and only 6.1B active at inference time, it matches the performance of roughly 40B dense models at a fraction of the compute cost.

Model weights are released under Apache 2.0. The code repository is licensed under MIT.

02 — Architecture
MoE Architecture & Base Model
Built on Ling-flash-2.0 by inclusionAI, guided by Ling Scaling Laws.

AntAngelMed uses a 1/32 activation-ratio MoE with optimizations across all core components. These choices enable small-activation MoE models to deliver up to 7× efficiency over similarly sized dense architectures — and as output length grows, relative speedups can reach 7× or more.

Key architectural components:

Expert Granularity
Shared Expert Ratio
Sigmoid Routing
No Auxiliary Loss
MTP Layer
QK-Norm
Partial-RoPE
YaRN Extrapolation
Attention Balance

03 — Training
Three-Stage Training Pipeline
Designed to layer general language understanding on top of deep medical domain adaptation.

Stage 01
Continual Pre-Training
Built on Ling-flash-2.0, trained on large-scale medical corpora — encyclopedias, web text, and academic publications — to inject deep domain and world knowledge.
Stage 02
Supervised Fine-Tuning (SFT)
Multi-source instruction data mixing general tasks (math, programming, logic) for chain-of-thought, plus medical scenarios (doctor–patient Q&A, diagnostic reasoning, safety/ethics) for clinical adaptation.
Stage 03
Reinforcement Learning via GRPO
Group Relative Policy Optimization with task-specific reward models. Shapes model behavior toward empathy, structural clarity, safety boundaries, and evidence-based reasoning to reduce hallucinations.

04 — Inference
Inference Performance
Hardware benchmarks on H20 and throughput improvements from FP8 + EAGLE3 optimization.

  • >200 tok/s: on H20 hardware; approximately 3× faster than a comparable 36B dense model.
  • 7× efficiency: MoE vs. dense at equivalent size; the speedup increases further as output length grows.
  • +71% / +45% / +94%: FP8 + EAGLE3 throughput gains over FP8 alone on HumanEval / GSM8K / Math-500 at concurrency 32.
  • 128K context: supported via YaRN extrapolation; handles full clinical documents and extended multi-turn dialogues.

05 — Benchmarks
Benchmark Results
Evaluated across three authoritative medical LLM benchmarks.

  • HealthBench (OpenAI): simulated multi-turn medical dialogues measuring real-world clinical performance. Result: #1 among open-source models, surpassing several proprietary models, with the largest lead on HealthBench-Hard.
  • MedAIBench (National AI Medical Industry Pilot Facility): Chinese authority benchmark covering knowledge Q&A and medical ethics/safety. Result: top level, strongest in knowledge Q&A and medical ethics/safety.
  • MedBench (Chinese healthcare domain): 36 datasets, ~700K samples across 5 clinical dimensions. Result: #1 overall across all 5 dimensions.

06 — Quickstart
Run with Hugging Face Transformers
Requires trust_remote_code=True for the MoE routing code.

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "MedAIBase/AntAngelMed",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("MedAIBase/AntAngelMed")

messages = [
  {"role": "system", "content": "You are AntAngelMed, a helpful medical assistant."},
  {"role": "user",   "content": "What should I do if I have a headache?"}
]
text   = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt",
    return_token_type_ids=False).to(model.device)
out    = model.generate(**inputs, max_new_tokens=16384)
out    = [o[len(i):] for i, o in zip(inputs.input_ids, out)]
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])

Also supports: vLLM v0.11.0 (4-GPU tensor parallel), SGLang with FlashAttention-3, and vLLM-Ascend for Huawei Ascend 910B NPUs.
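For higher-throughput serving, the vLLM path mentioned above would look roughly like the command below. This is a sketch, not the model card's verified invocation: `--tensor-parallel-size 4` matches the 4-GPU setup named in the text, and the other flags are standard vLLM options with assumed values.

```shell
# Sketch: serve AntAngelMed behind vLLM's OpenAI-compatible API.
# --trust-remote-code is required for the custom MoE routing code,
# mirroring the Transformers quickstart above.
vllm serve MedAIBase/AntAngelMed \
    --tensor-parallel-size 4 \
    --trust-remote-code \
    --max-model-len 131072
```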

07 — Access
Resources & Links
Model weights Apache 2.0 — Code repository MIT — FP8 quantized variant available separately.

Developed by Health Information Center of Zhejiang Province, Ant Healthcare, and Zhejiang Anzhen’er Medical AI Technology Co., Ltd.
Coverage by Marktechpost — marktechpost.com



Key Takeaways

  • AntAngelMed is a 103B-parameter open-source medical LLM that activates only 6.1B parameters at inference time using a 1/32 activation-ratio MoE architecture inherited from Ling-flash-2.0.
  • It uses a three-stage training pipeline: continual pre-training on medical corpora, SFT with mixed general and clinical instruction data, and GRPO-based reinforcement learning for safety and diagnostic reasoning.
  • On H20 hardware, the model exceeds 200 tokens/s and supports 128K context length via YaRN extrapolation — roughly 3× faster than a comparable 36B dense model.
  • AntAngelMed ranks first among open-source models on OpenAI’s HealthBench, surpasses several proprietary models, and tops both MedAIBench and MedBench leaderboards.
  • The model is available on Hugging Face, ModelScope, and GitHub; model weights are Apache 2.0, code is MIT, and an FP8 quantized version is also released.

Check out the Model Weights on HF, GitHub Repo and Technical details.


The post Meet AntAngelMed: A 103B-Parameter Open-Source Medical Language Model Built on a 1/32 Activation-Ratio MoE Architecture appeared first on MarkTechPost.

Trump 'could choose to change Taiwan policy in Beijing' unhindered by Congress, analyst says





Speaking with FRANCE 24’s Sharon Gaffney, David Sacks, Fellow for Asia studies at the Council on Foreign Relations, explains that Trump “could choose to change US policy towards Taiwan and Congress only has so many things that it can do to try to rein that in”. U.S. President Donald Trump and Chinese President Xi Jinping are holding two days of meetings on Thursday and Friday where the issue of Taiwan, which Beijing views as its own territory, is certain to come up.

Toncoin corrects 15% from $2.90 zenith: Here’s why deeper pullback is likely




Toncoin has an unmistakably bullish swing structure, but Bitcoin uncertainty could undo TON bulls’ efforts.

Table of Contents


This comprehensive guide will explore the science-backed strategies for sustainable weight loss, helping you understand how to achieve lasting results without resorting to extreme measures.

* Introduction: The Struggle for Lasting Weight Management
* Understanding Sustainable Weight Loss: More Than Just a Number
* The Holistic Approach: Body, Mind, and Emotion
* Why Diets Fail: The Cycle of Restriction and Deprivation
* The Pillars of Sustainable Weight Loss
* Mindful Nutrition: Nourishing Your Body from Within
* Embracing Whole Foods
* The Power of Plant-Based Eating
* Intuitive Eating: Listening to Your Body’s Wisdom
* Consistent Physical Activity: Moving for Life
* The Synergy of Cardio and Strength Training
* Finding Joy in Movement
* Prioritizing Sleep: The Unsung Hero of Wellness
* Stress Management: Calming the Inner Storm
* Debunking Myths and Setting the Record Straight
* Myths vs. Facts about Weight Loss
* Your Actionable Steps to a Healthier You
* Start Today Checklist

Introduction: The Struggle for Lasting Weight Management

The pursuit of a healthy weight is a journey many embark on, often filled with a confusing array of advice, quick fixes, and fleeting results. If you’ve ever felt frustrated by diets that promise the world but deliver temporary success, you’re not alone. The common experience is a cycle of restriction, temporary loss, and eventual regain – a pattern that can be demoralizing and detrimental to overall health. This guide is designed to offer a different path: one that focuses on sustainable, long-term weight management through a holistic approach that nourishes your body, mind, and emotions. By understanding the science-backed principles and integrating practical lifestyle changes, you can finally break free from the diet cycle and cultivate a healthier, more vibrant you.

Understanding Sustainable Weight Loss: More Than Just a Number

Sustainable weight loss is not about drastic calorie cuts or grueling exercise routines that leave you feeling depleted. It’s a comprehensive, long-term approach that prioritizes overall well-being. This means addressing the interconnectedness of your physical health, mental state, and emotional landscape.

The Holistic Approach: Body, Mind, and Emotion

A holistic approach recognizes that weight is influenced by a multitude of factors beyond just diet and exercise. It acknowledges the profound impact of stress, emotional eating patterns, and mental health on our body’s ability to regulate weight. By addressing these underlying issues, we create a more robust and lasting foundation for health.

Why Diets Fail: The Cycle of Restriction and Deprivation

Many popular diets fail because they are inherently unsustainable. They often involve severe calorie restriction, eliminate entire food groups, and foster a sense of deprivation. This can lead to metabolic slowdown, nutrient deficiencies, and an increased likelihood of binge eating once the diet ends. True sustainable weight loss focuses on building healthy habits that can be maintained for a lifetime, rather than short-term fixes.

The Pillars of Sustainable Weight Loss

Achieving and maintaining a healthy weight involves a multifaceted approach that integrates several key lifestyle components.

Mindful Nutrition: Nourishing Your Body from Within

Nutrition is the cornerstone of any successful weight management strategy. However, the focus should be on nourishment rather than strict deprivation.

Embracing Whole Foods

Prioritize whole, unprocessed foods rich in nutrients. This includes a wide variety of fruits, vegetables, lean proteins, and whole grains. These foods provide sustained energy, essential vitamins and minerals, and promote satiety.

The Power of Plant-Based Eating

Plant-based diets, rich in fiber and lower in calorie density, have shown significant promise for weight management. They contribute to feelings of fullness, improve glycemic control, and reduce the risk of chronic diseases. By emphasizing fruits, vegetables, whole grains, and legumes, you create a nutrient-dense eating pattern that supports long-term health.

Intuitive Eating: Listening to Your Body’s Wisdom

Intuitive eating is an approach that encourages you to trust your body’s internal hunger and fullness cues, moving away from restrictive diet rules. It involves making peace with food, challenging the “food police” in your mind, and honoring your body’s needs with gentle nutrition and joyful movement. This philosophy fosters a healthier relationship with food, free from guilt and shame. Mindful eating, a key component of intuitive eating, involves paying full attention to the experience of eating, savoring each bite, and recognizing physical hunger and satiety signals.

Consistent Physical Activity: Moving for Life

Regular physical activity is crucial not only for weight management but also for overall health and well-being.

The Synergy of Cardio and Strength Training

A combination of cardiovascular exercise and strength training offers the most comprehensive benefits. Cardio improves heart health and burns calories, while strength training builds muscle mass, which boosts metabolism and increases calorie expenditure even at rest. Aim for at least 150 minutes of moderate-intensity aerobic activity or 75 minutes of vigorous-intensity activity per week, along with strength training exercises targeting major muscle groups at least two days a week.

Finding Joy in Movement

The most effective form of exercise is one you enjoy and can sustain. Explore various activities like walking, swimming, dancing, yoga, or team sports to find what resonates with you. The focus should be on movement that feels good and enhances your overall well-being, rather than exercise as a form of punishment.

Prioritizing Sleep: The Unsung Hero of Wellness

Adequate sleep is foundational for health and plays a critical role in weight management. During sleep, your body repairs tissues, regulates hormones, and supports immune function. Chronic sleep deprivation can disrupt hormones that regulate hunger and satiety, leading to increased cravings and weight gain. Aim for 7-9 hours of quality sleep per night.

Stress Management: Calming the Inner Storm

Chronic stress can wreak havoc on your body, leading to increased cortisol levels and abdominal weight gain. Implementing stress management techniques such as mindfulness, meditation, deep breathing exercises, or engaging in hobbies can significantly impact your well-being and weight management efforts.

Debunking Myths and Setting the Record Straight

Navigating the world of weight loss can be confusing due to widespread misinformation. Understanding the facts can empower you to make informed decisions.

Myths vs. Facts about Weight Loss

* **Myth:** Quick weight loss is the most effective.
**Fact:** Sustainable weight loss is gradual and focuses on long-term lifestyle changes, typically 1-2 pounds per week. Extreme diets can be harmful and lead to weight regain.
* **Myth:** You need to eliminate entire food groups to lose weight.
**Fact:** A balanced diet that includes all macronutrients is essential for sustained energy and nutrient intake. Focusing on whole, nutrient-dense foods is key.
* **Myth:** Exercise is only effective if it’s intense.
**Fact:** Any form of movement contributes to overall health. Finding enjoyable activities and incorporating them consistently is more important than extreme intensity.
* **Myth:** You must cut out all “unhealthy” foods.
**Fact:** An intuitive and mindful approach allows for flexibility. Making occasional indulgences without guilt is part of a sustainable, healthy relationship with food.

Your Actionable Steps to a Healthier You

Embarking on a journey toward sustainable weight loss can seem overwhelming, but starting with small, manageable steps can make a significant difference.

Start Today Checklist

* [ ] **Hydrate:** Drink a glass of water upon waking and aim for consistent hydration throughout the day.
* [ ] **Nourish:** Incorporate at least one serving of vegetables or fruits into your next meal.
* [ ] **Move:** Go for a 10-15 minute walk or engage in any physical activity you enjoy.
* [ ] **Mindful Moment:** Practice 5 minutes of deep breathing or meditation.
* [ ] **Sleep Awareness:** Aim to go to bed 15 minutes earlier tonight.
* [ ] **Intuitive Thought:** Before your next meal, pause and ask yourself, “Am I truly hungry?”

By consistently implementing these strategies, you are well on your way to achieving sustainable weight loss and a profound improvement in your overall health and well-being. Remember, this is a journey, and progress, not perfection, is the ultimate goal.

Mark Zuckerberg’s Meta Connect 2026 playlist has the vibe of a cringy college party




Meta founder, chairman, and CEO Mark Zuckerberg announced on Tuesday that the company’s Meta Connect conference, which offers a glimpse into what the tech giant sees as the future, will take place September 23–24. The conference is typically a major event for the company. Last year, Meta used the stage to debut its AI glasses.

Though little is known about what Zuckerberg plans to showcase this year, he has at least offered a preview of the conference vibes via a new Spotify playlist.

Shared alongside the announcement, the “Connect 2026 Vibes” playlist consists of five extremely mainstream, EDM-adjacent pop tracks, including Jack Harlow’s new release “Say Hello” (perhaps best known for the terrible hat Harlow wore while promoting it), a remix of Tame Impala’s “Dracula,” and “Born Again” by Thai artist Lisa featuring Doja Cat and RAYE. The overall effect is less “visionary tech summit” and more “college party hosted by a startup accelerator.”

Zuckerberg has revealed increasingly more about his music taste in recent years, often while trying to project a looser, cooler public image. Last year, for his wife’s 40th birthday, he dressed up as Benson Boone. He also shared an acoustic version of “Get Low” by Lil Jon & The East Side Boyz that he recorded with T-Pain, with the pair billing themselves as “Z-Pain.”

His Spotify profile offers additional clues. Artists Zuckerberg follows include millennial-era staples like Taio Cruz, Gym Class Heroes, Cher Lloyd, and fun., alongside bigger mainstream names like Florence and the Machine, Drake, Lady Gaga, and Pitbull.

The only other playlist Zuckerberg has publicly shared, “2004 facebook coding jams,” paints a noticeably angstier picture, featuring tracks from Trapt, Hoobastank, and Linkin Park. (Zuck still follows Linkin Park co-founder Mike Shinoda’s solo work on Spotify.)

Zuckerberg’s image may have evolved over the years, from the Caesar cut to curls and oversized chains, but one thing has remained constant: “Harder, Better, Faster, Stronger” by Daft Punk. The track appears on both the Connect 2026 playlist and his old “2004 facebook coding jams,” making it feel the closest we’ll get to a personal mission statement.

Emmanuel Macron speaks about France's policies in Sahel countries





“I think we should have had that challenging dialogue sooner. And perhaps in such cases, we should have rethought our military presence sooner.” In his interview with FRANCE 24, RFI and TV5Monde, French President Emmanuel Macron reflected on the state of relations between France and the Sahel countries. Here’s a part of his interview.

Ethereum Underperforms BTC in Major Decoupling



  • The ETH/BTC ratio plunged below 0.02843, hitting a 10-month low, meaning Ethereum is underperforming Bitcoin.
  • In recent weeks, institutional investors have poured money into Bitcoin ETFs while spot ETH ETFs face major outflows, creating a sharp divergence in capital allocation between the two largest cryptocurrencies.
  • Some analysts suggest this is a structural shift in the crypto market rather than a short-term fluctuation.

Despite upward momentum in the Bitcoin (BTC) price, which soared above $80,000, Ethereum remains stuck in a consolidation zone, trading in a tight range of roughly $2,280 to $2,330.

According to data on TradingView, the ratio between Ethereum (ETH) and Bitcoin has dropped below 0.02843, a multi-month low.

According to the tracker, that figure is roughly a 10-month low for the ETH/BTC ratio, confirming that Ethereum is underperforming Bitcoin. The crypto market has been on a roller-coaster ride in recent months, dipping early in the year; Ether fell harder than BTC and has struggled to rebound. The ratio has dropped about 9.5% over the past month and roughly 15% over the past six months.
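The ratio itself is simply the ETH spot price divided by the BTC spot price, which makes the quoted level easy to sanity-check. A minimal sketch using illustrative prices taken from the ranges cited in this article (not live quotes):

```python
# ETH/BTC ratio: ETH spot price divided by BTC spot price.
# Prices are illustrative, drawn from the ranges quoted in the article.
def eth_btc_ratio(eth_price: float, btc_price: float) -> float:
    return eth_price / btc_price

ratio = eth_btc_ratio(2300.0, 81000.0)
print(round(ratio, 4))  # prints 0.0284, near the level discussed
```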

BitMart Shares Detailed Analysis on ETH/BTC Ratio

In a recent post on X, cryptocurrency exchange BitMart shared a detailed analysis of the current ETH/BTC ratio. The exchange stated that the ratio has dropped to a 10-month low, driven mainly by divergent capital flows.

BitMart has called this situation a structural shift instead of just a temporary change. The exchange mentioned that a huge divergence in institutional capital is changing how the two biggest cryptocurrencies are performing relative to each other. Amid this change, traders are also changing their trading strategies according to the current situation in the crypto market, as many think that Bitcoin will continue to decouple.

BitMart analyst stated in the post, “This divergence between the two largest cryptocurrencies highlights the importance of strategic portfolio management. The days of simply buying both and expecting correlated returns are over. Investors must now carefully analyze flow dynamics, on-chain metrics, and shifting narratives to identify true relative strength.”

Most institutional investors are diverting their money into Bitcoin-based investment products such as spot BTC exchange-traded funds (ETFs), leaving altcoins high and dry. In recent weeks, spot BTC ETFs have seen steady inflows, helped by easing geopolitical tension after the ceasefire between the U.S. and Iran. Ethereum, by contrast, is struggling to attract capital on the same scale.

BTC ETF Inflows

(Source: Coinglass)

In the last few weeks, BTC exchange-traded funds have witnessed a growth in institutional adoption with major inflows. This growing adoption among institutional investors is proving BTC’s position as “digital gold” and a store of value. 

For example, in early May 2026, BlackRock’s iShares Bitcoin Trust (IBIT) alone attracted hundreds of millions in inflows over just a few days. “This represents a concentrated, high-velocity injection of capital directly into Bitcoin, establishing a powerful directional bias that Ethereum currently lacks,” the post stated.

ETH exchange-traded funds from the same issuers are also available, but they have seen major outflows over the same period, with outflows reaching around $555 million in a single session. According to technical experts, these outflows are directly linked to regulatory uncertainty around ETH.

BTC is benefiting from improving macroeconomic conditions and the constant accumulation of tokens by treasuries. This advantage has helped BTC to accumulate more corporate and institutional money in comparison to ETH.

Bitcoin and Ethereum Supply Dynamics and Staking vs. Sell Pressure

Ethereum is holding a large percentage of its total supply locked in staking. According to the official data, there are around 40 million ETH tokens locked in staking. These staked tokens are cutting down the amount of liquid supply available for trading. This could create scarcity in the long run. However, this staking mechanism is not enough to offset other pressures facing the asset.
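To put the 40 million figure in perspective: with total ETH supply at roughly 120 million (an approximation, not stated in this article), about a third of all ETH is locked in staking. A minimal sketch of that calculation:

```python
# Share of ETH supply locked in staking.
# Staked amount is from the article; total supply is an assumed ~120M.
STAKED_ETH = 40_000_000
TOTAL_SUPPLY = 120_000_000  # approximate circulating supply (assumption)

staked_share = STAKED_ETH / TOTAL_SUPPLY
print(f"{staked_share:.0%}")  # prints 33%: supply unavailable for trading
```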

On the other hand, BTC faces exchange inflows and selling pressure from long-term holders across market cycles. Even so, it has held onto its narrative as a scarce asset during risk-off periods in the broader crypto market.

However, if the Ethereum ecosystem grows in the coming months, it could regain dominance as it has in the past. During the 2021-2022 DeFi summer, for example, ETH consistently outperformed BTC.

In 2021-2022, the Ethereum blockchain saw a sharp rise in demand as on-chain activity surged, driven by DeFi protocols and non-fungible tokens (NFTs). At the peak, the total value locked in DeFi soared above $100 billion, even as gas fees climbed.

