
Florida authorities bust Manatee County illegal gambling network during enforcement operation

Florida sheriff investigator examines illegal gambling slot machines during Manatee County arcade enforcement raid and statewide Florida gaming crackdown

Florida gaming regulators and local investigators say a large illegal gambling operation in Manatee County has been dismantled after a coordinated raid that seized hundreds of machines and more than $120,000 in cash.

The Florida Gaming Control Commission worked alongside the Manatee County Sheriff’s Office and Homeland Security Investigations during the effort, which authorities called “Operation Silent Spin.” Officials announced Tuesday (May 12) that investigators confiscated 265 illegal gambling machines while also filing criminal charges and immigration detainers connected to the case.

Spin City Arcade accounted for the largest seizure. Investigators removed 155 illegal slot machines and recovered about $78,483 in cash from that location, according to the commission. Authorities also seized 61 illegal slot machines and roughly $24,157 from another arcade that was not publicly identified. At Mike’s Arcade, officers recovered 49 illegal slot machines and approximately $18,157.

Close-up of illegal gambling slot machine games seized during Florida gaming enforcement raids targeting underground arcades in Manatee County
Investigators seized hundreds of illegal gambling machines during Florida’s latest arcade enforcement crackdown. Credit: Manatee County Sheriff’s Office via YouTube (2023 raid)

State regulators say the operation reflects growing concerns about underground gambling businesses spreading across Florida. Investigators pointed to a recent armed robbery and shooting at another illegal gambling house in Manatee County where a customer was shot multiple times. Officials say those operations often involve large amounts of cash, weak security, narcotics activity, trafficking concerns, and schemes targeting customers.

“These illegal gambling businesses are not harmless storefront operations. They attract crime, generate illicit cash economies, and create serious public safety risks for surrounding communities,” said L. Carl Herold, Director of Gaming Enforcement for the FGCC.

Pressure builds for wider enforcement amid Manatee County illegal gambling raid

The Manatee County case follows several other recent gambling crackdowns across southwest Florida. Enforcement actions we previously reported included the seizure of 623 illegal gambling machines, as well as a Lee County operation where investigators removed 134 illegal slot machines tied to arcade-style gambling businesses. Separate investigations in Lee County also focused on suspected illegal gambling arcades operating under storefront gaming models.

Officials say the cases are adding pressure on state lawmakers to approve more enforcement funding as budget negotiations continue in Tallahassee this week. The FGCC is requesting additional gaming enforcement squads, including a proposed unit dedicated to southwest Florida, where regulators say illegal gambling activity has surged.

“Operation Silent Spin sends a clear message: if you are operating illegal gambling in Florida, we are coming for your machines, your money, and your criminal enterprise,” Herold said. “As illegal gambling activity continues to grow statewide, including in southwest Florida, the FGCC remains committed to aggressively dismantling these operations before they evolve into larger organized criminal networks.”

Julie Brown, chair of the FGCC, said southwest Florida remains one of the agency’s top priorities as illegal gambling complaints continue increasing.

“Southwest Florida has seen one of the highest increases in illegal gambling activity in the state and is a prime location for our next enforcement squad,” Brown said. “As these operations continue to spread, the Commission must ensure it has the personnel and regional presence necessary to investigate complaints quickly, support local law enforcement partners, and protect Florida communities through sustained enforcement efforts.”

Featured image: Manatee County Sheriff’s Office via YouTube

The post Florida authorities bust Manatee County illegal gambling network during enforcement operation appeared first on ReadWrite.

Can’t wait for the Steam Machine? This AMD cube is here for a modest $4,000

Thunderobot’s new cube-shaped AI Mini Workstation packs AMD’s Ryzen AI Max+ 395, 128GB RAM, liquid cooling, and Steam Machine energy for nearly $4,000.

AI is changing who you should hire. Here’s how to get it right


“We need someone who’s done this before.”

Translation: we need someone who can absorb a strategic pivot, upskill personally for AI, manage a workforce whose skills and expectations are shifting, maintain execution velocity, and make faster and better decisions—with the same budget, the same headcount, and no additional runway.

That’s not a job description. That’s a superhero spec.

And the person most organizations reach for to fill it—the candidate with deep sector experience, the safe hire, the one who’s “done this before”—is often exactly wrong for what the role now requires.

The logic behind the experience filter is not irrational. Sector knowledge compresses ramp time. It signals credibility with peers. It reduces the number of things that can go wrong in the first ninety days. When the environment was stable and execution was the output, it was a reasonable proxy for readiness.

That environment is gone.

AI has compressed execution timelines and put judgment at the center of competitive advantage. The work that once required a team now requires one person with the right capabilities. And the capabilities that matter most—operating without a playbook, making decisions under uncertainty, building alignment across functions—are not what the sector-experience filter selects for. It selects for pattern reproduction. In roles that now require pattern disruption, that’s not risk reduction. It’s risk amplification.

New criteria

A recent Strategy Science study, summarized by HEC Paris, found that within-industry breadth combined with cross-functional experience predicts stronger strategic foresight than narrow same-sector depth—particularly under conditions of uncertainty. The implication is uncomfortable: the profile most organizations default to in hiring may be the profile least suited to the moment they’re hiring for.

The middle management layer is where this mismatch is most expensive.

These are the people being asked to do something genuinely unprecedented: translate executive vision into execution reality, interpret and validate AI outputs, manage a workforce in transition, and make judgment calls faster—simultaneously, at the same level of quality, with no increase in resources. Every one of those demands has escalated in the last two years. None of them has been removed.

According to Gartner research cited by HRDive, 75% of business managers are overwhelmed by growing responsibilities, and 82% of HR leaders say managers are not currently equipped to lead change. AI is not relieving this pressure. It is adding a new layer: managers must now decipher AI initiatives, test tools, validate outputs, and explain limitations upward—while managing fewer junior staff to absorb the work.

This is the job that exists. It was built incrementally, requirement by requirement, until it became something no single person was designed to do. And the response—find someone who has done this before in our industry—does not solve the problem. It fills the role with someone selected for the conditions that no longer apply.

The cost of the wrong hire

The economics make this harder to ignore than most leadership teams have been willing to admit.

The visible cost of bringing in a judgment-first hire without deep sector background is real: structured onboarding, longer ramp time, investment in building context deliberately. Organizations weigh that cost and reach for the familiar.

What they are not weighing with the same rigor is the cost of the wrong hire. Research from the Recruitment and Employment Confederation, cited by Gatenby Sanderson, estimates that a mid-level manager earning around £42,000 can cost a business more than £132,000 once recruitment, training, wasted salary, and lost productivity are included. That figure does not capture decision drag—the slower decisions, the missed pivots, the team that stalled waiting for direction that never came with sufficient clarity.

Organizations are making some of their most consequential talent decisions without serious cost data on either side of the equation. The familiar choice feels cheaper. It often isn’t.

The supply side

The math is also running out on the supply side.

According to ATD, middle manager hiring has fallen 43% since 2022—more than three times the drop in entry-level hiring. The experienced cohort that has historically filled these roles is aging toward retirement. The replacement cohort is smaller and carries less of the accumulated sector depth that organizations currently require as a baseline. At the same time, Deloitte’s 2025 human capital research finds that the work itself is shifting—AI is automating administrative and coordination tasks, increasing the need for managers who can coach, interpret ambiguity, and build alignment across boundaries.

Organizations are trying to solve a new management problem with a labor-market assumption that is breaking down. The experienced sector hire will become harder to find, more expensive to attract, and less suited to the actual job—in that order, and faster than most hiring plans currently reflect.

None of this is an argument for discarding experience. There are roles where deep sector knowledge is genuinely non-negotiable—where regulatory context, technical domain, or client relationships make it irreplaceable. The problem is that organizations apply the sector-experience filter uniformly, across roles where it matters and roles where it has simply become the default. Most have never made that distinction explicitly.

The organizations making progress on this are not overhauling their entire talent strategy. They are running contained experiments: small teams, high-performing, curious, change-ready. They are measuring what happens when the hiring criteria shift. They are designing for learning before they design for scale.

That is how you find out whether the model works before the hiring math forces the issue.

The question worth sitting with: in your organization, which management roles genuinely require sector depth—and which are using it as a shortcut?

Have you ever asked?

LBank Reports $2.5 Billion Daily TradFi Trading Volume, Up 25% Since March


[PRESS RELEASE – Singapore, Singapore, May 13th, 2026]

LBank, a leading global cryptocurrency exchange, has officially announced explosive growth in its TradFi business. LBank TradFi’s average daily trading volume has surpassed $2.5 billion, representing a 25% increase compared to March this year. This strong growth not only reflects the continued release of global demand for multi-asset trading but also further highlights LBank’s leading expansion pace and structural advantages in the convergence of crypto assets and traditional finance.

LBank TradFi now provides comprehensive coverage across core asset classes in global traditional financial markets, including stocks, 24H stocks, metals, commodities, and indices, with a total of 117 trading pairs forming a well-structured multi-asset trading matrix. The platform spans a wide range of underlying assets such as gold, silver, crude oil, agricultural products, major global indices, and leading US equities, offering users diversified cross-market trading opportunities and further strengthening its one-stop TradFi trading experience.

Precious metals continue to play a dominant role within LBank TradFi, serving as the primary driver of trading activity across the platform. According to the latest data, the top five traded assets are GOLD, XAUT, SILVER, XTI, and PAXG, with gold-related instruments maintaining a leading position in overall market activity and liquidity flow.

Notably, LBank TradFi has demonstrated significant structural liquidity depth across multiple asset classes. According to CoinGlass data, XBR, US2000, VIXINDEX, and SOYBEAN all rank No.1 in open interest across centralized exchanges, further validating LBank’s deep liquidity advantage and strong market capacity in TradFi assets.

“LBank TradFi’s rapid growth validates that our long-term investment in the convergence of traditional finance and digital assets is gradually delivering structural results,” said Eric He, LBank Community Angel Officer & Risk Control Advisor. “What we are building is not merely a trading product, but a global multi-asset trading infrastructure with deep liquidity, extensive asset coverage, and a highly efficient execution system.”

As global demand for diversified asset allocation continues to accelerate, LBank TradFi is progressively building a comprehensive trading infrastructure system. Through efficient liquidity aggregation and asset connectivity capabilities, it further bridges the trading boundaries between digital assets and traditional financial markets, driving the evolution of multi-asset trading from a fragmented structure toward an integrated framework, while enhancing unified global price discovery and capital allocation efficiency.

Moving forward, LBank will continue to expand the breadth of its TradFi asset coverage and deepen its cross-market connectivity capabilities, further strengthening liquidity aggregation and execution efficiency across global markets. This ongoing development will reinforce LBank’s role as a key multi-asset trading infrastructure and liquidity hub in the global financial ecosystem.

About LBank

Founded in 2015, LBank is a leading global cryptocurrency exchange serving over 20 million registered users in 160 countries and regions. With a daily trading volume exceeding $10.5 billion and 10 years of safety with zero security incidents, LBank is dedicated to providing a comprehensive and user-friendly trading experience. Through innovative trading solutions, the platform has enabled users to achieve average returns of over 130% on newly listed assets.

LBank has listed over 300 mainstream coins and more than 50 high-potential gems. Ranked No. 1 in 100x Gems, Highest Gains, and Meme Share, LBank leads the market with the fastest altcoin listings, unmatched liquidity, and industry-first trading guarantees, making it the go-to platform for crypto investors worldwide.


The post LBank Reports $2.5 Billion Daily TradFi Trading Volume, Up 25% Since March appeared first on CryptoPotato.



Mira Murati’s Thinking Machines Lab Introduces Interaction Models: A Native Multimodal Architecture for Real-Time Human-AI Collaboration

Most AI systems today work in turns. You type or speak, the model waits, processes your input, and then responds. That is the entire interaction loop. Thinking Machines Lab, an AI research lab, argues that this model of interaction is a fundamental bottleneck. The team has introduced a research preview of a new class of system it calls interaction models to address it. The central idea of the research is that interactivity should be native to the model itself, not bolted on as an afterthought.

What’s Wrong with Turn-Based AI

If you’ve built anything with a language model or voice API, you’ve worked around the limitations of turn-based interaction. The model has no awareness of what’s happening while you’re still typing or speaking. It can’t see you pause mid-sentence, notice your camera feed, or react to something visual in real time. While the model is generating, it’s equally blind — perception freezes until it finishes or gets interrupted.

This creates a narrow channel for human-AI collaboration that limits how much of a person’s knowledge, intent, and judgment can reach the model, and how much of the model’s work can be understood.

To work around this, most real-time AI systems use a harness — a collection of separate components stitched together to simulate responsiveness. A common example is voice-activity detection (VAD), which predicts when a user has finished speaking so a turn-based model knows when to start generating. This harness is made out of components that are meaningfully less intelligent than the model itself, and it precludes capabilities like proactive visual reactions, speaking while listening, or responding to cues that are never explicitly stated aloud.

Thinking Machines Lab’s argument is a version of the ‘bitter lesson’ in machine learning: hand-crafted systems will eventually be outpaced by scaling general capabilities. For interactivity to scale with intelligence, it must be part of the model itself. With this approach, scaling a model makes it smarter and a better collaborator.

The Architecture: Multi-Stream, Micro-Turn Design

The system has two components working in parallel: an interaction model that maintains constant real-time exchange with the user, and a background model that handles deeper reasoning tasks asynchronously.

The interaction model is always on — continuously taking in audio, video, and text and producing responses in real time. When a task requires sustained reasoning (tool use, web search, longer-horizon planning), it delegates to the background model by sending a rich context package containing the full conversation — not a standalone query. Results stream back as the background model produces them, and the interaction model interleaves those updates into the conversation at a moment appropriate to what the user is currently doing, rather than as an abrupt context switch. Both models share their context throughout.

Think of it like one person who keeps you engaged in conversation while a colleague in the background looks something up and passes notes forward in real time.
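To make the delegation pattern concrete, here is a minimal sketch in Python using threads and queues. It is not TML's implementation, only an illustration of the described flow: the interaction loop stays responsive, hands the full conversation (not a standalone query) to a background worker, and weaves results back in as they arrive. All names here (`background_worker`, `interaction_loop`) are hypothetical.

```python
import queue
import threading

def background_worker(tasks: queue.Queue, results: queue.Queue) -> None:
    """Stand-in for the background model: consumes rich context packages
    (the full conversation) and streams results back asynchronously."""
    while True:
        context = tasks.get()
        if context is None:  # shutdown sentinel
            break
        # Placeholder for tool use / web search / longer-horizon reasoning.
        results.put(f"result for: {context[-1]}")

def interaction_loop(user_turns, tasks: queue.Queue, results: queue.Queue):
    """Stand-in for the interaction model: stays live on every turn,
    delegates heavy work, and interleaves finished results."""
    conversation, transcript = [], []
    for turn in user_turns:
        conversation.append(turn)
        if turn.startswith("search:"):
            # Delegate with the *full* conversation, not just the query.
            tasks.put(list(conversation))
        else:
            transcript.append(f"reply to: {turn}")
        # Weave in any background results that are ready at this moment.
        while not results.empty():
            transcript.append(results.get())
    return transcript

tasks, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=background_worker, args=(tasks, results))
worker.start()
out = interaction_loop(["hi", "search: weather", "thanks"], tasks, results)
tasks.put(None)
worker.join()
# Drain anything the background model finished after the loop ended.
while not results.empty():
    out.append(results.get())
```

The key property mirrored here is that the foreground loop never blocks on delegated work; in the real system both models additionally share context continuously rather than exchanging it once per task.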

The key architectural decision enabling this is time-aligned micro-turns. The interaction model continuously interleaves the processing of 200ms worth of input with the generation of 200ms worth of output. Rather than consuming a complete user turn and generating a complete response, both input and output are treated as streams processed in 200ms chunks. This is what allows the model to speak while listening, react to visual cues without being prompted verbally, handle true simultaneous speech, and make tool calls and browse the web while the conversation is still in progress — weaving results back in as they arrive.
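The micro-turn idea can be sketched in a few lines. This is a toy schedule, not the model's actual compute graph: it simply shows input and output being consumed and produced in alternating 200 ms chunks, so perception never freezes during generation. The function names and the echoing "model" are illustrative assumptions.

```python
CHUNK_MS = 200  # micro-turn granularity described in the article

def micro_turn_schedule(input_stream, respond):
    """Interleave perception and generation chunk by chunk.

    input_stream: iterable of 200 ms input chunks (audio/video/text features).
    respond: stand-in for the model's per-chunk generation step.
    """
    timeline = []
    for t, chunk in enumerate(input_stream):
        timeline.append(("ingest", t * CHUNK_MS, chunk))         # perceive
        timeline.append(("emit", t * CHUNK_MS, respond(chunk)))  # generate
    return timeline

# A toy "model" that echoes uppercase; real generation would condition
# on all prior input and output chunks, across modalities.
schedule = micro_turn_schedule(["a", "b", "c"], respond=lambda c: c.upper())
```

Contrast this with a turn-based loop, which would consume the entire input stream before emitting anything: here an output chunk is available every 200 ms, which is what enables speaking while listening and reacting to visual cues mid-turn.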

Encoder-free early fusion is the specific design choice that makes multimodal processing work at this cadence. Rather than routing audio and video through large, separate pretrained encoders (like a Whisper-style ASR model or a standalone TTS decoder), the architecture uses minimal pre-processing. Audio signals are ingested as dMel and transformed via a lightweight embedding layer. Video frames are split into 40×40 patches encoded by an hMLP. Audio output uses a flow head for decoding. All components are co-trained from scratch together with the transformer — there is no separately pretrained encoder or decoder at any stage.

On the inference side, the 200ms chunk design creates engineering challenges. Existing LLM inference libraries aren’t optimized for frequent small prefills — they carry significant per-turn overhead. Thinking Machines implemented streaming sessions, where the client sends each 200ms chunk as a separate request while the inference server appends chunks into a persistent sequence in GPU memory, avoiding repeated memory reallocations and metadata computations. They’ve upstreamed a version of this to SGLang, the open-source inference framework. Additionally, they use a gather+gemv strategy for MoE kernels instead of standard grouped gemm, following prior work from PyTorch and Cursor, to optimize for the latency-sensitive shapes required by bidirectional serving.
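The streaming-session pattern can be illustrated with a toy server object. This is not SGLang's API, only a sketch of the idea under stated assumptions: each 200 ms chunk arrives as its own small request carrying a session id, and the server appends it to one persistent per-session sequence instead of rebuilding state on every request.

```python
class StreamingServer:
    """Toy stand-in for an inference server with persistent sessions.

    In the real system the appended state would be KV-cache entries kept
    in GPU memory, avoiding repeated reallocations per 200 ms prefill.
    """
    def __init__(self):
        self.sessions: dict[str, list] = {}  # session id -> appended chunks

    def append_chunk(self, session_id: str, chunk) -> int:
        seq = self.sessions.setdefault(session_id, [])
        seq.append(chunk)   # small incremental "prefill"
        return len(seq)     # sequence length after this chunk

server = StreamingServer()
for chunk in ["c0", "c1", "c2"]:
    n = server.append_chunk("session-42", chunk)
```

The point of the design is amortization: per-request work stays proportional to one chunk, not to the whole conversation, which is what makes frequent small prefills affordable at a 200 ms cadence.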

Benchmarks: Where It Stands

The model, named TML-Interaction-Small, is a 276B parameter Mixture-of-Experts (MoE) with 12B active parameters.

The benchmark table distinguishes between Instant models (no extended reasoning) and Thinking models (with reasoning). TML-Interaction-Small is an Instant model. Among all Instant models in the comparison, it achieves the highest score on Audio MultiChallenge APR at 43.4% — above GPT-realtime-2.0 (minimal) at 37.6%, GPT-realtime-1.5 at 34.7%, and Gemini-3.1-flash-live-preview (minimal) at 26.8%. The Thinking models, GPT-realtime-2.0 (xhigh) at 48.5% and Gemini-3.1-flash-live (high) at 36.1%, use extended reasoning to achieve their scores.

On FD-bench v1.5, which measures interaction quality across user interruption, backchanneling, talking-to-others, and background speech scenarios, TML-Interaction-Small scores 77.8 average quality — compared to 54.3 for Gemini-3.1-flash-live (minimal), 48.3 for GPT-realtime-1.5, and 47.8 for GPT-realtime-2.0 (xhigh).

On FD-bench v1 turn-taking latency, the model responds in 0.40 seconds — compared to 0.57s for Gemini, 0.59s for GPT-realtime-1.5, and 1.18s for GPT-realtime-2.0 (minimal).

On FD-bench v3, which evaluates response quality and tool use (audio + tools combined), TML-Interaction-Small (with background agent enabled) scores 82.8% Response Quality / 68.0% Pass@1 — the highest in the comparison table.

Thinking Machines research team also introduced new internal benchmarks targeting capabilities that no existing model handles:

  • TimeSpeak — Tests whether the model initiates speech at user-specified times with correct content. TML: 64.7 macro-accuracy vs. 4.3 for GPT-realtime-2.0 (minimal).
  • CueSpeak — Tests whether the model responds to verbal cues at the correct moment. TML: 81.7 vs. 2.9.
  • RepCount-A (adapted from an existing repetition-counting dataset) — Tests visual counting of repeated physical actions in a streaming setting. TML: 35.4 off-by-one accuracy vs. 1.3.
  • ProactiveVideoQA (adapted benchmark) — Tests whether the model answers a question at the exact moment the answer becomes visually available in a streamed video. TML: 33.5 PAUC@ω=0.5 vs. 25.0 (the no-response baseline).
  • Charades (adapted for temporal action localization) — The model is asked to say “start” and “stop” as an action begins and ends in a streamed video. TML: 32.4 mIoU vs. 0 for GPT-realtime-2.0 (minimal) — a clean zero.

So far, no existing model can meaningfully perform any of these tasks.

Marktechpost’s Visual Explainer




Interaction Models — Getting Started Guide

01 — Overview

What Are Interaction Models?

Research Preview — May 2026

Thinking Machines Lab introduced interaction models — a new class of AI system where real-time interactivity is native to the model itself, not bolted on through external scaffolding.

Unlike standard LLM APIs that work in a request-response loop, interaction models continuously perceive and respond across audio, video, and text at the same time — the way a live human conversation works.

Standard LLM APIs

Turn-based. Model waits for your full input, then generates a full response. Perception freezes during generation.

Interaction Models

Continuous. The model perceives and responds in parallel in 200ms chunks — across audio, video, and text simultaneously.
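The contrast above can be sketched as control flow. This is a toy illustration of the interleaved micro-turn loop, not real serving code; `perceive` and `respond` are hypothetical stand-ins for the model's input and output paths:

```python
def micro_turn_loop(perceive, respond, num_turns=3):
    """Interleave perception and generation in fixed 200ms micro-turns.

    In a turn-based API, all input is consumed before any output is
    produced. Here, every iteration pairs one input chunk (in_t) with
    one output chunk (out_t), so the model never stops perceiving.
    """
    transcript = []
    for t in range(num_turns):
        chunk_in = perceive(t)        # in_t: latest 200ms of audio/video/text
        chunk_out = respond(chunk_in) # out_t: the model's next 200ms of output
        transcript.append((chunk_in, chunk_out))
    return transcript

# Toy stand-ins that just echo the turn index.
log = micro_turn_loop(
    perceive=lambda t: f"in_{t}",
    respond=lambda c: c.replace("in", "out"),
)
```

The resulting `log` is the interleaved stream `in_0 out_0 in_1 out_1 …` from the diagram in section 04.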

02 — Architecture

How the Two-Model System Works

The system is built around two components that run in parallel and share the same context at all times.

Interaction Model

Always live. Receives audio, video, and text in continuous 200ms chunks. Handles conversation flow, interruptions, backchanneling, and immediate responses in real time.

Background Model

Runs asynchronously. Handles deep reasoning, tool calls, web search, and longer-horizon work. Receives the full conversation — not just a standalone query — and streams results back as they arrive.

The interaction model stays present during background tasks — taking new input, answering follow-ups, and weaving results into the conversation at the right moment, not as an abrupt context switch.
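A toy sketch of this split, using Python's asyncio with purely illustrative names (nothing here reflects the real API): the interaction loop keeps talking while a background task finishes and drops its result into the shared context.

```python
import asyncio

async def run_session():
    """Two components, one shared context: the interaction side stays
    live while the background side works asynchronously."""
    context = []  # full conversation, visible to both components

    async def background_model(query):
        await asyncio.sleep(0.05)  # stands in for search / deep reasoning
        context.append(("background", f"result for: {query}"))

    async def interaction_model():
        context.append(("user", "what's the weather in Tokyo?"))
        task = asyncio.create_task(background_model("weather in Tokyo"))
        # The interaction model does not block the conversation on the task.
        context.append(("assistant", "Let me check. Meanwhile, anything else?"))
        await task  # background result streams into the shared context
        result = context[-1][1]
        context.append(("assistant", "Got it: " + result))

    await interaction_model()
    return context

log = asyncio.run(run_session())
```

The key property this illustrates is that the background task sees the whole `context` list, not a standalone query, and its result is woven back into the dialog rather than replacing it.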

03 — Capabilities

What You Can Actually Do

Because interactivity is native to the model, these are built-in behaviors — not harness features:

  • Simultaneous speech — Speak and listen at the same time (e.g. live translation from Spanish to English as you talk)
  • Verbal interjections — Model jumps in mid-sentence based on context, not just when you stop talking
  • Visual proactivity — Model reacts to what it sees on camera without you saying anything (e.g. counting pushups, flagging a code bug it sees)
  • Time-awareness — Model tracks elapsed time and can initiate speech at user-specified moments
  • Concurrent tool use — Searches the web, calls tools, and generates UI while the conversation is still in progress
  • Seamless dialog management — Tracks pauses, self-corrections, and yield signals without a separate VAD component
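As one concrete picture of the time-awareness behavior, the sketch below mimics it at the harness level with a plain timer. In the actual system the model itself tracks elapsed time inside its context, so treat this as an illustration of the behavior, not the mechanism:

```python
import time

def timed_reminder(delay_s, message, clock=time.monotonic, sleep=time.sleep):
    """Initiate speech at a user-specified moment.

    Ticks in 200ms steps to match the micro-turn cadence; `clock` and
    `sleep` are injectable so the sketch is testable.
    """
    start = clock()
    while clock() - start < delay_s:
        sleep(min(0.2, delay_s))  # one micro-turn's worth of waiting
    return f"[model speaks]: {message}"

msg = timed_reminder(0.01, "two minutes are up")
```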

04 — Technical Design

The Micro-Turn Architecture

For engineers curious about how this works under the hood, three design choices make real-time multimodal processing possible:

200ms micro-turns
——————————————
Input stream : [chunk 0][chunk 1][chunk 2][chunk 3]…
Output stream : [chunk 0][chunk 1][chunk 2][chunk 3]…
Interleaved : in_0 out_0 in_1 out_1 in_2 out_2…

Audio input : dMel + lightweight embedding layer
Video input : 40×40 patches via hMLP
Audio output : flow head decoder
All components co-trained from scratch with transformer

Rather than routing audio and video through large pretrained encoders (like Whisper), inputs are processed via minimal embeddings and co-trained from scratch — called encoder-free early fusion.
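For a concrete picture of the video input path, here is a minimal sketch of tiling a frame into 40×40 patches ahead of a small embedding layer. The real tensor layout and hMLP details are not public, so the frame representation below is an assumption for illustration:

```python
def patchify(frame, patch=40):
    """Split an H x W frame (list of rows) into non-overlapping
    patch x patch tiles, the granularity the post says the video
    path consumes before its lightweight hMLP embedding."""
    h, w = len(frame), len(frame[0])
    assert h % patch == 0 and w % patch == 0, "pad frame to a multiple of patch size"
    tiles = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tiles.append([row[c:c + patch] for row in frame[r:r + patch]])
    return tiles

# An 80 x 120 frame yields a 2 x 3 grid of 40 x 40 tiles.
frame = [[0] * 120 for _ in range(80)]
tiles = patchify(frame)
```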

On the inference side, streaming sessions append each 200ms chunk into a persistent sequence in GPU memory, avoiding repeated memory reallocations and metadata computations per request. A version of this has been upstreamed to SGLang.
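The inference optimization can be sketched with a pre-allocated per-session buffer. This is a plain-Python analogy for what happens in GPU memory, with illustrative capacity and token values; the real change lives inside the serving engine's internals:

```python
class StreamingSession:
    """Persistent per-session sequence buffer: each 200ms chunk's tokens
    are written in place instead of rebuilding the sequence (and its
    metadata) on every request."""

    def __init__(self, capacity=65536):
        self.buf = [0] * capacity  # allocated once, like a GPU slab
        self.len = 0

    def append_chunk(self, tokens):
        end = self.len + len(tokens)
        if end > len(self.buf):
            raise MemoryError("session exceeded pre-allocated capacity")
        self.buf[self.len:end] = tokens  # in-place write, no reallocation
        self.len = end
        return self.len

session = StreamingSession()
session.append_chunk([101, 102, 103])  # first 200ms of tokens
session.append_chunk([104, 105])       # next chunk, appended in place
```

The point of the pattern is that per-chunk cost is a bounded write, independent of how long the session has been running.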

05 — Benchmarks

How TML-Interaction-Small Performs

The model is a 276B parameter MoE with 12B active parameters. Key results against other instant (non-thinking) real-time models:

  • 77.8 — FD-bench v1.5, Interaction Quality
  • 0.40s — FD-bench v1, Turn Latency
  • 43.4 — Audio MultiChallenge, APR (best instant)
  • 82.8% — FD-bench v3, Response Quality

On proactive/time-aware benchmarks where no existing model meaningfully performs: TimeSpeak 64.7, CueSpeak 81.7, RepCount-A 35.4, Charades mIoU 32.4 — vs. near-zero for all other tested models including GPT-realtime-2.0.

06 — Getting Access

How to Join the Preview

As of May 2026, Thinking Machines Lab is opening a limited research preview to collect feedback. A wider release is planned later in 2026.

  • Apply for early access — Contact the team via thinkingmachines.ai (email link on the blog post)
  • Research grant program — A research grant is available for work on interaction model benchmarks, evaluation frameworks, and human-AI collaboration research
  • Follow Thinking Machines Lab — Updates and wider release announcements at thinkingmachines.ai
  • Contribute benchmarks — The lab explicitly invites the community to develop new frameworks for measuring interactivity quality — an area they consider underserved
Note

This is a research preview, not a production API. Access is gated and limited during this phase.

07 — Limitations

What to Know Before You Build

Thinking Machines Lab is transparent about where the current system falls short:

Long Sessions

Continuous audio and video accumulate context fast. Very long sessions still require careful context management — an active area of work.

Network Dependency

Streaming at 200ms chunks requires reliable connectivity. Poor connections significantly degrade the experience.

Model Size

Larger pretrained models exist but are currently too slow to serve in real-time. Larger variants are planned for later in 2026.

Safety & Alignment

Real-time interaction opens new alignment research questions. Feedback collection is active. HarmBench refusal rate: 99.0%.

Source: Thinking Machines Lab, “Interaction Models: A Scalable Approach to Human-AI Collaboration,” May 2026 — thinkingmachines.ai/blog/interaction-models

Created & Designed by Marktechpost.com


Key Takeaways

  • Thinking Machines Lab’s interaction model handles real-time audio, video, and text natively — no VAD harness, no turn boundaries, no stitched components.
  • The architecture splits into two models: an interaction model that stays live with the user, and a background model that handles reasoning and tool use asynchronously — sharing full conversation context throughout.
  • 200ms micro-turns replace the standard request-response loop, enabling simultaneous speech, visual proactivity, and live tool calls without waiting for a user turn to end.
  • On FD-bench v1.5 (interaction quality), TML-Interaction-Small scores 77.8 — versus 54.3 for Gemini and 47.8 for GPT-realtime-2.0 (xhigh) — while also leading all instant models on Audio MultiChallenge intelligence benchmarks.
  • Existing real-time APIs score near zero on time-awareness and visual proactivity benchmarks (TimeSpeak, CueSpeak, Charades, RepCount-A) — TML-Interaction-Small is the only model that can meaningfully perform these tasks today.


The post Mira Murati’s Thinking Machines Lab Introduces Interaction Models: A Native Multimodal Architecture for Real-Time Human-AI Collaboration appeared first on MarkTechPost.

Trump Heads to China as Iran War and Strait of Hormuz Crisis Escalate



U.S. President Donald Trump departed for China on Wednesday ahead of a high-stakes summit with Chinese President Xi Jinping, while downplaying the need for Beijing’s involvement in efforts to resolve the ongoing Iran conflict. The visit comes at a critical moment as the war in Iran continues to disrupt global energy flows, particularly through […]

The post Trump Heads to China as Iran War and Strait of Hormuz Crisis Escalate appeared first on Modern Diplomacy.

XRP ETFs rebound with $25.8M inflows – Can falling supply fuel a rally?

XRP whale wallets hit ATH as ETF inflows surge-Will clarity drive lasting demand?



Whale accumulation and ETF demand increasingly reinforced XRP’s long-term conviction narrative.

New Mexico tribes challenge Kalshi sports contracts over alleged illegal gambling

Kalshi logo over New Mexico landscape as tribes sue prediction market company over alleged illegal sports betting contracts on tribal lands.



Four New Mexico tribes are taking prediction market company Kalshi to federal court, arguing that its sports contracts amount to illegal online sports betting conducted on tribal lands without approval from tribal regulators.

The Mescalero Apache Tribe, Pueblo of Isleta, Pueblo of Pojoaque, and Pueblo of Sandia filed the lawsuit Tuesday (May 12) in federal court in New Mexico. The 33-page complaint, reviewed by ReadWrite, seeks court orders blocking Kalshi from offering sports event contracts on tribal lands, while three of the tribes are also pursuing civil penalties.

Tribal regulators say Kalshi’s mobile app lets users place wagers on sporting events through yes-or-no contracts that operate like standard sportsbook bets. The tribes argue the products qualify as Class III gaming under the Indian Gaming Regulatory Act, known as IGRA, and therefore require tribal authorization and compliance with existing tribal-state gaming compacts.

The document cites a 2016 Interior Department letter stating that “non-tribal Internet gaming” would conflict with the exclusivity promised to tribes under those compacts.

The complaint says Kalshi never obtained tribal licenses and allows wagering by users as young as 18, even though tribal gaming agreements in New Mexico prohibit Class III gaming for anyone under 21.

New Mexico tribes join growing legal fight against Kalshi

The filing describes Kalshi’s offerings as nearly indistinguishable from traditional sportsbooks, pointing to bets involving game winners, point spreads, totals, parlays, and proposition wagers. One example cited in the complaint involved contracts tied to a University of New Mexico Lobos versus New Mexico State Aggies game.

“Bookmakers have been providing the same service as Kalshi since at least the late 1700s,” the complaint states.

The tribes also pointed to Kalshi’s earlier court arguments against the Commodity Futures Trading Commission, when the company reportedly argued that sports-related contracts resembled prohibited gaming activity. According to the filing, Kalshi shifted course after the 2024 election and began including sports contracts tied to the NFL, NBA, NHL, and NCAA beginning in January 2025.

The lawsuit references recent court rulings in Nevada and elsewhere that treated Kalshi’s products as sports gambling. According to the filing, the Nevada court wrote: “As Justice Potter Stewart famously said about pornography in his concurrence in Jacobellis v. State of Ohio, ‘I know it when I see it.’ These are sports wagers and everyone who sees them knows it.”

The New Mexico case arrives as tribal governments continue challenging prediction market platforms. The Ho-Chunk Nation previously sued Kalshi in Wisconsin, and California tribes recently lost an appeals court effort seeking intervention in related litigation. Tribal leaders have increasingly argued that prediction markets threaten tribal gaming exclusivity agreements and undermine sovereignty protections negotiated with states.

The complaint also says Kalshi promoted itself through advertising and trademark filings describing its services as “bookmaking services” and “sports betting and gambling tournaments.” According to the tribes, geofencing technology already used throughout the gaming industry could easily block wagering activity on tribal lands.

The tribes claim Kalshi’s daily transaction volume rose from just over $4 million in December 2024 to roughly $800 million after sports betting contracts were introduced. The complaint also cites reports valuing the company at about $22 billion following a recent funding round.

The tribes argue Kalshi’s continued operations bypass tribal safeguards involving fraud prevention, integrity monitoring, money laundering controls, and problem gambling protections.

Featured image: Kalshi / Canva

The post New Mexico tribes challenge Kalshi sports contracts over alleged illegal gambling appeared first on ReadWrite.



Turkey and Armenia Ease Trade Barriers in Step Toward Regional Normalisation



Turkey has taken a significant step toward normalising relations with Armenia by lifting certain customs restrictions, opening the door for more direct trade between the two countries after more than three decades of closed borders and limited economic contact. The move marks one of the most tangible signs of diplomatic thaw between two neighbours whose […]

The post Turkey and Armenia Ease Trade Barriers in Step Toward Regional Normalisation appeared first on Modern Diplomacy.

The shift toward blockchain-based casinos in the crypto era




Over the past few years, the online gambling industry has begun experiencing a gradual but noticeable transformation. While traditional casinos have dominated the digital gaming landscape for decades, a growing segment of players is now exploring alternatives powered by blockchain technology. Among these innovations, Ethereum has emerged as one of the most influential platforms shaping […]
