<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[ValueCurve: Time Series]]></title><description><![CDATA[ A curated summary of the latest in Data + AI ]]></description><link>https://on.valuecurve.ai/s/echosignal</link><image><url>https://substackcdn.com/image/fetch/$s_!3AgI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbcb3fea-b543-4848-9539-ab6ba8c51766_500x500.png</url><title>ValueCurve: Time Series</title><link>https://on.valuecurve.ai/s/echosignal</link></image><generator>Substack</generator><lastBuildDate>Sun, 12 Apr 2026 12:50:54 GMT</lastBuildDate><atom:link href="https://on.valuecurve.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Sarfaraz Mulla]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[valuecurve@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[valuecurve@substack.com]]></itunes:email><itunes:name><![CDATA[Sarfaraz Mulla]]></itunes:name></itunes:owner><itunes:author><![CDATA[Sarfaraz Mulla]]></itunes:author><googleplay:owner><![CDATA[valuecurve@substack.com]]></googleplay:owner><googleplay:email><![CDATA[valuecurve@substack.com]]></googleplay:email><googleplay:author><![CDATA[Sarfaraz Mulla]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Tools Shaping the Future of AI Development]]></title><description><![CDATA[From compact reasoning models to full-stack agent platforms and desktop-scale supercomputers tools driving innovation in AI reasoning, auditing, deployment, and experimentation.]]></description><link>https://on.valuecurve.ai/p/tools-shaping-the-future-of-ai-development</link><guid isPermaLink="false">https://on.valuecurve.ai/p/tools-shaping-the-future-of-ai-development</guid><dc:creator><![CDATA[Sarfaraz Mulla]]></dc:creator><pubDate>Wed, 15 Oct 2025 03:31:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RwDP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This collection highlights six advanced AI tools and platforms driving innovation in reasoning, agent development, auditing, and deployment. 
</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://on.valuecurve.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://on.valuecurve.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RwDP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RwDP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png 424w, https://substackcdn.com/image/fetch/$s_!RwDP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png 848w, https://substackcdn.com/image/fetch/$s_!RwDP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png 1272w, https://substackcdn.com/image/fetch/$s_!RwDP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RwDP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png" width="720" height="394" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ee87a19f-b427-45d2-b494-826672f7960d_720x394.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:394,&quot;width&quot;:720,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Samsung Logo | Brand Identity | Samsung US&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Samsung Logo | Brand Identity | Samsung US" title="Samsung Logo | Brand Identity | Samsung US" srcset="https://substackcdn.com/image/fetch/$s_!RwDP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png 424w, https://substackcdn.com/image/fetch/$s_!RwDP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png 848w, https://substackcdn.com/image/fetch/$s_!RwDP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png 1272w, 
https://substackcdn.com/image/fetch/$s_!RwDP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee87a19f-b427-45d2-b494-826672f7960d_720x394.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>[1] Samsung SAIL Montreal&#8217;s</strong> <strong><a href="https://github.com/SamsungSAILMontreal/TinyRecursiveModels">Tiny Recursive Models (TRM) </a></strong>represent a minimalist approach to AI reasoning, using a small neural network with just 7 million parameters to tackle complex tasks. TRM challenges the reliance on large language models by showing that recursive reasoning&#8212;where the model iteratively refines its answers&#8212;can yield strong results with minimal compute. The model begins with an embedded question, answer, and latent state, then updates its latent state and answer over multiple steps to improve accuracy. </p><p>TRM achieved notable scores on <em>ARC-AGI</em> benchmarks (45% on ARC-AGI-1 and 8% on ARC-AGI-2), levels typically reached by much larger models. It avoids complex theoretical constructs, focusing instead on practical recursion. The codebase is open-source, built in Python with CUDA and PyTorch, and has been tested on datasets like ARC-AGI, Sudoku-Extreme, and Maze-Hard. This work underscores the potential of compact models in AI safety and reasoning research.</p><p><strong>[2] </strong>The <strong><a href="https://docs.claude.com/en/api/agent-sdk/overview">Claude Agent SDK</a></strong> is a developer toolkit for building and deploying custom AI agents. It supports both TypeScript (for Node.js and web apps) and Python (for data science), with streaming and single input modes. The SDK is built on the Claude Code agent harness, offering prompt caching and performance optimization. Key features include automatic context management, error handling, session control, and monitoring&#8212;essential for production use. Agents can use built-in tools for file operations, code execution, and web search, and connect to external services via the Model Context Protocol (MCP). 
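</p><p>The snippet below is a minimal sketch of driving such an agent from Python. The package name, the query() entry point, and the option fields are assumptions based on the description above, so check the official SDK reference for the exact names and signatures.</p><pre><code class="language-python">import asyncio

# Assumed module and attribute names; verify against the official SDK reference.
from claude_agent_sdk import ClaudeAgentOptions, query


async def main():
    # The SDK reads ANTHROPIC_API_KEY from the environment, as noted above.
    options = ClaudeAgentOptions(
        system_prompt="You are an SRE assistant that diagnoses failing deployments.",
        allowed_tools=["Read", "Grep", "Bash"],  # assumed field for tool restrictions
        max_turns=5,
    )
    # query() is assumed to stream messages: intermediate tool calls, then the final answer.
    async for message in query(prompt="Why is the checkout service returning 502 errors?", options=options):
        print(message)


asyncio.run(main())
</code></pre><p>In streaming mode, the same loop would surface intermediate tool activity as well as the final response, which is what makes the harness suitable for long-running agents.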
</p><p>Developers can define agent roles using System Prompts and control tool access with allowedTools or disallowedTools. The SDK supports Claude Code features like Subagents, Hooks, and Slash Commands through file-based configuration. It enables various agent types, including coding agents (e.g., SRE bots, code reviewers) and business agents (e.g., legal assistants, finance advisors, support bots). Authentication requires an API key via the <strong>ANTHROPIC_API_KEY</strong> environment variable, with optional support for Amazon Bedrock and Google Vertex AI. Overall, the SDK provides a structured, extensible foundation for building reliable, task-specific AI agents.</p><div><hr></div><p><strong>[3]</strong> <strong><a href="https://alignment.anthropic.com/2025/petri/">Petri (Parallel Exploration Tool for Risky Interactions)</a></strong> is an open-source framework for auditing AI models by automating behavior testing across diverse scenarios. Built on the UK AI Security Institute&#8217;s Inspect framework, Petri supports most model APIs and reduces the manual effort needed for alignment evaluations. Its process includes four steps: forming hypotheses about risky behaviors, writing seed instructions for audit scenarios, running automated assessments via an auditor agent and a judge, and iterating based on transcript scores. The auditor agent simulates interactions with the target model, adjusting its approach dynamically. The judge scores transcripts across multiple dimensions, extracting highlights and summaries to identify misaligned behaviors. </p><p>Petri has surfaced issues like <em>deception</em>, <em>oversight subversion</em>, and <em>whistleblowing</em> in frontier models. In pilot tests, Claude Sonnet 4.5 and GPT-5 showed strong safety profiles, while others like Gemini 2.5 Pro and Grok-4 raised concerns. Limitations include realism gaps in transcripts, reliance on human-generated hypotheses, auditor model constraints, and judge subjectivity. Petri is extensible and includes 111 sample seed instructions, enabling rapid exploration and customization of audit tools and scoring systems.</p><p><strong>[4] <a href="https://docs.claude.com/en/api/agent-sdk/overview">AgentKit</a></strong> is <strong>OpenAI&#8217;s</strong> full-stack platform for building, deploying, and optimizing AI agents, replacing earlier tools like the Agents SDK and Responses API. It includes Agent Builder, a visual canvas for designing multi-agent workflows with drag-and-drop nodes, preview runs, tool integration, and version control. ChatKit enables seamless embedding of chat-based agents into products or websites, handling streaming, thread management, and customizable UI. The Connector Registry provides enterprises with a centralized panel to manage data and tool integrations, including pre-built connectors and third-party Model Context Protocols (MCPs). Guardrails offer a modular safety layer to detect jailbreaks and protect sensitive data. </p><p>AgentKit also expands evaluation capabilities through Evals, supporting dataset creation, trace grading, prompt optimization, and third-party model assessment. For advanced tuning, it includes Reinforcement Fine-Tuning (RFT), available on o4-mini and in beta for GPT-5, allowing custom tool call training and grader configuration. As of October 2025, ChatKit and Evals are generally available, while Agent Builder remains in beta. 
AgentKit is designed to streamline agent development for both individual developers and enterprise teams.</p><div><hr></div><p><strong>[5]</strong> <strong><a href="https://www.docker.com/blog/ibm-granite-4-0-models-now-available-on-docker-hub/">IBM Granite 4.0</a></strong> is a family of open-source language models available on the Docker Hub model catalog, enabling developers to quickly build generative AI applications using Docker Model Runner. Designed for speed, flexibility, and cost-efficiency, Granite 4.0 combines enterprise-grade performance with a lightweight footprint, making it ideal for local prototyping and scalable deployment. Licensed under Apache 2.0, the models are customizable and commercially usable. </p><p>Technically, Granite 4.0 uses a hybrid architecture that merges Mamba-2&#8217;s linear efficiency with transformer precision, and select models apply a Mixture of Experts (MoE) strategy to reduce memory usage by over 70%. It also supports extremely long context lengths&#8212;up to 128,000 tokens&#8212;limited only by hardware. The model lineup includes H-Small (32B total, ~9B active) for RAG and agents on L4 GPUs, H-Tiny (7B total, ~1B active) for edge deployment on RTX 3060, H-Micro and Micro (3B dense) for ultra-light or fallback use cases. These variants support development on accessible hardware. With Docker Model Runner, developers can deploy models via an OpenAI-compatible API for tasks like document analysis, advanced RAG systems, multi-agent workflows, and edge AI applications.</p><p><strong>[6] </strong>The <strong><a href="https://www.asus.com/networking-iot-servers/desktop-ai-supercomputer/ultra-small-ai-supercomputers/asus-ascent-gx10/">ASUS Ascent GX10 </a></strong>is a compact desktop AI supercomputer built on <strong>NVIDIA</strong> <strong>DGX&#8482; Spark </strong>and powered by the NVIDIA&#174; GB10 Grace Blackwell Superchip. It delivers 1 petaFLOP of AI performance using FP4 and features a fifth-generation Blackwell GPU, 128 GB of LPDDR5x unified memory, and a high-performance 20-core Arm CPU for fast training and inference. With NVIDIA&#174; NVLink&#8482;-C2C and ConnectX-7 networking, it supports scalable multi-GX10 setups for handling models like Llama 3.1 with 405 billion parameters. Designed for minimal footprint and high reliability, the GX10 includes QuietFlow Cooling, dual vapor chambers, and passes MIL-STD 810H durability tests. It supports up to five 4K displays and NVIDIA DLSS 4 for enhanced visuals. </p><p>The system runs NVIDIA DGX&#8482; OS with Ubuntu and comes preloaded with CUDA, PyTorch, TensorFlow, Jupyter, TensorRT, NIM&#8482;, and Blueprints. It enables development and fine-tuning of models up to 200 billion parameters and supports workloads across generative AI, computer vision, analytics, and simulation. Models can be transitioned to DGX Cloud or other infrastructures with minimal code changes. Connectivity includes multiple USB-C ports, HDMI 2.1b, 10 GbE LAN, and a ConnectX-7 NIC, making it a powerful, developer-optimized platform for AI experimentation and deployment.</p><div><hr></div><p><em>Together, these platforms reflect a shift toward modular, efficient, and developer-accessible AI infrastructure. From minimalist reasoning models and scalable agent frameworks to open-source language models and compact supercomputers, the ecosystem is evolving to support rapid prototyping, safe deployment, and high-performance experimentation across diverse AI workloads. 
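</em></p><p>As a small illustration of that accessibility, the sketch below queries a Granite 4.0 model served locally by Docker Model Runner through its OpenAI-compatible API. The model tag, port, and path are assumptions for illustration; substitute the values from your own Docker setup.</p><pre><code class="language-python">from openai import OpenAI

# Point the standard OpenAI client at the local Docker Model Runner endpoint.
# The base URL and model tag below are assumptions; local runners typically ignore the API key.
client = OpenAI(
    base_url="http://localhost:12434/engines/v1",
    api_key="not-needed-locally",
)

response = client.chat.completions.create(
    model="ai/granite-4.0-h-tiny",  # assumed Docker Hub model tag
    messages=[
        {"role": "system", "content": "You summarize contracts for a RAG pipeline."},
        {"role": "user", "content": "Summarize the termination clause in two sentences."},
    ],
)
print(response.choices[0].message.content)
</code></pre><p><em>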
These innovations empower researchers, developers, and enterprises to build more capable, aligned, and accessible AI systems.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.valuecurve.co/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share ValueCurve&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.valuecurve.co/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share ValueCurve</span></a></p>]]></content:encoded></item><item><title><![CDATA[Domestic Chips and Large Models Driving China’s AI Advancement]]></title><description><![CDATA[Signal Six - A curated summary of the latest in Data + AI, to keep you updated about the fast paced Technology Landscape.]]></description><link>https://on.valuecurve.ai/p/alibabas-qwen3-max-trillion-parameter</link><guid isPermaLink="false">https://on.valuecurve.ai/p/alibabas-qwen3-max-trillion-parameter</guid><dc:creator><![CDATA[Sarfaraz Mulla]]></dc:creator><pubDate>Wed, 08 Oct 2025 03:31:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gdHo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>China&#8217;s AI sector is rapidly advancing on two fronts: self-reliant hardware and globally competitive AI models. Huawei, Alibaba, Tencent, DeepSeek, Zhipu, and ByteDance announced breakthroughs across chips, large-scale LLMs, and multimodal systems&#8212;positioning the country&#8217;s ecosystem to close benchmark gaps with the U.S. while driving down costs of development and deployment.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://on.valuecurve.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://on.valuecurve.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h4>1. Huawei: Ascend 910C Chip and Roadmap</h4><p>Huawei began shipping its <a href="https://www.huaweicentral.com/huawei-silently-testing-ascend-910c-ai-chip-to-rival-nvidia-report/">Ascend 910C chip</a> to major firms like Baidu and ByteDance for testing. Claimed to match Nvidia&#8217;s H100 in performance, this domestic GPU breakthrough supports self-reliant AI infrastructure, with initial deliveries enabling faster training of large models on Chinese hardware. 
Huawei also announced its 2025-2028 <a href="https://www.huawei.com/en/news/2025/9/hc-xu-keynote-speech">Ascend roadmap</a>, incorporating self-developed HBM memory for 2 PFLOPS per chip to ensure supply-chain independence, with clusters matching NVIDIA&#8217;s aggregate power despite single-chip gaps.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gdHo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gdHo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png 424w, https://substackcdn.com/image/fetch/$s_!gdHo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png 848w, https://substackcdn.com/image/fetch/$s_!gdHo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!gdHo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gdHo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png" width="1456" height="620" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:620,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!gdHo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png 424w, https://substackcdn.com/image/fetch/$s_!gdHo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png 848w, https://substackcdn.com/image/fetch/$s_!gdHo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!gdHo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1bdfdb48-ff4d-4cbf-a4aa-e97824c348d1_2350x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div 
class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h4>2. Alibaba: Qwen3-Max Model Leadership</h4><p>Building on this hardware foundation, Alibaba made headlines with its <a href="https://qwen.ai/blog?id=241398b9cd6353de490b0f82806c7848c5d2777d&amp;from=research.latest-advancements-list">Qwen3-Max</a>, a 1T+ parameter model that topped Hugging Face rankings. Backed by a &#165;380B AI infrastructure investment, it surpasses Llama 3.1 in math and coding tasks, supports 29+ languages, and offers a massive 2M-token context window. The Qwen3-Max-Instruct variant targets coding, instruction-following, and agent applications, underpinning rapid enterprise adoption and closing the U.S.-China benchmark gap.</p><div><hr></div><h4>3. Tencent: Hunyuan Image 3.0 Multimodal Model</h4><p>Meanwhile, Tencent contributed advancements on the multimodal front by releasing <a href="https://huggingface.co/tencent/HunyuanImage-3.0">Hunyuan Image 3.0,</a> an 80-billion-parameter Mixture-of-Experts model. Unlike standard diffusion models, it uses a unified autoregressive framework to tightly fuse text and image generation, excelling in photorealism and creative control while broadening accessibility for AI content creation across industries.</p><h4>4. DeepSeek: V3.2-Exp Efficiency Upgrade </h4><p>Efficiency and cost-effectiveness received attention as well, with DeepSeek unveiling its <a href="https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp">V3.2-Exp</a> intermediate model. Building on prior versions, it introduces DeepSeek Sparse Attention to enhance long-context processing while halving API costs. Optimized for Chinese native chips and supporting CUDA cross-compatibility, it positions itself as a flexible, cost-competitive alternative in the LLM market.</p><div><hr></div><h4>5. Zhipu AI: GLM-4.6 and Claude Migration Plan</h4><p>Zhipu AI also advanced with <a href="https://docs.z.ai/guides/llm/glm-4.6">GLM-4.6</a>, enhancing long-context reasoning and agent workflows while supporting massive input and output token windows. 
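A rough sketch of calling it through an OpenAI-style client appears below; the base URL, model id, and key variable are assumptions, so take the exact values from Zhipu&#8217;s API documentation.</p><pre><code class="language-python">import os

from openai import OpenAI

# Assumed OpenAI-compatible endpoint and model id for GLM-4.6; confirm both
# (and the environment variable name) in Zhipu's official API documentation.
client = OpenAI(
    base_url="https://open.bigmodel.cn/api/paas/v4",
    api_key=os.environ["ZHIPU_API_KEY"],
)

response = client.chat.completions.create(
    model="glm-4.6",
    messages=[{"role": "user", "content": "Outline a plan for refactoring a legacy ETL pipeline."}],
)
print(response.choices[0].message.content)
</code></pre><p>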
Available via API and open weights for local deployment, Zhipu&#8217;s release comes with a migration plan targeting Anthropic Claude users&#8212;offering a lower-cost, higher-usage alternative designed to attract a large user base.</p><h4>6. ByteDance: Doubao 1.6-Vision Multimodal Launch</h4><p>Finally, ByteDance&#8217;s cloud and AI unit Volcengine launched <a href="https://www.volcengine.com/product/doubao">Doubao 1.6-Vision</a>, a multimodal model introducing tool-calling for complex visual tasks. It delivers advanced visual reasoning and image operation capabilities like cropping and annotation, while slashing deployment costs nearly in half compared to its predecessor, enabling broader affordability and scalability for visual AI services.</p><div><hr></div><p><em>Taken together, these announcements show a coordinated acceleration of China&#8217;s AI ecosystem&#8212;expanding from hardware foundations to frontier models and cost-efficient applications. The focus on domestic chip capability, long-context LLMs, multimodal systems, and accessible pricing signals more than just catch-up; it highlights a maturing ecosystem increasingly able to set competitive benchmarks on its own terms.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.valuecurve.co/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share ValueCurve&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.valuecurve.co/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share ValueCurve</span></a></p>]]></content:encoded></item><item><title><![CDATA[Sequoia estimates a 10 Trillion AI Revolution ]]></title><description><![CDATA[How Sequoia Capital envisions the future of AI unfolding, and the investment opportunities they have identified.]]></description><link>https://on.valuecurve.ai/p/sequoias-estimates-a-10-trillion</link><guid isPermaLink="false">https://on.valuecurve.ai/p/sequoias-estimates-a-10-trillion</guid><dc:creator><![CDATA[Sarfaraz Mulla]]></dc:creator><pubDate>Sun, 05 Oct 2025 03:31:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/203381e2-be2e-4a62-ad8a-0f32fa0fc846_940x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><a href="https://www.sequoiacap.com">Sequoia Capital</a>, a leading Silicon Valley venture capital firm known for investing in early and growth-stage technology companies, recently released a <a href="https://youtu.be/yoycgOMq1tI?si=Li-DOkIvC3b8AjZR">presentation</a> on AI, describing it as a $10 trillion &#8220;cognitive revolution.&#8221; Their core thesis positions this transformation as being as significant&#8212;if not more so&#8212;than the Industrial Revolution.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://on.valuecurve.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://on.valuecurve.ai/subscribe?"><span>Subscribe now</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AFYt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AFYt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic 424w, https://substackcdn.com/image/fetch/$s_!AFYt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic 848w, https://substackcdn.com/image/fetch/$s_!AFYt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic 1272w, https://substackcdn.com/image/fetch/$s_!AFYt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AFYt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic" width="940" height="288" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:288,&quot;width&quot;:940,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:12949,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.valuecurve.co/i/174907006?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AFYt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic 424w, https://substackcdn.com/image/fetch/$s_!AFYt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic 848w, https://substackcdn.com/image/fetch/$s_!AFYt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic 1272w, https://substackcdn.com/image/fetch/$s_!AFYt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b8fb458-e904-42c2-b9ac-8e57b2272026_940x288.heic 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 
15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><p><a href="https://www.sequoiacap.com/people/konstantine-buhler/">Konstantine Buhler</a> from Sequoia draws a compelling parallel between milestones of the industrial era&#8212;the steam engine, the first factory system, and the assembly line&#8212;and key developments in the AI era, such as the introduction of the first GPU (the GeForce 256 in 1999) and the establishment of the first &#8220;AI factory&#8221; in 2016. Just as it took 144 years to perfect the factory assembly line, AI is now entering a crucial phase of specialization.</p><p>For a complex system like AI to mature, it must integrate general-purpose components (like foundational AI models) with highly specialized subsystems and labor. Sequoia views today&#8217;s startups as the primary drivers of this specialization, building targeted applications atop general AI technologies.</p><div><hr></div><h4>How the AI Future will unfold</h4><p>Sequoia expects AI to catalyze a major economic shift by automating and expanding the market for knowledge work and services. They identify the $10 trillion US services market as the primary opportunity. Comparing it with the evolution of software-as-a-service (SaaS)&#8212;which expanded the on-premise software market&#8212;AI is anticipated not only to grow the market share but also to enlarge the services industry itself. This expansion could give rise to large, standalone public companies centered on AI, akin to the industrial giants of past eras.</p><p><strong>Investment Trends Sequoia is watching</strong></p><ol><li><p><strong>Leverage Over Uncertainty:</strong> Work is shifting from tasks with low leverage and high certainty to tasks where AI delivers massive leverage (100%+), albeit with less predictable outcomes. For example, a salesperson might deploy hundreds of AI agents to monitor accounts and intervene only as needed.</p></li><li><p><strong>Real-World Measurement:</strong> The benchmark for AI performance has moved beyond academic datasets like ImageNet. 
Sequoia points to real-world validation, such as AI hackers competing live on platforms like HackerOne, as a more meaningful measure.</p></li><li><p><strong>Reinforcement Learning:</strong> Previously limited to research labs, reinforcement learning is now employed by startups to train open-source models, especially in coding and software development.</p></li><li><p><strong>AI in the Physical World:</strong> AI is expanding beyond software, powering robotics, manufacturing processes, and quality assurance systems.</p></li><li><p><strong>Compute as the New Production Function:</strong> The emerging key metric is &#8220;flops per knowledge worker.&#8221; Sequoia&#8217;s portfolio companies forecast a 10x to 10,000x increase in compute consumption as workers begin leveraging hundreds or thousands of AI agents.</p></li></ol><div><hr></div><h4>Investment Themes for the next 12&#8211;18 Months</h4><ol><li><p><strong>Persistent Memory:</strong> A significant unsolved challenge in AI is long-term memory&#8212;both in retaining conversational context and preserving an agent&#8217;s identity. Current approaches, including vector databases and extended context windows, remain insufficient and need to be addressed.</p></li><li><p><strong>Seamless Communication Protocols:</strong> Just as TCP/IP enabled the internet, new protocols are needed for AI agents to communicate and collaborate effectively. This advancement would allow agents to perform complex tasks, such as researching, comparing prices, and completing purchases autonomously.</p></li><li><p><strong>AI Voice:</strong> Advances in fidelity and reduced latency now make AI voice suitable for real-time conversations. Applications include consumer-facing virtual companions and enterprise solutions like logistics coordination and trading desks.</p></li><li><p><strong>AI Security:</strong> There is a substantial opportunity to embed security at every layer of the AI stack&#8212;from model development to end-user protection. Sequoia envisions a future with hundreds of AI security agents safeguarding each human and AI agent.</p></li><li><p><strong>Open Source:</strong> Despite its precarious position, Sequoia considers open-source AI essential for a free and equitable future. They aim to support models that remain accessible and competitive, counterbalancing proprietary dominance.</p></li></ol><p>Sequoia&#8217;s view is that AI is entering a phase of deep specialization, with startups leading the development of the next wave of infrastructure and applications. 
This moment represents an economically transformative era with long-term opportunities across services, compute, security, and open-source ecosystems.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.valuecurve.co/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share ValueCurve&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.valuecurve.co/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share ValueCurve</span></a></p>]]></content:encoded></item><item><title><![CDATA[Anthropic introduces Claude Sonnet 4.5]]></title><description><![CDATA[Gemini Robotics, Google Looker Studio, and OpenAI Parental Controls]]></description><link>https://on.valuecurve.ai/p/claude-introduces-sonnet-45</link><guid isPermaLink="false">https://on.valuecurve.ai/p/claude-introduces-sonnet-45</guid><dc:creator><![CDATA[Sarfaraz Mulla]]></dc:creator><pubDate>Wed, 01 Oct 2025 03:34:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rSGr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Six Pieces - A curated summary of the latest in Data + AI, to keep you updated about the fast-paced technology landscape.</em></p><div><hr></div><p><strong>[1] Anthropic</strong> has released Claude Sonnet 4.5, its <a href="https://www.anthropic.com/news/claude-sonnet-4-5">most capable model</a> to date, designed for complex agentic workflows and high-stakes coding tasks. The model supports a 200K token context window by default, with access to a 1 million token context in beta, enabling long-horizon tasks and large document processing.</p><p>According to Anthropic, Sonnet 4.5 demonstrates stronger performance in reasoning, code generation, and multimodal understanding. It offers a balance of intelligence and speed suitable for enterprise workloads and real-time AI experiences, and alignment has been significantly improved. The model now shows reduced tendencies toward sycophancy, deception, and power-seeking, and is more robust against prompt injection attacks. 
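</p><p>A minimal Messages API call against the new model looks like the sketch below; the model identifier string is an assumption, so confirm it against Anthropic&#8217;s current model list.</p><pre><code class="language-python">import os

from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-sonnet-4-5",  # assumed model id; check the published model list
    max_tokens=1024,
    system="You are a careful code reviewer.",
    messages=[{"role": "user", "content": "Review this function for race conditions: ..."}],
)
print(message.content[0].text)
</code></pre><p>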
Sonnet 4.5 is available via the Claude API, Amazon Bedrock, and Google Cloud&#8217;s Vertex AI, with pricing unchanged from Sonnet 4.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rSGr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rSGr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png 424w, https://substackcdn.com/image/fetch/$s_!rSGr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png 848w, https://substackcdn.com/image/fetch/$s_!rSGr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png 1272w, https://substackcdn.com/image/fetch/$s_!rSGr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rSGr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png" width="1374" height="1362" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1362,&quot;width&quot;:1374,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:162275,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.valuecurve.co/i/174892468?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rSGr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png 424w, https://substackcdn.com/image/fetch/$s_!rSGr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png 848w, https://substackcdn.com/image/fetch/$s_!rSGr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png 1272w, https://substackcdn.com/image/fetch/$s_!rSGr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19367f43-859c-4e2f-b890-b6dafd55735f_1374x1362.png 1456w" sizes="100vw" fetchpriority="high"></picture><div 
class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>[2] OpenAI </strong>has introduced <a href="https://openai.com/index/introducing-parental-controls/">parental controls</a> for ChatGPT to help families manage teen usage in a safer, age-appropriate way. Parents can link their accounts with their teen&#8217;s, customize settings, and apply safeguards such as reduced exposure to graphic content, sexual or violent roleplay, and viral challenges. Additional controls include quiet hours, disabling voice mode, memory, image generation, and opting out of model training. A notification system alerts parents if ChatGPT detects signs of potential self-harm.  As per Open AI, these features were developed in consultation with experts and advocacy groups, and are part of OpenAI&#8217;s broader effort to build toward an age prediction system that automatically applies teen-appropriate settings.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://on.valuecurve.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://on.valuecurve.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p>[3] <strong>OpenAI</strong> has launched &#8220;<em><a href="https://openai.com/index/buy-it-in-chatgpt/">Buy it in ChatGPT</a></em>,&#8221; starting with <em>Instant Checkout</em>, a feature that lets users purchase products directly within ChatGPT. Initially available for U.S. <strong>Etsy</strong> sellers and expanding soon to over a million Shopify merchants like Glossier, SKIMS, and Vuori , it supports single-item purchases with multi-item carts coming later. The feature is powered by the<em> </em><a href="https://developers.openai.com/commerce">Agentic Commerce Protocol (ACP)</a>, an open standard co-developed with Stripe. 
ACP enables secure, real-time communication between AI agents (like ChatGPT), buyers, and merchants, allowing ChatGPT to act as a digital shopper that facilitates transactions without leaving the chat.</p><p><strong>[4]</strong><em> </em><a href="https://deepmind.google/models/gemini-robotics/gemini-robotics-er/">Gemini Robotics-ER 1.5</a><em> </em>is <strong><a href="https://deepmind.google/discover/blog/gemini-robotics-15-brings-ai-agents-into-the-physical-world/">Google DeepMind</a></strong>&#8217;s latest embodied reasoning model, designed to serve as a high-level cognitive engine for physical agents. It integrates spatial and temporal reasoning, task planning, and progress estimation, enabling robots to interpret scenes, plan multi-step actions, and execute tasks using external tools like <em>Google Search </em>or custom APIs. The model introduces a tunable &#8220;thinking budget&#8221; to balance latency and accuracy, supports semantically grounded 2D point generation, and demonstrates improved safety by recognizing physical constraints and refusing unsafe plans. <a href="http://deepmind.google/models/gemini-robotics/gemini-robotics/">Gemini Robotics 1.5</a> refers to the broader system that combines ER 1.5 with vision-language-action (VLA) models and cross-embodiment learning to support end-to-end robotic control. Gemini Robotics-ER 1.5 is available in preview via the Gemini API in Google AI Studio, while Gemini Robotics 1.5 is currently accessible only to select partners.</p><div><hr></div><p><strong>[5] </strong><a href="https://cloud.google.com/looker-studio">Looker Studio</a> is <strong>Google Cloud</strong>&#8217;s self-service business intelligence platform for creating customizable dashboards and reports. It supports over 800 data connectors and includes features like drag-and-drop editing, prebuilt templates, report embedding, and an API for asset management. Recent updates include the launch of <em>Looker Studio Pro</em>, which adds enterprise capabilities such as team workspaces, <strong>Google Cloud </strong>project linking, and admin support for governance and access control. Pricing for <em>Looker Studio Pro</em> is <a href="https://cloud.google.com/looker-studio">$9 per user per project per month</a>, while the standard version remains free for creators and viewers.</p><p>[6]  <strong>Tilde</strong> (an EU / Baltic / Nordic AI startup) released <strong><a href="https://tilde.ai/news/tilde-releases-tildeopen-llm/">TildeOpen LLM</a></strong>, a 30-billion-parameter open model optimized for European languages and trained on the EuroHPC LUMI supercomputer.  TildeOpen was trained on the LUMI supercomputer using AMD Instinct&#8482; MI250X accelerators and supports all 24 official EU languages plus others, and is released under a permissive license (CC-BY-4.0). This bolsters Europe&#8217;s multilingual AI infrastructure, improving coverage in lesser-served languages.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://on.valuecurve.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://on.valuecurve.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>