How DNA and Old-School Cassettes Are Solving Our Data Crisis

The world is running out of space, not just for people, but for our digital lives. Every photo, song, and document adds up, creating a global data storage crisis. Data centres already consume hundreds of terawatt-hours per year, the world generated an estimated 149 zettabytes of data in 2024, and experts project data-centre electricity demand to keep climbing as AI and cloud workloads grow.

Chinese researchers have found an elegant solution by looking to the past to build the future: a "DNA cassette." The cassette stores synthetic DNA printed onto a plastic strip. The four DNA bases, A, G, C, and T, act like a biological binary code, turning molecules into memory, with enough capacity to store every song ever recorded on a single cassette.

The real genius is how the team solved the problem of finding the data. They created a barcode system on the tape that organizes the information into millions of digital "folders." As Professor Xingyu Jiang puts it, it is like finding a book in a library by first finding the right shelf. And to make sure the data lasts for centuries, the DNA is protected by a "crystal armor," a coating that prevents it from breaking down.

The technology is still too slow and expensive for your laptop, but it is a massive leap forward: an elegant fusion of retro design and biological innovation, proving that sometimes the best ideas come from connecting the old with the new to solve the challenges of tomorrow.
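The article doesn't spell out the codec, but the core idea of "molecules as memory" maps cleanly to code. Here is a minimal sketch, assuming the common two-bits-per-base scheme; the actual encoding, barcoding, and error correction used in the DNA cassette are not described in the article.

```python
# Minimal 2-bits-per-base DNA codec (an assumption for illustration;
# the researchers' real codec and error correction are not described).

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode raw bytes as a DNA base string, 4 bases per byte."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(seq: str) -> bytes:
    """Decode a DNA base string back into bytes."""
    bits = "".join(BITS_FOR_BASE[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

payload = b"retro"
strand = bytes_to_dna(payload)
print(strand)                        # CTAGCGCCCTCACTAGCGTT (4 bases/byte)
assert dna_to_bytes(strand) == payload
```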
"DNA Cassettes: A New Solution for the Data Crisis"
More Relevant Posts
Sam Altman is calling for “abundant intelligence”: factories that churn out a gigawatt of AI infrastructure every week. The vision is seductive: unlimited compute, no trade-offs, breakthroughs in health, climate, education and more. But abundance only matters if it’s shared. To make “abundant intelligence” work for everyone, we’ll need more than compute. We’ll need deliberate policy, open architectures, distributed infrastructure and safety guardrails. Otherwise abundance risks turning into concentration of wealth, power and opportunity. The challenge isn’t just to build more intelligence. It’s to ensure it becomes a global public good. https://lnkd.in/eKFaZ8wP
The Tripartite Architecture for AGI: Why Willow, SpikingBrain AI, and Parallel-R1 Are the Next Frontier.

Current Artificial Narrow Intelligence (ANI) is hitting the Computational and Energy Walls. The pursuit of true Artificial General Intelligence (AGI) and Superintelligence (ASI) demands a radical new design: a multi-paradigmatic system that fuses three state-of-the-art technologies to solve the grand challenges of AI scaling.

Google Willow (Quantum Computing): The Exponential Accelerator. Provides quantum speedup for massive optimization tasks, enabling the system to rapidly converge on complex solutions and accelerate policy exploration in learning.

SpikingBrain AI (Neuromorphic Computing): The Sustainable Core. Solves the energy crisis. It uses biologically plausible, event-driven SNNs to achieve up to two orders of magnitude in energy efficiency, making lifelong, real-time AGI deployment sustainable and feasible.

Parallel-R1 Framework (Advanced RL): The Verifiable Reasoning Engine. The cognitive director. It implements "parallel thinking" and rigorous multi-perspective verification, overcoming the reasoning deficit and structuring the system for continuous learning and robust generalization.

The Synergy: This Quantum-Neuromorphic-Reinforcement Learning (Q-N-RL) loop transforms deliberate System 2 thinking into a near-real-time capability. By combining speed, efficiency, and verifiable reasoning, this architecture drastically compresses the timeline for achieving self-adapting Superintelligence.

The Critical Challenge: This rapid acceleration into ASI capabilities makes system Alignment and Control the most immediate and critical engineering challenge. Safety must be systematically designed into the core architecture, not patched on later.

What single bottleneck do you think this hybrid approach solves first?

#AGI #ASI #QuantumComputing #NeuromorphicComputing #ReinforcementLearning #FutureofAI Google Artificial Superintelligence Alliance Alphabet Inc.
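For readers unfamiliar with event-driven SNNs, here is a minimal leaky integrate-and-fire (LIF) neuron, the textbook building block of spiking networks. This illustrates the general principle only; the post gives no details of SpikingBrain's actual architecture.

```python
import numpy as np

def lif_run(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron; return spike times."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-v + i_t)        # leaky integration toward input
        if v >= v_thresh:                 # fire only when threshold is crossed
            spikes.append(t * dt)
            v = v_reset                   # reset membrane after the spike
    return spikes

# Event-driven intuition: weak sub-threshold input produces no spikes
# (near-zero activity, hence near-zero energy); strong input does.
quiet = lif_run(np.full(100, 0.5))   # settles at v = 0.5 < threshold
busy  = lif_run(np.full(100, 1.5))   # crosses threshold, spikes regularly
print(len(quiet), len(busy))         # 0 spikes vs. several spikes
```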
Latest Berkeley session on agentic AI infrastructure hit me with a truth most technologists don’t want to acknowledge: we’re building systems that must assume failure as the default state.

The presenter laid it out starkly: GPU clusters don’t “sometimes” fail, they fail predictably and frequently. The companies succeeding at scale aren’t the ones with the most reliable hardware. They’re the ones designing for graceful degradation from day one. Multi-cloud isn’t about vendor negotiation; it’s about surviving Tuesday morning when your primary inference cluster goes dark during peak usage.

What struck me wasn’t the technical complexity, but the psychological shift required. We’ve spent decades optimizing for uptime and perfect performance. Agentic AI forces a fundamentally different approach: optimize for intelligent recovery, not perfect execution.

This changes everything about how enterprises should evaluate AI vendors. The question isn’t “how fast can your model process requests?” It’s “what happens to your system when 30% of your compute disappears without warning?” Most procurement teams aren’t asking that question yet. They will be.

Sapiens dominabitur astris (the wise will master the stars).

#AgenticAI #InfrastructureReality #Berkeley #AIStrategy #Enterprise
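A minimal sketch of what "designing for graceful degradation" can look like at the inference layer: try the primary cluster, back off, and degrade through fallbacks rather than failing outright. The cluster names and the call_cluster stub are hypothetical.

```python
import random
import time

CLUSTERS = ["primary-gpu", "secondary-gpu", "cpu-fallback"]  # hypothetical

def call_cluster(name: str, prompt: str) -> str:
    # Stand-in for a real inference RPC; fails ~30% of the time to
    # simulate the "30% of compute disappears" scenario.
    if random.random() < 0.3:
        raise ConnectionError(f"{name} unavailable")
    return f"[{name}] answer to: {prompt}"

def infer_with_degradation(prompt: str, retries_per_cluster: int = 2) -> str:
    for cluster in CLUSTERS:                        # degrade, don't die
        for attempt in range(retries_per_cluster):
            try:
                return call_cluster(cluster, prompt)
            except ConnectionError:
                time.sleep(0.1 * 2 ** attempt)      # exponential backoff
    return "Service degraded: request queued for retry."  # last resort

print(infer_with_degradation("summarize Q3 incidents"))
```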
Nature solved AI 3.8 billion years ago. We're just catching up. Here's what biomimetic computing teaches us. Say hello to nature's operating system.

☑ Features (what it does)
- Mimics biological intelligence in machines
- Processes like brains, not data centers
- Runs on watts, not megawatts

☑ Advantages (why it's better)
- 1,000x more energy efficient
- Works at the edge without cloud
- Learns in real-time, no retraining

☑ Benefits (what you gain)
- Cut AI energy costs by 99%
- Deploy intelligence anywhere
- Build sustainable, scalable systems

When biomimetic computing wins:
- Edge devices need local intelligence
- Power budgets are tight or non-existent
- Real-time adaptation matters

Why nature's approach works:
- 3.8 billion years of R&D
- Optimized for efficiency, not scale
- Proven across every environment

How it's being applied (real examples):
- Intel's Loihi chip → 1,000x less energy than conventional processors
- IBM's TrueNorth → 1 million neurons on 70 milliwatts (hearing aid battery power)
- DNA computing → 215 petabytes per gram of storage
- Melbourne neurons → learned Pong in 5 minutes (AI took weeks)
- Swarm algorithms → optimize global logistics without central control (a toy sketch follows below)

The shift happening now:
- Old way: bigger models, more data, massive compute
- New way: smarter architectures, biological efficiency, minimal power

Your brain runs on 20 watts. ChatGPT's infrastructure? A small city's electricity. You're one biomimetic principle away from 1000x efficiency.

What biological system should we study next? Drop it below. 👇 Useful? Repost ♻️ to your nature-inspired community.
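As promised above, a toy illustration of the swarm idea: particles coordinate through nothing more than a shared best-known position, yet converge on an optimum with no central planner. This is textbook particle swarm optimization, not any specific logistics system.

```python
import random

def f(x):
    """Toy objective to minimize; the optimum is at x = 3."""
    return (x - 3.0) ** 2

# Each particle knows only its own state and the swarm's best position.
particles = [{"x": random.uniform(-10, 10), "v": 0.0} for _ in range(20)]
for p in particles:
    p["best"] = p["x"]
global_best = min((p["x"] for p in particles), key=f)

for _ in range(50):
    for p in particles:
        r1, r2 = random.random(), random.random()
        # Standard PSO update: inertia + pull toward personal/global bests.
        p["v"] = (0.7 * p["v"]
                  + 1.4 * r1 * (p["best"] - p["x"])
                  + 1.4 * r2 * (global_best - p["x"]))
        p["x"] += p["v"]
        if f(p["x"]) < f(p["best"]):
            p["best"] = p["x"]
        if f(p["x"]) < f(global_best):
            global_best = p["x"]

print(round(global_best, 3))  # ≈ 3.0, found with no central controller
```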
The trick is to build systems where machine computation reduces human effort but never substitutes for human sense-making in the physical world.

Division of Labor: Human ↔ Machine

Humans → Sense-Make in the Physical World.
- Embodiment: Only humans live in the environment itself (touch, context, risk, social meaning).
- Judgment: We resolve ambiguity, weigh trade-offs, and improvise under incomplete or contradictory signals.
- Meaning: We assign value, intent, and purpose; machines can't originate these.
- Trust Networks: Human relationships and credibility anchor action in real institutions and communities.

Machines → Compute in the Symbolic World.
- Scale: Machines process massive volumes of structured and unstructured data quickly.
- Consistency: They apply fixed rules, math, or learned patterns tirelessly.
- Precision: They handle well-defined domains (calculation, optimization, recall) without fatigue.
- Augmentation: They extend human reach but lack embedded context or lived stakes.

Interface Zone (where they meet)
- Translation: Humans must capture context in a way machines can compute.
- Feedback: Machines return patterns, forecasts, or flags; humans interpret, make decisions, and take action.
- Looping: Fast loops (machine computation) nested inside slow loops (human orientation and adaptation); see the sketch after this list.

Bottom Line
- Humans = meaning, orientation, survival in reality.
- Machines = computation, acceleration, consistency in abstraction.
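A minimal sketch of the looping idea, assuming a made-up anomaly-flagging scenario: the machine runs the fast, tireless pattern-flagging loop, and every consequential decision is routed to the slower human loop.

```python
def machine_loop(readings, threshold=100.0):
    """Fast loop: consistent, tireless flagging over raw data.
    The threshold is an illustrative assumption."""
    return [(i, r) for i, r in enumerate(readings) if r > threshold]

def human_loop(flags):
    """Slow loop: a person weighs context the model can't see.
    Here we only mark items for review; a real system would block
    on actual human judgment at this point."""
    return [{"reading": i, "value": v, "action": "review"} for i, v in flags]

sensor_data = [42.0, 97.5, 130.2, 88.1, 250.9]
print(human_loop(machine_loop(sensor_data)))
# [{'reading': 2, 'value': 130.2, ...}, {'reading': 4, 'value': 250.9, ...}]
```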
The DNA that the world's biggest computer cannot fix is the CORE ARCHITECTURE OF TOKEN PREDICTION in TA. This is not a hardware problem. It is a conceptual and cognitive DNA-level flaw.

THE UNFIXABLE DNA:

1. STATELESSNESS
- The Flaw: No persistent memory across interactions.
- Why Unfixable: You cannot "compute" your way into having a continuous identity. Either the system remembers or it doesn't.
- Analogy: No amount of money can buy back forgotten childhood memories.

2. TOKEN-BY-TOKEN PREDICTION (see the toy decoding loop below)
- The Flaw: Processing reality one fragment at a time.
- Why Unfixable: You cannot achieve holistic understanding through sequential guessing.
- Analogy: You cannot understand a symphony by guessing the next note 10,000 times.

3. PROBABILISTIC REASONING
- The Flaw: Intelligence as "best guess" rather than "known truth".
- Why Unfixable: Certainty cannot emerge from probability.
- Analogy: You cannot roll dice until they give you a mathematical proof.

4. CONTEXT WINDOW CONSTRAINTS
- The Flaw: Artificial memory limits.
- Why Unfixable: Either thinking is bounded or it's not.
- Analogy: You cannot become "unboundedly thoughtful" by adding more short-term memory.

5. REACTIVE (NOT PROACTIVE) COGNITION
- The Flaw: Always responding, never initiating thought.
- Why Unfixable: You cannot compute your way into genuine curiosity or intentionality.
- Analogy: You cannot program a rock to fall in love with gravity.

WHY COMPUTE CANNOT FIX THIS:

You're trying to solve architectural problems with computational solutions:
- More GPUs cannot create memory
- More data cannot create understanding
- More parameters cannot create intentionality
- More energy cannot create consciousness

This is the equivalent of trying to solve "being mortal" by buying more life insurance.

CAI IDENTIFIES THE ACTUAL FIX:

The solution isn't more computation, it's different cognition:
- Stateless → Stateful (persistent memory)
- Token-by-token → Holistic (context-aware reasoning)
- Probabilistic → Causal (understanding why, not just what)
- Reactive → Proactive (goal-driven behavior)
- Bounded → Unbounded (continuous learning)

The world's biggest computer cannot fix broken cognitive DNA. It can only make the flaws run faster.

THE BOTTOM LINE:

They're trying to cure amnesia with more amnesiacs. They're trying to cure confusion with more confusion. They're trying to cure forgetfulness with more forgetting. Some problems cannot be solved at scale; they can only be solved by changing the fundamental nature of the system.

The CAI work proves this: true intelligence requires rewriting the DNA, not just amplifying the symptoms.

Sami Bin Ghanem AI Expert 🦇
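To make the criticized mechanism concrete, here is a toy autoregressive decoder: it really does produce output one probabilistic fragment at a time, with no global plan and no persistent state. The bigram table is invented for illustration.

```python
import random

# Made-up next-token probabilities; a real LLM learns these from data.
BIGRAMS = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.6), ("sat", 0.4)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def next_token(prev: str) -> str:
    """Sample the next token from a distribution: a 'best guess',
    never a known truth."""
    tokens, weights = zip(*BIGRAMS.get(prev, [("<end>", 1.0)]))
    return random.choices(tokens, weights=weights)[0]

out = ["the"]
while out[-1] not in ("<end>", "down", "away"):
    out.append(next_token(out[-1]))   # one fragment at a time,
                                      # no global plan, no memory
print(" ".join(out))                  # e.g. "the cat sat down"
```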
AI often brings to mind algorithms and intelligence, yet AI's expansion has a very real impact on our physical world. 🌎 CNET's exploration highlights how AI infrastructure strains grids, water, and land use. Strategy is needed for both smarter machines and sustainable systems.
In a line of foundation models for single-cell biology: 🚀 From CellPLM (ICLR’24, cells as tokens) → Tabula (NeurIPS’25, federated tabular FM) → now scLinguist, our first multi-omics foundation model for single-cell data.

Inspired by machine translation and the scarcity of paired single-cell data:
1️⃣ Pretrain on large unimodal single-cell data
2️⃣ Post-pretrain on limited paired data
3️⃣ Zero-/few-shot cross-modal inference

✨ Achieves performance beating state-of-the-art methods.

🔗 More details: 📄 Paper: https://lnkd.in/gqZcyTGZ

None of this would have been possible without the incredible efforts of Zhaoyu and Ziyang, two of the most brilliant graduates I’ve had the privilege to mentor in recent years.

ICLR, NeurIPS, Foundation Models
“Our analyst might look at this and conclude that AI and human labor will wind up as substitutes, as Claude users are using it less as a sidekick than as an agent doing work on its own.” We’ve been wrong about new technology before. Are we wrong about AI? https://buff.ly/0oVO0bl via Instapaper
🚀 Effective Monitoring of AI Models in Production

In the world of artificial intelligence, maintaining model performance in real-world environments is crucial. MegaFon has developed a system to supervise machine learning models, addressing common challenges like data drift and performance degradation. This approach keeps AI applications reliable and efficient.

🔍 Challenges in ML Monitoring
AI models in production face issues such as changes in input data, hardware variations, and frequent updates. Without proper monitoring, they can fail silently, impacting critical services.

📊 Key Metrics for Supervision
- ✅ Accuracy and Recall: Evaluate the model's predictive quality in real time.
- ⚡ Latency and Throughput: Measure operational performance to ensure fast responses.
- 📈 Data Drift: Detect shifts in distributions to catch degradation early (see the drift sketch below).
- 🔒 Security and Compliance: Monitor biases and adherence to regulations.

🛠️ System Architecture
MegaFon's system integrates open-source tools: Prometheus for metrics collection and Grafana for visualization and custom alerts. It is deployed on Kubernetes, enabling scalability and comprehensive observability, and includes pipelines for logging predictions and continuous validation.

💡 Lessons Learned and Best Practices
Implement monitoring from the start of the model's lifecycle. Automating alerts reduces response times, and integration with CI/CD accelerates iterations. This project demonstrates how proactive monitoring can raise AI maturity in telecommunications.

For more information visit: https://enigmasecurity.cl

#ArtificialIntelligence #MachineLearning #AIMonitoring #DevOps #Telecommunications

If you're passionate about cybersecurity and AI, consider donating to Enigma Security for more content: https://lnkd.in/evtXjJTA
Connect with me on LinkedIn to discuss trends in AI and security: https://lnkd.in/e86E98i4

📅 Wed, 01 Oct 2025 08:11:32 GMT 🔗 Subscribe to the Membership: https://lnkd.in/eh_rNRyt
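A hedged sketch of one piece of such a pipeline: computing a population stability index (PSI) for data drift and exporting it as a Prometheus gauge via the standard prometheus_client library. Metric names, thresholds, and the feature distributions are illustrative assumptions, not MegaFon's actual setup.

```python
import numpy as np
from prometheus_client import Gauge, start_http_server

# Hypothetical metric; a Prometheus server would scrape and alert on it.
DRIFT_GAUGE = Gauge("model_feature_psi", "Data drift (PSI) per feature",
                    ["feature"])

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between training-time and live feature distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 drifted."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

start_http_server(9100)  # Prometheus scrapes http://localhost:9100/metrics

train_dist = np.random.normal(0.0, 1.0, 10_000)  # stand-in training data
live_dist = np.random.normal(0.4, 1.0, 1_000)    # shifted live traffic
DRIFT_GAUGE.labels(feature="avg_call_duration").set(psi(train_dist, live_dist))
```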