A tricky geometry puzzle shows just how much leading LLMs have progressed in a little over a year. What took me nearly two hours of back-and-forth with GPT-4o last year, Sonnet 4.5 solves in under 10 seconds today. I detail everything in my latest article on Towards Data Science, and it's completely free to read. Check it out via the link below and see if you can solve the puzzle before looking at the answer! https://lnkd.in/eKgbTrzb
How GPT-4o and Sonnet 4.5 improved in a year: A geometry puzzle
More Relevant Posts
Portfolio Update: What I've Been Working On. Over the past few months, I've immersed myself in Machine Learning, Data Science, and Computer Engineering projects. Here are the highlights:
1. Book Recommendation System: a content-based recommendation engine built with Flask, TF-IDF, and cosine similarity, complemented by dynamic dashboards for data visualization and user insights.
2. Network Fault Node Detection: an end-to-end ML pipeline; benchmarked multiple models, selected the top performers, fine-tuned them individually, and combined them into an ensemble for better accuracy and robustness.
3. Global CO₂ & Energy Trends Analysis: processed and analyzed environmental data, built interactive dashboards in Power BI, and implemented time-series forecasting with Prophet to surface trends and future projections.
4. IMDB Sentiment Analysis: a comparative study of classical NLP models versus deep learning architectures (RNN, LSTM, GRU) for movie-review classification, highlighting the strengths and trade-offs of each approach.
Explore these projects in detail in my portfolio: https://lnkd.in/dbx9wq69
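The core idea behind the first project, content-based recommendation via TF-IDF and cosine similarity, fits in a few lines. This is a minimal dependency-free sketch of that technique, not the Flask app itself; the book titles and function names are made up for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vector (as a dict of term -> weight) for each tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(query_idx, docs, k=2):
    """Indices of the k documents most similar to docs[query_idx]."""
    vecs = tfidf_vectors(docs)
    scores = [(cosine(vecs[query_idx], v), i)
              for i, v in enumerate(vecs) if i != query_idx]
    return [i for _, i in sorted(scores, reverse=True)[:k]]

books = [
    "space opera galactic empire war".split(),
    "galactic war space fleet battles".split(),
    "cozy cooking recipes pasta night".split(),
]
print(recommend(0, books, k=1))  # → [1]: the other space book
```

In a real system you would swap the hand-rolled vectors for scikit-learn's TfidfVectorizer, but the ranking logic is the same.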
For anyone looking to upskill in the AI/ML space, consistent hands-on practice is essential for mastering the fundamentals. A few days ago I discovered a fantastic website for this on X called Deep-ML.com. After using it for a couple of days, I've found it genuinely valuable. Think of it as a LeetCode specifically for Machine Learning and Data Science: a curated collection of problems for building and testing your skills across a wide range of categories. As a free tool for interview prep or simply honing your craft, it's excellent. It's already helping me strengthen my own basics, and I highly recommend checking it out; I hope it adds value for you too.
For anyone building with LLMs and graph databases, managing context is a significant challenge. This diagram from Towards Data Science effectively visualizes a solution: safeguarding Neo4j-MCP-powered agents. By implementing timeout guards, result sanitization, and token truncation, we can ensure the LLM receives controlled and relevant outputs, preventing overload and improving reliability. This is a practical approach to building more robust and scalable AI systems. Dive deeper into the methodology here: https://lnkd.in/gG4p6stF #ArtificialIntelligence #LLM #Neo4j #GraphDatabases #AIArchitecture
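The guard pattern described (a timeout plus truncation of the result to a token budget before it reaches the LLM) can be sketched generically. The function names and the 4-characters-per-token heuristic below are my own illustration, not the article's Neo4j-MCP code:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

MAX_TOKENS = 200        # output budget for what flows back to the LLM
CHARS_PER_TOKEN = 4     # crude heuristic; a real guard would use a tokenizer

def guarded_query(run_query, timeout_s=5.0):
    """Run a (possibly slow) database call under a timeout, then
    truncate the result to a token budget before the LLM sees it."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(run_query)
        try:
            result = future.result(timeout=timeout_s)
        except FuturesTimeout:
            # Note: the worker thread runs to completion regardless;
            # real code would also cancel the query at the driver level.
            return "[query timed out; try a narrower query]"
    text = str(result)
    budget = MAX_TOKENS * CHARS_PER_TOKEN
    if len(text) > budget:
        text = text[:budget] + " …[truncated]"
    return text

# A fake "query" standing in for a Neo4j call that returns too many rows:
print(guarded_query(lambda: [{"n": i} for i in range(1000)])[-20:])
```

Sanitization (stripping internal IDs, normalizing types) would slot in between the timeout and the truncation step.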
🌸 PySpark ML: Classifying Iris Flowers with Logistic Regression 🌸
Ever wondered how to bring your Pandas-based experiments into the world of distributed machine learning? I just built a clean and simple PySpark ML pipeline using the classic Iris dataset, and it works like magic! ⚡
Here's what the project does 👇
✅ Loads the Iris dataset using sklearn.datasets
✅ Converts it into a Spark DataFrame
✅ Uses VectorAssembler to create feature vectors
✅ Trains a Logistic Regression model with PySpark ML
✅ Evaluates performance with accuracy metrics
💡 This small project shows how data scientists and data engineers can collaborate smoothly: Pandas for quick experiments, PySpark ML for scalable training! What I love most? You can start on your laptop and scale the same pipeline to a Spark cluster without changing your ML logic. 🚀
🔍 Accuracy achieved: ~89%, not bad for a few lines of code!
I'm curious 👉 What's your go-to dataset when testing new ML pipelines? Drop your favorite one in the comments! 👇
#PySpark #MachineLearning #BigData #AI #DataEngineering #LogisticRegression #BigDatapedia #Pandas #SparkML
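The steps above read roughly like this sketch (my reconstruction, not the author's notebook; it returns None and skips itself when pyspark, scikit-learn, or a local Java runtime is unavailable, so the ~89% figure remains the author's result, not something this snippet guarantees):

```python
def train_iris(seed=42):
    """Iris → Spark DataFrame → VectorAssembler → LogisticRegression.
    Returns test accuracy, or None when the environment can't run Spark."""
    try:
        from sklearn.datasets import load_iris
        from pyspark.sql import SparkSession
        from pyspark.ml.feature import VectorAssembler
        from pyspark.ml.classification import LogisticRegression
        from pyspark.ml.evaluation import MulticlassClassificationEvaluator
    except ImportError:
        return None
    try:
        spark = SparkSession.builder.appName("iris-lr").getOrCreate()
        pdf = load_iris(as_frame=True).frame
        pdf.columns = ["sepal_len", "sepal_wid", "petal_len", "petal_wid", "label"]
        df = spark.createDataFrame(pdf)

        # Pack the four measurement columns into a single features vector
        assembler = VectorAssembler(
            inputCols=["sepal_len", "sepal_wid", "petal_len", "petal_wid"],
            outputCol="features")
        train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=seed)

        model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
        acc = MulticlassClassificationEvaluator(
            labelCol="label", metricName="accuracy").evaluate(model.transform(test))
        spark.stop()
        return acc
    except Exception:       # e.g. no Java runtime available for Spark
        return None

print(train_iris())
```

The portability claim in the post comes from the `SparkSession.builder` line: point it at a cluster instead of local mode and the pipeline code below it is unchanged.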
Ever wondered how Netflix suggests shows you might like? Or how your email client filters spam? The answer often lies in a beautifully simple Machine Learning algorithm called K-Nearest Neighbors (KNN). It works on a principle we use every day: "Tell me who your friends are, and I'll tell you who you are." In data terms, it classifies a new point based on what its closest neighbors are like. Here's why understanding KNN is crucial for any aspiring Data Scientist or ML Engineer:
✨ It's a masterclass in the bias-variance tradeoff. Choosing 'K' isn't arbitrary; it's the perfect illustration of balancing underfitting and overfitting.
✨ It teaches fundamental ML concepts: lazy learning, distance metrics, and the infamous "curse of dimensionality."
✨ It's deceptively simple. While easy to understand, using it effectively requires careful preprocessing (like feature scaling) and parameter tuning.
My latest article doesn't just explain what KNN is; it provides a practical guide on when and how to use it, including how to avoid its common pitfalls:
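The neighbor-voting idea is small enough to write out directly. A minimal sketch with Euclidean distance and majority vote (the toy data is made up; real use needs the feature scaling the post mentions):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

X = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
y = ["cat", "cat", "dog", "dog", "dog"]
print(knn_predict(X, y, (1.1, 0.9), k=3))  # → cat
```

Note there is no training step at all: the "model" is just the stored data, which is exactly what "lazy learning" means. The bias-variance tradeoff shows up in `k`: k=1 memorizes noise (high variance), while a very large k drowns out local structure (high bias).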
Data Science is the art of transforming raw data into actionable intelligence. By combining statistics, machine learning, and visualization, data science empowers organizations to uncover patterns, predict trends, and make evidence-based decisions. From product recommendations to fraud detection, it fuels modern business strategies. With tools like Python, R, SQL, and Power BI, data scientists convert complex datasets into meaningful insights. The future lies in automating pipelines, enhancing interpretability, and ensuring ethical data use. As industries embrace digital transformation, data science remains the backbone of innovation, efficiency, and competitive advantage. #DataScience #Analytics #MachineLearning #BigData #AI #DataDriven
Are you familiar with broadcast joins in Apache Spark? In a broadcast join, Spark ships a full copy of one table to every executor so the join can run locally, with no shuffle. It's great when that table is small, but a performance time bomb if the data is huge. And in real life, data does get huge. Check out what happened when a broadcast almost broke a customer's cluster, and how DataFlint helped solve it! #data #performance #AI #apachespark
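Conceptually, a broadcast join builds a hash map of the small table once, ships a copy to every executor, and each partition of the big table then joins against it locally. A plain-Python sketch of that per-partition lookup (the table contents and names here are made up; in PySpark itself you would hint it with `big_df.join(broadcast(small_df), "key")` using `broadcast` from `pyspark.sql.functions`):

```python
# The "small table" is what gets broadcast: every worker receives its own
# full copy, built once as a hash map keyed on the join column.
small_table = {"US": "United States", "FR": "France", "JP": "Japan"}

def join_partition(partition, lookup):
    """Inner-join one partition of the big table against the broadcast map.
    Each row is a single hash lookup; nothing crosses the network."""
    return [
        {**row, "country_name": lookup[row["country"]]}
        for row in partition
        if row["country"] in lookup
    ]

partition = [{"order": 1, "country": "FR"}, {"order": 2, "country": "US"}]
print(join_partition(partition, small_table))
```

The failure mode in the post follows directly from this picture: if the "small" table grows, every executor must hold (and receive) the entire hash map, so memory and network cost scale with cluster size times table size.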
Vision LLMs excel at processing complex information in documents. In my latest article, I cover how to use new Vision Language models, such as Qwen3-VL, discussing: - Why we need vision LLMs - Specific tasks for vision LLMs (like OCR and information extraction) - The downsides and limitations of vision LLMs Check out the full article on Towards Data Science: https://lnkd.in/dGnKtxC3