🪙✨ From Coin to Data: Digital Humanities Meets Computer Vision

Numismatics, the study of coins, has long relied on careful, manual examination. But what happens when we bring AI and object detection into the field? In their recent open-access article, From Coin to Data: The Impact of Object Detection on Digital Numismatics, Rafael Cabral, Maria De Iorio, and Andrew Harris explore how models like CLIP can identify and classify motifs on coins, from intricate Russian depictions of Saint George and the Dragon to ancient Southeast Asian coins bearing Hindu-Buddhist symbols.

Insights from the paper:
🔍 Large CLIP models excel at detecting complex iconography, while traditional methods work better for simple geometric motifs.
⚖️ A new statistical calibration improves reliability on low-quality datasets.
📊 Automated detection dramatically reduces the time and effort of large-scale analysis, paving the way for studies of provenance, trade networks, and even forgery detection.

This research highlights how digital humanities and AI can make cultural heritage studies more scalable, precise, and interdisciplinary.

📖 Read the full paper here (open access): https://lnkd.in/gssUfDkv

#DigitalHumanities #AI #ComputerVision #Numismatics #CulturalHeritage
How AI and computer vision transform numismatics
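To make the CLIP angle concrete, here is a minimal sketch of zero-shot motif classification, not the authors' actual pipeline: the checkpoint name and the candidate motif labels below are assumptions chosen for illustration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical label set; the paper's motif taxonomy is richer than this.
MOTIF_LABELS = [
    "Saint George slaying the dragon",
    "double-headed eagle",
    "Hindu-Buddhist conch shell",
    "plain geometric pattern",
]

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def classify_motif(image_path: str) -> dict:
    """Return a probability for each candidate motif label on one coin image."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=MOTIF_LABELS, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
    return dict(zip(MOTIF_LABELS, probs.tolist()))

# Example: classify_motif("coin_obverse.jpg")
```

The paper pairs this kind of labelling with object detection and a statistical calibration step; the sketch only covers the zero-shot classification half.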
More Relevant Posts
Efficient Pyramidal Analysis of Gigapixel Images on a Decentralized Modest Computer Cluster
https://lnkd.in/eXYHWnv7

Analyzing gigapixel images is recognized as computationally demanding. In this paper, we introduce PyramidAI, a technique for analyzing gigapixel images at reduced computational cost. The proposed approach adopts a gradual analysis of the image, beginning at lower resolutions and progressively concentrating on regions of interest for detailed examination at higher resolutions. We investigated two strategies for tuning the accuracy-computation trade-off when implementing the adaptive resolution selection, validated against the Camelyon16 dataset of biomedical images. Our results demonstrate that PyramidAI decreases the amount of data processed during analysis by up to 2.65x while preserving accuracy in identifying relevant sections on a single computer. To make gigapixel image analysis more widely accessible, we also evaluated whether mainstream computers could perform the computation by exploiting the parallelism inherent in the approach. Using a simulator, we estimated the best data distribution and load-balancing algorithm for a given number of workers. The selected algorithms were then implemented and confirmed the same conclusions in a real-world setting. Analysis time is reduced from more than an hour to a few minutes using 12 modest workers, offering a practical solution for efficient large-scale image analysis.

---
Newsletter https://lnkd.in/emCkRuA
More story https://lnkd.in/enY7VpM
LinkedIn https://lnkd.in/ehrfPYQ6

#AINewsClips #AI #ML #ArtificialIntelligence #MachineLearning #ComputerVision
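For intuition, here is a minimal single-machine sketch of the coarse-to-fine idea, not the PyramidAI implementation: tiles are scored at a low resolution first, and only regions that look interesting are re-examined at the next, finer level. The tile size, downsampling factors, threshold, and the `score_tile` callback are all assumptions.

```python
import numpy as np
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # allow very large inputs (only for trusted files)

def analyze_coarse_to_fine(path, score_tile, tile=256, downs=(16, 4, 1), thresh=0.5):
    """Score tiles coarsely first; only promising regions are revisited
    at the next, finer resolution."""
    full = Image.open(path)
    w, h = full.size
    regions = [(0, 0, w, h)]  # full-resolution boxes still worth examining
    for down in downs:
        next_regions = []
        for (x0, y0, x1, y1) in regions:
            crop = full.crop((x0, y0, x1, y1)).resize(
                (max(1, (x1 - x0) // down), max(1, (y1 - y0) // down)))
            arr = np.asarray(crop)
            for ty in range(0, arr.shape[0], tile):
                for tx in range(0, arr.shape[1], tile):
                    if score_tile(arr[ty:ty + tile, tx:tx + tile]) >= thresh:
                        # Map the tile back to full-resolution coordinates.
                        next_regions.append((x0 + tx * down, y0 + ty * down,
                                             x0 + (tx + tile) * down,
                                             y0 + (ty + tile) * down))
        regions = next_regions
    return regions  # boxes that survived every level
```

The paper goes further by distributing these tiles across modest workers and comparing load-balancing strategies, which this sketch does not attempt.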
Can Multimodal LLMs See Materials Clearly? A Multimodal Benchmark on Materials Characterization
https://lnkd.in/eW8GbNax

Materials characterization is fundamental to acquiring materials information, revealing the processing-microstructure-property relationships that guide material design and optimization. While multimodal large language models (MLLMs) have recently shown promise in generative and predictive tasks within materials science, their capacity to understand real-world characterization imaging data remains underexplored. To bridge this gap, we present MatCha, the first benchmark for materials characterization image understanding, comprising 1,500 questions that demand expert-level domain knowledge. MatCha covers four key stages of materials research through 21 distinct tasks, each designed to reflect authentic challenges faced by materials scientists. Our evaluation of state-of-the-art MLLMs on MatCha reveals a significant performance gap compared to human experts. These models degrade when addressing questions that require higher-level expertise and sophisticated visual perception, and simple few-shot and chain-of-thought prompting does little to alleviate these limitations. These findings highlight that existing MLLMs still adapt poorly to real-world materials characterization scenarios. We hope MatCha will facilitate future research in areas such as new material discovery and autonomous scientific agents. MatCha is available at this https URL.

---
Newsletter https://lnkd.in/emCkRuA
More story https://lnkd.in/enY7VpM
LinkedIn https://lnkd.in/ehrfPYQ6

#AINewsClips #AI #ML #ArtificialIntelligence #MachineLearning #ComputerVision
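MatCha's exact data format isn't given in the abstract, so the sketch below assumes a multiple-choice JSONL layout and a hypothetical `ask_mllm(image_path, prompt)` client for whichever model is under test; it simply reports per-task accuracy, the kind of gap-to-human comparison the benchmark is built around.

```python
import json
from collections import defaultdict

def evaluate(benchmark_path, ask_mllm):
    """Per-task accuracy of an MLLM on a MatCha-style multiple-choice benchmark.

    Assumed record fields: image, question, options (letter -> text),
    answer (gold letter), task. `ask_mllm` is a stand-in for a real client.
    """
    correct, total = defaultdict(int), defaultdict(int)
    with open(benchmark_path) as f:
        for line in f:
            item = json.loads(line)
            prompt = (item["question"] + "\n"
                      + "\n".join(f"{k}. {v}" for k, v in item["options"].items())
                      + "\nAnswer with a single letter.")
            pred = ask_mllm(item["image"], prompt).strip()[:1].upper()
            total[item["task"]] += 1
            correct[item["task"]] += int(pred == item["answer"])
    return {task: correct[task] / total[task] for task in total}
```

Few-shot or chain-of-thought variants would only change how `prompt` is assembled, which is consistent with the abstract's finding that prompting alone does not close the gap.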
When we talk about uncertainty in AI, a really good example of epistemic uncertainty is something called algorithmic non-determinism. Notice the careful language there. Not algorithmic stochasticity. Non-determinism.

What does that mean? Computers are advanced, but they are still physical machines. A cosmic gamma ray flipping a bit, differences in timing at the speed of light inside your processor, or the order in which parallel operations complete: all of these can make the exact same program produce different results when you run it twice.

It is not true randomness. It is not dice being rolled. It is a side effect of the fact that our measurement, timing, and control are never perfect. That is epistemic uncertainty. It comes from us and the limits of the systems we build.

This is why you can run an AI model twice with the same input and get slightly different answers. It is not magic. It is not stochasticity. It is the messiness of computation at scale.

And it matters, because when people point at this behaviour and say “look, the AI is random,” they are missing the point. It is still a deterministic system, just one riddled with the same small cracks as every other piece of engineering we’ve ever built.

Footnote: this is why reproducibility in computing has limits; parallelism, floating point precision, and hardware noise all introduce non-determinism.

#machinelearning #ai #genai #agent #agentic #llm #artificiallife #contextengineering #promptengineering
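The footnote's point about floating point and parallelism is easy to demonstrate: floating-point addition is not associative, so merely changing the order in which partial results are combined (as happens when parallel workers finish in different orders) changes the final sum. A tiny illustration:

```python
import random

random.seed(0)
values = [random.uniform(-1e6, 1e6) for _ in range(100_000)]

# Same numbers, two summation orders. The shuffle stands in for the
# unpredictable completion order of parallel workers.
sequential = sum(values)
shuffled = values[:]
random.shuffle(shuffled)
reordered = sum(shuffled)

print(sequential == reordered)      # typically False
print(abs(sequential - reordered))  # small but nonzero discrepancy
```

Neither answer is wrong; they are just different rounding paths through the same deterministic arithmetic, which is exactly the kind of small crack described above.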
Can you spot when reality has been subtly rewritten by #AI?

“FakeParts” are AI-generated videos that alter only fragments of a scene, swapping faces, erasing objects, or restyling backgrounds, so seamlessly that even experts and state-of-the-art detection #models often miss them.

Researchers from École Polytechnique, ENSTA (Institut Polytechnique de Paris) and CNRS, together with the Hi! PARIS engineering team, studied this new phenomenon and found that existing detection tools frequently fail, leaving us more exposed to misinformation. To address this challenge, they created #FakePartsBench, the first benchmark designed to detect these partial #deepfakes.

The work highlights a pressing need: rethinking what we trust online, developing sharper detection tools, and advancing AI research for digital trust.

📖 Read the full paper: https://lnkd.in/efiutCmh
💻 Explore the code: https://lnkd.in/euE4DHtQ & the dataset: https://lnkd.in/eS9V78sS

Hi! PARIS Engineering Team: Soobash Daiboo, Samy Aimeur, Awais Hussain SANI | Hi! PARIS Chairs: Xi WANG, Vicky Kalogeiton & Affiliate Gianni Franchi

#FakeParts #AIforSociety #DigitalTrust #Innovation
Image Quality Enhancement and Detection of Small and Dense Objects in Industrial Recycling Processes
https://lnkd.in/e-TvM7Mu

This paper tackles two key challenges: detecting small, dense, and overlapping objects (a major hurdle in computer vision) and improving the quality of noisy images, especially those encountered in industrial environments [1, 2]. Our focus is on evaluating methods built on supervised deep learning. We perform an analysis of these methods using a newly developed dataset comprising over 10k images and 120k instances. By evaluating their performance, accuracy, and computational efficiency, we identify the most reliable detection systems and highlight the specific challenges they address in industrial applications. This paper also examines the use of deep learning models to improve image quality in noisy industrial environments. We introduce a lightweight model based on a fully connected convolutional network. Additionally, we suggest potential future directions for further enhancing the effectiveness of the model. The repository of the dataset and proposed model can be found at: this https URL, this https URL

---
Newsletter https://lnkd.in/emCkRuA
More story https://lnkd.in/enY7VpM
LinkedIn https://lnkd.in/ehrfPYQ6

#AINewsClips #AI #ML #ArtificialIntelligence #MachineLearning #ComputerVision
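As a rough illustration of the image-enhancement half, here is a minimal lightweight convolutional denoiser in the same general spirit; the layer widths, depth, residual formulation, and L1 training loss are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LightDenoiser(nn.Module):
    """Small convolutional network that predicts the noise residual,
    so the output is the noisy input minus the estimated noise."""
    def __init__(self, channels: int = 3, width: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        return noisy - self.net(noisy)

# One illustrative training step on dummy tensors:
model = LightDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy, clean = torch.rand(4, 3, 128, 128), torch.rand(4, 3, 128, 128)
loss = nn.functional.l1_loss(model(noisy), clean)
loss.backward()
opt.step()
```

In an industrial pipeline, a model like this would sit upstream of the object detector, cleaning images before small, dense instances are localized.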
👉🏽 Is it imitation's bitter-bitter lesson? When imitation becomes the world model! Two different approaches and benchmarks, similar results.

✅ ARC-AGI-2/-3: humans as the benchmark (for those who believe in anthropomorphism and human-centric approaches).
➡️ Pure LLMs score 0% on ARC-AGI-2, and public AI reasoning systems achieve only single-digit percentage scores. In contrast, every task in ARC-AGI-2 has been solved by at least 2 humans in under 2 attempts.

✅ Super ARC metrics: no human in the equation; based on advanced compression and a specialised type of probability, drawing upon the equivalence between compressibility and predictability established in the theory of randomness.
➡️ The findings strengthen the suspicion of fundamental limitations in LLMs, exposing them as systems optimised for the perception of mastery over human language. Top-performing LLMs are reported to perform close to pure-copy solutions, with even advanced models struggling to produce correct model extraction and predictive results. These results would also imply poor performance of LLMs on traditional tests.

✅ Both tests confirm the lingering limitations of these largely linguistic models, a bitter-bitter pill at the foundation of the outcomes and impacts of agents and agentic architectures built on LLMs.

1st bitter lesson: brute-force computation and general methods scale better than specialized, handcrafted human knowledge.
2nd emerging bitter lesson: the current generation of LLMs, despite their scale, has fundamental architectural limitations.

#llms #arcagi #superarc #ai (links in comments)
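Super ARC's actual metric is built on algorithmic complexity, which an off-the-shelf compressor only approximates, but the compressibility-predictability link it relies on is easy to see: a sequence generated by a short rule compresses far better than pseudorandom bytes.

```python
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size over raw size: lower means more regular,
    hence easier to model and predict."""
    return len(zlib.compress(data, level=9)) / len(data)

rule_based = b"0110" * 500  # produced by a tiny program: a short description exists
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(2000))  # pseudorandom bytes

print(compressed_ratio(rule_based))  # far below 1
print(compressed_ratio(noise))       # roughly 1: no shorter description found
```

A model that merely copies its context can do well on the first kind of sequence without ever recovering the generating rule, which is the failure mode the post attributes to current LLMs.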
Beyond Brute Force: Rethinking Artificial Intelligence
---
While the dominant AI narrative is focused on scale (larger models, more data, vast computational power), this brute-force approach is reaching its limits. The gains are diminishing, the costs are astronomical, and the models remain static, unable to learn meaningfully from experience or form lasting relationships.
---
We are taking a fundamentally different approach.
---
Inspired by the adaptive nature of biological systems, our architecture is built on a fractal model of intelligence: modular, recursive, and capable of organic growth. Each unit in the system is a specialized neural entity with its own capacity for memory, adaptation, and evolution. These neurons reorganize themselves based on experience, adjust their internal structures, and form dynamic pathways that change with context. Communication and intelligence emerge not from scale, but from interaction and self-organization.
---
Memory in our system spans multiple levels: immediate context, evolving patterns, and experiential learning over time. This allows for continuity, real understanding, and meaningful growth: hallmarks of real intelligence, not just simulation. The model operates on standard hardware, learns continuously without retraining, and retains context across sessions, demonstrating that intelligence can emerge without excessive computation.
---
Like the Wright brothers, who succeeded not by building bigger engines but by understanding the principles of flight, we believe the next leap in AI will come from new architectures, not just more power. Intelligence is not a function of scale alone. It's a product of structure, adaptability, and experience.
---
Evvolvx Inc.
#evvolvx #biology #technology #limitless #humanperformance #machinelearning #artificialintelligence #ai