✨ What kinds of data exist on #peace and #conflict? And how can #AI and data analysis tools help us better understand and support peace processes? Our new online course, led by PeaceRep’s Data Director Sanja Badanjak, tackles these questions and more. Starting in January 2026, this six-week course will:
🔹 Introduce the PA-X Peace Agreements Database
🔹 Show you how to access, analyse and visualise peace data
🔹 Equip you with practical tools to strengthen peace process research
📅 Applications are open until 9 October.
👉 Learn more and apply here: https://lnkd.in/eAJ2YzcV
New online course on peace data analysis with AI tools
👋 Do you have PEFA questions? Yes? 💡 Have you met AskPEFA? ✨ AskPEFA is our new AI-powered chatbot built into the PEFA website that helps PFM practitioners, government officials, and partners navigate PEFA resources faster and more intuitively. 🧭 From accessing PEFA information and locating PEFA resources to answering PEFA methodology-related questions, AskPEFA is designed to make public financial management insights more accessible. 🙋‍♀️ Learn what AskPEFA is, how it works, and how to get started in our latest website story: https://lnkd.in/eEH3YY4P #PEFA #PFM #AI #DigitalInnovation #LetsTalkPFM #LetsAskPEFA PEFA Secretariat Srinivas Gurazada Dmitri Gourfinkel Victor B. Mona El-Chami Joseph Dalibon Álvaro Fernández Néné Mané Ashikur Rahman Silvia Kirova Caitlyn McCrone
In private markets, data isn’t just reporting; it’s strategy. Institutional investors today face geopolitical uncertainty, market volatility, and the rise of AI-driven decision-making. But without granular, high-quality data, even the most sophisticated tools fall short. Our latest paper explores:
🔍 Why “look-through” data is critical for LPs and GPs
⚡ The risks of incomplete or GP-reported-only data
🤖 How AI in private markets depends on clean, reliable inputs
If you want to turn uncertainty into opportunity, start with your data. Download the full paper here: https://lnkd.in/eZfsdrN5 #privatemarkets #CEPRES #data #AI
This week Randstad Enterprise released our global in-demand skills report. Always one of our most eagerly awaited reports, it is vital data for anyone with responsibility for workforce planning, talent management, recruitment, retention or skills of the future in their organisation. While there are some predictable findings this year, such as AI skills being in high demand, there are also some more surprising skills being sought after. These include marketing, content and advertising: skills some thought might be automatable via AI. The world of work is clearly changing, and our report will help you stay informed of those changes, both the predictable and the surprising. Follow the link in the post below to access the data.
🌍 Today's Global In-demand Skills from Randstad Enterprise. This research analyzes today’s top in-demand skills for enterprises across 6 different dimensions and 24 markets globally. Access the data to understand today’s labor market complexity, skills availability and the impact of AI on skills. Gain access to the research here: https://lnkd.in/enD6kBYY
A new era of business intelligence (BI) is here. In a recent ZDNet Korea interview (https://lnkd.in/gXySzzF2), my colleague Fay (Tianqi) Fei discusses the power of Perceptive Analytics and its immense potential. It's inspiring to see how Fay is introducing our team's Analytics and AI research to help shape this future in the Korean market. Check out the article to learn more! #GartnerDA
🚨 BREAKING: Comprehensive Document on Algorithmic Use in the Workplace. The European Commission’s Joint Research Centre (JRC) has just published a comprehensive and detailed document (183 pages, no less!) titled ‘Digital Monitoring, Algorithmic Management and the Platformisation of Work in Europe’. It is another heavy read, exploring in detail the impacts resulting from the use of digital tools, the potential shortcomings in digital monitoring, and current and future algorithmic management practices.
👉🏼 Of particular importance are the trajectories of AI and GenAI tool use for millions of workers across the EU27, given that 90% of the workforce admits to using these tools in their profession;
👉🏼 The researchers’ take on platformisation is very interesting too, considering the gaps left by the Platform Work Directive and the many challenges it faces in ensuring that platform workers are treated in a fair and compliant way.
#BREAKING #EU #Commission #Algorithms #AI #GenAI #work CC: Markus Frischhut Chiara Gallese, Ph.D. David Wagner Peter Hense 🇺🇦🇮🇱
LLMs Don't Like to Think!!! I know this sounds contradictory, but it’s true. The more complex the reasoning and data relationships you ask an LLM to handle in a single prompt, the higher the chance of errors. The key is to reduce the "cognitive load." I significantly increased the accuracy of my personal trainer AI, MetricSelf, by simplifying its tasks before the prompt. Here are two ways how:
1. From Complex IDs to Simple Maps: Instead of passing long, complex database IDs for exercises, I pre-process them into simple placeholders like "ex_1" and "ex_2". I pass these to the LLM and then use a map to revert the simple IDs back to their original form in the final output. The LLM's task just became exponentially easier.
2. From Relational Data to Embedded Context: Initially, I gave the LLM a set of hypotheses and a separate list of exercise definitions, asking it to link them by ID. Now, I pre-process the data to embed the relevant hypothesis directly into each exercise definition. This eliminates the need for the LLM to perform a "join" operation in its "head," drastically reducing errors.
The takeaway is simple: do the heavy lifting for your LLM. Don't make it think more than it has to. #AI #LLM #MachineLearning #DataEngineering #PromptEngineering #ArtificialIntelligence #MetricSelf
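The ID-mapping idea can be sketched in a few lines of Python. This is a minimal illustration, not MetricSelf's actual code: the helper names, record shapes, and IDs are all made up for the example.

```python
# Sketch of the ID-simplification step: swap long database IDs for short
# placeholders before prompting, then map the LLM's output back afterwards.

def simplify_ids(exercises):
    """Replace long database IDs with short placeholders (ex_1, ex_2, ...)
    and return the simplified records plus a map to restore the real IDs."""
    id_map = {}
    simplified = []
    for i, ex in enumerate(exercises, start=1):
        placeholder = f"ex_{i}"
        id_map[placeholder] = ex["id"]
        simplified.append({**ex, "id": placeholder})
    return simplified, id_map

def restore_ids(llm_output, id_map):
    """Map the LLM's placeholder IDs back to the original database IDs."""
    return [{**row, "id": id_map[row["id"]]} for row in llm_output]

# Illustrative records with unwieldy UUID-style keys.
exercises = [
    {"id": "a91f3c2e-77b4-4d1a-9c0d-512f8e6b0aa1", "name": "squat"},
    {"id": "5d24e9b0-1c3f-4f6e-8a2b-9e7c4d1f22b3", "name": "deadlift"},
]

simple, id_map = simplify_ids(exercises)
# ...pass `simple` into the prompt; the LLM now only juggles ex_1/ex_2...
llm_output = [{"id": "ex_2", "sets": 3}, {"id": "ex_1", "sets": 5}]
restored = restore_ids(llm_output, id_map)
print(restored)  # rows carry the original UUIDs again
```

The same pre-/post-processing pattern works for the second trick: instead of a placeholder map, you would inline each hypothesis into its exercise record before building the prompt.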
ReadyIntelligence gives membership organisations AI-powered analytics and predictions in plain English. No complex dashboards, no data migration.
- Ask questions, get instant charts and forecasts
- Unify data from multiple systems securely
- Make smarter decisions faster
💬 “ReadyIntelligence helps us interrogate data and understand our members better.” — Rennie Schafer, CEO, FEDESSA (Federation of European Self Storage Associations)
🎥 Watch the video below to see it in action. 👉 Learn more: https://lnkd.in/evkxNF7f #ReadyIntelligence #DataAnalytics #AI #MembershipInnovation #Pixl8Group
1️⃣2️⃣3️⃣ Triple publication on impact! At EU DisinfoLab, we’re doubling down on one of the toughest challenges in countering disinformation: 𝗜𝗠𝗣𝗔𝗖𝗧 𝗠𝗘𝗔𝗦𝗨𝗥𝗘𝗠𝗘𝗡𝗧. And we don’t stop at measuring – we aim to drive it.
📇 In collaboration with Amaury L., we’ve updated our 𝗜𝗺𝗽𝗮𝗰𝘁-𝗥𝗶𝘀𝗸 𝗜𝗻𝗱𝗲𝘅 – which estimates the impact risk of individual hoaxes – to reflect the latest advances in AI and coordination techniques: https://lnkd.in/eBm_GcYX
🧮 Thanks to Amaury, we are also releasing an updated automated 𝗜𝗺𝗽𝗮𝗰𝘁 𝗖𝗮𝗹𝗰𝘂𝗹𝗮𝘁𝗼𝗿, based on the index, to streamline and improve assessments via data standardisation: https://impact-risk.eu/
💡 ...and finally, we're sharing a concise overview of a report (prepared for the #veraAI project) mapping 𝗵𝗼𝘄 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸𝘀 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝗶𝗺𝗽𝗮𝗰𝘁 and where the gaps remain: https://lnkd.in/e_Kbvjau
#Disinformation #ImpactMeasurement #Research #AI #CIB
One of the toughest challenges in tackling disinformation isn’t just spotting it — it’s understanding its impact. Based on the amazing work of Raquel Miguel Serrano and EU DisinfoLab in 2022, I’ve built and released a new Impact Calculator. It’s designed to make impact assessment easier, faster, and more consistent by automating calculations based on the Impact-Risk Index: https://impact-risk.eu/
Many machine learning practitioners still find it challenging to connect ML evaluation metrics with their statistical foundations, especially precision and recall. Both terms actually originate from information retrieval, and they tie directly to Type I and Type II errors in statistics. Yet even experienced ML practitioners often overlook this relationship. Here’s a quick refresher:
🔹 Type I Error (False Positive): concluding something is true when it isn’t.
🔹 Type II Error (False Negative): failing to detect something that is actually true.
Now, let’s connect this to ML metrics:
✅ Precision is about minimizing Type I Errors (False Positives). High precision → few false positives → low Type I error rate. In other words, you’re being conservative in predicting positives, avoiding false alarms.
✅ Recall is about minimizing Type II Errors (False Negatives). High recall → few false negatives → low Type II error rate. Here, you’re being aggressive in catching all positives, ensuring you don’t miss anything.
⚖️ The trade-off: make your test stricter → fewer Type I errors (↑ precision), but more Type II errors (↓ recall). Make your test more lenient → fewer Type II errors (↑ recall), but more Type I errors (↓ precision).
The key takeaway:
👉 Precision focuses on the cost of false alarms.
👉 Recall focuses on the cost of missed detections.
Hopefully, this sheds new light on how these concepts connect, bridging statistics and machine learning in a practical way. (Image credit: “The Essential Guide to Effect Sizes” by Paul D. Ellis) #Precision #Recall #MachineLearning #Statistics #ComputerScience #AI #DataScience #ML
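The mapping above can be checked with a tiny worked example. The labels below are made up purely for illustration; the point is that precision is driven by Type I errors (false positives) and recall by Type II errors (false negatives).

```python
# Toy binary classification outcome: 1 = positive class, 0 = negative class.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 1, 0, 0, 0, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correct alarms
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I errors
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II errors

precision = tp / (tp + fp)  # penalised only by false alarms (Type I)
recall = tp / (tp + fn)     # penalised only by missed detections (Type II)

print(precision, recall)  # → 0.6 0.75
```

Here the model raises two false alarms (fp = 2) and misses one true positive (fn = 1), so precision (3/5 = 0.6) suffers from the Type I errors while recall (3/4 = 0.75) suffers from the Type II error, exactly the correspondence described in the post.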