The MuseIT Project (Horizon Europe) concluded with its final event in Sweden - a celebration of how technology, creativity, and co-design can transform cultural participation!

At the Textile Fashion Centre in Borås, the BHV team presented the MuseIT Virtual Museum - a fully immersive digital exhibition with 2D/3D artefacts, haptic feedback, layered audio design (music, narration, and sound effects), and affective-computing technologies powered by non-invasive EEG, heart rate, and skin conductance sensors.

At the Röhsska Museum in Göteborg, a hybrid live music performance connected on-site and remote musicians through low-latency audio technology and real-time emotion recognition from biosignals. This unique setup allowed performers to perceive each other’s expressive states and adapt their performance in real time - showcasing a new form of emotion-aware co-creation at a distance.

BHV contributed its expertise in biosignal processing, AI, and virtual reality technologies, helping shape cultural experiences that are inclusive, multisensory, and emotionally meaningful.

Read the full blog post here: https://lnkd.in/dC_VbtFb

#AffectiveComputing #InclusiveFutures #VirtualReality #CulturalHeritage #DigitalInnovation #HapticTechnology #CoCreation #MultisensoryExperience
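The affective-computing loop described above (heart rate and skin conductance feeding a real-time emotion estimate) can be sketched in miniature. Everything below, the feature ranges, the weights, and the function names, is an illustrative assumption, not the MuseIT project's actual pipeline:

```python
# Toy sketch of an affective-computing step: map raw biosignal readings
# (heart rate in BPM, skin conductance in microsiemens) to a 0-1 "arousal"
# score. Ranges and weights are illustrative assumptions only.

def normalize(value, lo, hi):
    """Clamp a reading into [lo, hi] and rescale to [0, 1]."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def arousal_index(heart_rate_bpm, skin_conductance_us, hr_weight=0.5):
    """Weighted blend of normalized heart rate and skin conductance."""
    hr = normalize(heart_rate_bpm, 50.0, 120.0)     # assumed resting-to-excited range
    sc = normalize(skin_conductance_us, 1.0, 20.0)  # assumed EDA range
    return hr_weight * hr + (1.0 - hr_weight) * sc

print(arousal_index(60, 2.0))    # calm visitor: low score
print(arousal_index(110, 15.0))  # excited visitor: high score
```

A real system would of course work on filtered sensor streams and a trained model rather than fixed weights; the point is only the shape of the mapping from biosignals to an expressive state that remote performers could share.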
Brain, Health & Virtual Reality Group’s Post
More Relevant Posts
🏙️ Day 8 of 20 Days of XR in Culture, Heritage & Impact

When technology doesn’t just tell stories - it helps collect them.

Today’s spotlight is on The Belfast Memory Machine, a SENSEcity XR project exploring how AI can become a companion in uncovering a city’s collective memory. Rather than creating narratives for people, the Belfast Memory Machine invites them to share their own. Through AI-powered dialogue, tactile 3D models, and immersive storytelling tools, it becomes a living space where personal memories, cultural traditions, and lived experiences are captured and preserved.

What makes this project special is how it rethinks the role of technology: from broadcasting to listening, from displaying stories to gathering them. It ensures that the voices shaping a city’s history come directly from its people.

🔗 Learn more: https://lnkd.in/eVFUnHBy
👥 Created by: SENSEcity
🎙️ With voices from: Communities across Belfast

#Day8 #20DaysOfXR #DigitalHeritage #AIPoweredStorytelling #ImmersiveTech #CulturalMemory #CommunityVoices #XRforGood #SENSEcity #HeritageInnovation #StorytellingWithTech

Pooja Katara Missy Sully
🎨 Innovation in a museum doesn't have to be complex or expensive. Here's an example of a fun, low-budget pilot we built (and it uses AI):

People often think that visitors expect crazy, super-complex immersive experiences in museums. Often the most fun and memorable experiences are the ones that are simple and easy to play with! And if that means the museum doesn't have to carve out a large budget, even better. This project was the perfect example of that combination:

- It increases visitor engagement and invites interaction via touch screens
- It can be used to enhance exhibitions (in this case, using an artist as a reference)
- Fun and memorable: visitors walk out with their own AI-enhanced creations
- Affordable for the museum, and easy to install

It was interesting to use Adobe Firefly on the backend to enhance creations: an ethically sourced AI that is museum-friendly and can be used in different ways across projects.

Worth highlighting that this project also had its challenges: changes in scope, a tight timeline and budget, etc. But we walked out with something simple, fun, and intuitive that uses AI in an ethical way to enhance visitor creations (and their experience!).

We'd love to hear your thoughts on this one! If you want this or something similar for your next museum exhibit, send us a DM!

#MuseumExperience #ImmersiveExperience #Metaverse #DigitalMuseum #CulturalExperience #ImmersiveCultural
Cup with the President – Sound in Motion

Today’s Cup with the President took me deep into the world of sound — or better: into worlds created by sound.

I visited Prof. Sebastian Schlecht and his team at the Department of Electrical Engineering (EEI) at FAU Erlangen-Nürnberg. Sebastian leads the Artificial Audio Research Group at the Chair of Multimedia Communications and Signal Processing (LMS) — his team includes Dr. Heiner Löllmann, Jeremy Bai, and Cristóbal Andrade. Their mission: to understand, model, and design how we perceive sound — from the smallest echo to full 3D acoustic spaces.

Their research focuses on spatial audio, virtual acoustics, and 6-degrees-of-freedom audio for virtual and augmented reality. They develop mathematical models and algorithms that make it possible to simulate realistic environments — sound that feels real, no matter where you move. Their work connects engineering, psychoacoustics, and sound design. It is not just about hearing — it’s about experiencing space through sound.

The demos I saw were stunning. In one experiment, I entered a virtual orchestra and played the cello — surrounded by musicians whose sound moved with me in space. In another, I stood in an acoustic laboratory with 128 loudspeakers. Sebastian’s algorithms transformed this room at the press of a button — suddenly it sounded like a cathedral, then like a massive exhibition hall. You don’t just hear the change — you feel it.

And the applications? Impressive: these technologies could bring a concert-hall experience right into a car, or transform the acoustic design of buildings and VR worlds.

What impressed me most is how seamlessly this group combines mathematics, algorithms, machine learning, and human perception. It’s a perfect example of what we mean at FAU when we say: Moving Knowledge. From abstract theory to real experience — from formulas to feeling. This is innovation at its finest. And it shows again: at FAU, knowledge is not static. It moves — it resonates — and it inspires.
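The room-transformation demo described above (a lab that suddenly sounds like a cathedral) rests on a standard technique: convolving a dry signal with a room impulse response (RIR). A minimal sketch, assuming a crude exponential-decay noise model for the RIR rather than anything from the FAU group's actual algorithms:

```python
# Apply a room's acoustics to a dry signal by convolving it with a
# synthetic RIR. Swapping the RIR is what changes the perceived room.
import math
import random

def synthetic_rir(rt60_s, sample_rate=8000, length_s=0.5, seed=0):
    """Exponentially decaying noise: a crude model of late reverberation."""
    rng = random.Random(seed)
    n = int(length_s * sample_rate)
    # Time constant so the amplitude envelope drops 60 dB after rt60_s seconds.
    tau = rt60_s / (3.0 * math.log(10))
    return [rng.gauss(0, 1) * math.exp(-t / (tau * sample_rate)) for t in range(n)]

def convolve(signal, rir):
    """Direct-form convolution: each input sample excites the whole RIR."""
    out = [0.0] * (len(signal) + len(rir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(rir):
            out[i + j] += s * h
    return out

dry = [1.0] + [0.0] * 99                        # a single click
wet = convolve(dry, synthetic_rir(rt60_s=2.0))  # a long, cathedral-like decay
```

Real-time 6-DoF systems use far more sophisticated room models and fast (FFT-based) convolution, but the core idea is the same: the room is a filter, and changing the filter changes the space you hear.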
🧠 Phase 5: The Echoic Brain & Preprogramming - Every Immersive Experience Is Playing an Ancient Instrument: The Human Brain

The speed of sound in the brain. Audition is our fastest sense: the human brain can detect temporal gaps as short as 3 milliseconds, faster than vision or touch. This speed once meant survival - hearing a predator before seeing it. Today, it means that sound still leads our attention, sets our expectations, and directs our emotions before we consciously realize it.

Preprogrammed responses. Pitch, rhythm, and timbre are not arbitrary; they are deeply wired into our neurology. A baby instinctively calms to lullabies. A group synchronizes its movements to a beat. A sudden sharp sound spikes adrenaline. These responses are ancient, encoded in circuits evolved to keep us safe, bonded, and alert.

The Echoic Brain. Modern neuroscience shows that music and language share overlapping, specialized circuits. We are not neutral listeners; our auditory cortex is constantly mapping, predicting, and reacting to sonic cues. This is why chants unify groups, why melodies unlock memory, and why stories told through sound remain unforgettable. In short: our brains are instruments already tuned to resonate with sound.

From survival to design. When we enter a theater, a themed attraction, or an immersive gallery, these ancient pathways are activated. The soundscape does more than fill the air; it plays us. Designers who understand this aren't just delivering audio; they are orchestrating emotion, memory, and meaning.

💭 What if every immersive experience is not about creating something new, but about striking chords already pre-wired into the human brain - resonances that have been shaping survival, bonding, and culture for millions of years?

👉 This is Phase 5 of The Echoic Lineage of Humanity. From survival to song, from ritual to culture, every step has led us here: to an understanding that sound is not decoration, but the driver of immersion.

Stay tuned for Phase 6: TruSound & The Four Realms - What if the next evolution of immersive spaces isn't invention, but remembering?

#ImmersiveExperiences #TruSound #EchoicBrain #ImmersionCatalyst #TheFourRealms #Storytelling #Psychoacoustics #NeuroscienceOfSound #ImmersiveDesign
🚀 Introducing MM Audio on Scenario 🚀

An advanced video-to-audio synthesis model that transforms silent videos into immersive sound experiences. Developed by researchers from the University of Illinois Urbana-Champaign, Sony AI, and Sony Group Corporation, MM Audio brings silent videos to life with synchronized, high-fidelity audio.

What you can do with MM Audio:
- Add realistic soundscapes
- Generate sound effects
- Create immersive environmental audio

Ideal for enhancing video projects with natural, AI-generated sound.

👉 Access the model: https://lnkd.in/esA4-Nm7
Read more about it: https://lnkd.in/eNcBhG6U

#Scenario #AudioGeneration #AI #Innovation
Today we had the chance to join Aalto University and Business Finland at Marsio House for The Intersection: XR – Business – Creativity.

We explored how immersive technologies are shaping the future of business, culture and experiences:
• A virtual sauna where tradition meets digital immersion
• A music space where sound was created simply by moving your hands
• A virtual studio capable of modelling an international opera stage directly into the Opera House

The event highlighted how XR and AI together can create not only smarter tools but also more human-centric experiences. Technology is at its best when it amplifies creativity, strengthens cultural connection, and makes complex ideas more accessible. A great example of this vision is Xtreme ITU, a hub for extended reality, gaming and digital creativity that brings together research, technology and business.

Excited to see how Finland can turn these innovations into real impact — with AI, XR and human creativity working hand in hand.
✨ There’s a quiet shift happening in our industry. The same simulation tools that shaped aerospace and manufacturing are now being used to design theme parks, museums, and immersive experiences.

Until recently, these tools were out of reach: expensive, complex, and built for programmers. But now, creatives and producers can use them directly to simulate visitor journeys, test ideas, and make confident decisions before a single wall is built. It’s a new way to de-risk creativity and build ambitious projects faster, smarter, and with fewer surprises.

We’ve just shared more about Meow Wolf being one of the early adopters in today’s press release. It's the first of a few stories we’ll be telling about how digital twins are changing location-based entertainment (LBE) from the inside out.

Stay tuned 👀 https://lnkd.in/eHZaQQpH
🎧 Next week, #GuestXR researcher Enric Gusó Muñoz from Eurecat - Technology Centre will be at the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (#WASPAA) — a leading event for the audio and acoustics research community.

Our team will present two papers highlighting recent advances in signal processing and machine learning applied to immersive environments:

📄 MB-RIRs: a Synthetic Room Impulse Response Dataset with Frequency-Dependent Absorption Coefficients
🔗 https://lnkd.in/dDgaPyp6

📄 Conditioned Wave-U-Net for Acoustic Matching of Speech in Shared XR Environments

We look forward to sharing our results, engaging with the community, and exploring new possibilities for audio processing in XR and immersive technologies.

#SignalProcessing #AudioResearch #Acoustics #MachineLearning #XR #HorizonEurope
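The first paper's focus on frequency-dependent absorption can be illustrated with Sabine's classic reverberation formula: once wall absorption varies with frequency, each octave band gets its own RT60. The room size and absorption values below are illustrative assumptions, not values from the MB-RIRs dataset:

```python
# Sabine's estimate: RT60 = 0.161 * V / (S * alpha). With frequency-
# dependent absorption alpha(f), the decay time differs per band,
# which is what a realistic synthetic RIR has to capture.

def sabine_rt60(volume_m3, surface_m2, absorption):
    """Sabine's classic RT60 estimate for a diffuse sound field."""
    return 0.161 * volume_m3 / (surface_m2 * absorption)

# Typical pattern (assumed): porous materials absorb more at high frequencies.
absorption_by_band = {125: 0.10, 500: 0.25, 2000: 0.55, 8000: 0.70}

room_volume, room_surface = 200.0, 210.0  # a medium rehearsal room (assumed)
rt60_by_band = {f: sabine_rt60(room_volume, room_surface, a)
                for f, a in absorption_by_band.items()}
for f, rt in sorted(rt60_by_band.items()):
    print(f"{f:>5} Hz: RT60 = {rt:.2f} s")  # longer decay at low frequencies
```

A band-wise RIR synthesizer would generate one decay envelope per band from these RT60s and sum the filtered bands, rather than using a single broadband decay.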
Dirac Launches Advanced In-Car Audio with Venue-Based Acoustics in NIO ES8

* The third-generation NIO ES8, unveiled during NIO Day 2025 in Hangzhou, China, integrates Dirac's advanced in-car audio technologies like Reference Room Mode and Dirac Dimensions, setting a new standard for immersive audio.
* Dirac's Reference Room Mode enables passengers to experience three distinct acoustic venues: Legendary Studio, Pinnacle Stage, and Golden Hall, simulating small, medium, and large soundspaces respectively.
* Dirac Dimensions applies patented spatial audio techniques, such as MIMO impulse response correction, to transform stereo content into immersive multichannel soundscapes for a balanced and enveloping experience.
* The collaboration between NIO and Dirac, ongoing since 2020, underscores a commitment to combining automotive innovation with cutting-edge audio technology for superior in-car environments.
* The NIO ES8's audio system uses precise speaker array management to authentically reproduce spatial sound characteristics, enhancing realism and emotional engagement for all passengers.
* This partnership exemplifies how software innovations can redefine luxury in vehicles, blending sound quality and immersive experiences with NIO's design and technological advancements.