About
I believe one day we will live in an automated world where everything will simply work…
Articles by Amit
Activity
Experience
Education
-
INSA Lyon - Institut National des Sciences Appliquées de Lyon
-
5th year in an exchange student program at Aston University, Birmingham, UK
3rd and 4th years in the Master of Computer Software Development Engineering department
1st and 2nd years in an International program -
Management, leadership and effective presentation skills combined with one academic year
Licenses & Certifications
Volunteer Experience
-
Co-Founder
Israel Robotics
- Present 8 years
Science and Technology
● Led the Israel Robotics Meetup, fostering a vibrant community of robotics enthusiasts, professionals, and students.
● Secured funding and established strategic partnerships with industry leaders, universities, and government entities. -
Co-Founder
Bay Area Robotics Network
- Present 10 months
Science and Technology
Co-founding an organization to connect Bay Area-based founders of growth-stage robotics companies.
Publications
Patents
-
Device and system for docking an aerial vehicle
Issued US US11011066B2
A system for securing an aerial vehicle to a lower portion of a docking station, including a docking station having a top section located in an upper portion of the docking station, the top section having an interface configured to hang the docking station above the ground and a bottom section located in a lower portion of the docking station, the docking station having a latching mechanism located on the bottom section, configured to secure the aerial vehicle to the docking station, the system also including the aerial vehicle having a docking member configured to dock the aerial vehicle into the docking station and to release the aerial vehicle from the latching mechanism of the docking station, and a processing module configured to control the operation of the docking member.
Other inventors -
Rotatable mobile robot for mapping an area and a method for mapping the same
Issued US US15/993,624
The subject matter discloses a mobile robot configured to map an area, comprising a body, two or more distance sensors, configured to collect distance measurements between the mobile robot and objects in the area, a rotating mechanism mechanically coupled to the body and to the two or more distance sensors, said rotating mechanism is configured to enable rotational movement of the two or more distance sensors and a processing module electrically coupled to the two or more distance sensors and to the rotating mechanism. The processing module is configured to process the distance measurements collected by the two or more distance sensors and to instruct the rotating mechanism to adjust a velocity of the rotational movement, said velocity is adjusted according to the distance measurements collected by the two or more distance sensors.
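To make the velocity-adjustment idea concrete, here is a minimal Python sketch assuming one plausible policy (rotate the sensor head more slowly when obstacles are close, faster in open space); the function name, speed limits and range bounds are illustrative assumptions, not the patented implementation.

def adjust_rotation_velocity(ranges_m, v_min=0.2, v_max=2.0, near_m=0.5, far_m=5.0):
    # Nearest obstacle drives the commanded rotation speed of the sensor head (rad/s).
    nearest = min(ranges_m)
    # Clamp into [near_m, far_m] and interpolate linearly between v_min and v_max.
    t = (min(max(nearest, near_m), far_m) - near_m) / (far_m - near_m)
    return v_min + t * (v_max - v_min)

print(adjust_rotation_velocity([0.4, 1.2, 3.0]))  # close obstacle -> slow rotation, denser sampling
print(adjust_rotation_velocity([6.0, 8.5, 7.1]))  # open space -> fast rotation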
Other inventors -
A computerized system for guiding a mobile robot to a docking station and a method of using same
Filed US WO2020003304A1
The claimed subject matter discloses a method for guiding a mobile robot moving in an area to a docking station, the method comprising determining that the mobile robot is required to move to the docking station, the docking station obtaining a location and/or position of the mobile robot, upon detection of the mobile robot's location in the area, calculating a navigation path from the mobile robot's location to the docking station, the mobile robot moving towards the docking station in accordance to the calculated navigation path, identifying that the mobile robot is within a predefined distance from said docking station; and generating docking commands to the mobile robot when located within the predefined distance from said docking station until the mobile robot docks into the docking station.
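As an illustration of the coarse navigation followed by fine docking described above, the Python sketch below assumes simple 2D positions and user-supplied plan_path / follow / dock_step callbacks; the 0.5 m hand-off distance and all names are hypothetical rather than taken from the patent.

import math

def guide_to_dock(robot_xy, dock_xy, plan_path, follow, dock_step, near_m=0.5):
    # Coarse phase: follow the planned path until within the predefined docking distance.
    for waypoint in plan_path(robot_xy, dock_xy):
        robot_xy = follow(waypoint)
        if math.dist(robot_xy, dock_xy) <= near_m:
            break
    # Fine phase: issue docking commands until the robot reaches the station.
    while math.dist(robot_xy, dock_xy) > 0.01:
        robot_xy = dock_step(robot_xy, dock_xy)
    return robot_xy

# Example with trivial callbacks: a one-waypoint path and perfect waypoint following.
plan = lambda start, goal: [goal]
step = lambda robot, dock: dock
print(guide_to_dock((0.0, 0.0), (2.0, 1.0), plan, lambda w: w, step))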
Other inventors -
Map generating robot
Filed US WO2019215720A1
The claimed invention discloses a mobile robot comprising a robot body, a drive system configured to maneuver the robot body in a predefined area, a controller coupled to the drive system, said controller comprising a processor and a memory; and a sensor module in communication with the controller, the sensor module comprises at least one non-optical sensor, configured to gather non-optical data from the predefined area. The mobile robot also comprises a communication module configured to send signals to electronic devices in the predefined area, the signals transmitted from the communication module induce emission of signals from the electronic devices in the predefined area. The processor is configured to generate at least one map of the predefined area using data processed from said non-optical data.
Other inventors -
STATE MACHINE BASED TRACKING SYSTEM FOR SCREEN POINTING CONTROL
Issued US 14/816,953
Generally, this disclosure provides systems, devices, methods and computer readable media for state machine based pointing control. A method may include receiving a position estimate of a first location associated with a first portion of a pointing device and a position estimate of a second location associated with a second portion of the pointing device; calculating a vector from the estimated position of the first location to the estimated position of the second location; and resolving the vector into a first distance component (Dx) and a second distance component (Dy), the Dy component orthogonal to the Dx component. The method may further include tracking temporal changes of the Dx and Dy components; updating an interaction state based on a rate of change of the Dx and Dy components; and moving a cursor position on a display element screen based on the temporal change and the interaction state.
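A toy Python version of the Dx/Dy decomposition and two-state update described above; the threshold, gain and state names are invented for illustration and not taken from the patent.

def update_pointer(p1, p2, prev_dxdy, dt, move_threshold=0.05, gain=800.0):
    # Vector from the first tracked portion of the pointing device to the second.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # Rate of change of the Dx/Dy components drives a minimal two-state machine.
    rate = (abs(dx - prev_dxdy[0]) + abs(dy - prev_dxdy[1])) / dt
    state = "tracking" if rate > move_threshold else "idle"
    # The cursor only moves while in the 'tracking' state.
    delta = (gain * (dx - prev_dxdy[0]), gain * (dy - prev_dxdy[1])) if state == "tracking" else (0.0, 0.0)
    return (dx, dy), state, delta

print(update_pointer((0.0, 0.0), (0.10, 0.02), prev_dxdy=(0.08, 0.02), dt=0.033))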
-
Techniques for providing an augmented reality view
Issued US 10008010
Various embodiments are generally directed to techniques for providing an augmented reality view in which eye movements are employed to identify items of possible interest for which indicators are visually presented in the augmented reality view. An apparatus to present an augmented reality view includes a processor component; a presentation component for execution by the processor component to visually present images captured by a camera on a display, and to visually present an indicator identifying an item of possible interest in the captured images on the display overlying the visual presentation of the captured images; and a correlation component for execution by the processor component to track eye movement to determine a portion of the display gazed at by an eye, and to correlate the portion of the display to the item of possible interest. Other embodiments are described and claimed
Other inventors -
MACHINE OBJECT DETERMINATION BASED ON HUMAN INTERACTION
Issued US 9975241
This disclosure pertains to machine object determination based on human interaction. In general, a device such as a robot may be capable of interacting with a person (e.g., user) to select an object. The user may identify the target object for the device, which may determine whether the target object is known. If the device determines that the target object is known, the device may confirm the target object to the user. If the device determines that the target object is not known, the device may then determine a group of characteristics for use in determining the object from potential target objects, and may select a characteristic that most substantially reduces a number of potential target objects. After the characteristic is determined, the device may formulate an inquiry to the user utilizing the characteristic. Characteristics may be selected until the device determines the target object and confirms it to the user.
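The "select a characteristic that most substantially reduces the number of potential target objects" step reads like a greedy twenty-questions strategy; below is a small hedged sketch of that idea over an invented object table (the attribute names, values and worst-case-split criterion are all assumptions).

from collections import Counter

objects = [
    {"name": "mug",    "color": "red",   "shape": "cylinder"},
    {"name": "ball",   "color": "red",   "shape": "sphere"},
    {"name": "bottle", "color": "green", "shape": "cone"},
]

def best_characteristic(candidates, characteristics=("color", "shape")):
    # Ask about the attribute whose most frequent value covers the fewest candidates,
    # i.e. the question whose worst-case answer leaves the smallest candidate set.
    def worst_case(attr):
        return max(Counter(obj[attr] for obj in candidates).values())
    return min(characteristics, key=worst_case)

print(best_characteristic(objects))  # 'shape': every shape is unique, so one answer identifies the object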
Other inventors -
A device, system and a method for docking a flying apparatus
Filed IL IL259252A
A docking station for an aerial drone, including a base portion having a top surface, an alignment system positioned at the top surface of the base portion, having inclined wall portions extending downwards from the top surface to form a docking recess disposed in the top surface, configured to mechanically orient the aerial drone by sliding at least a portion of the aerial drone therein, a friction reducing mechanism, embedded or located on the inclined wall portions; and a connection module for connecting to the aerial drone upon landing.
Other inventors -
Technologies for adjusting a perspective of a captured image for display
Issued US US 14/488,516
Technologies for adjusting a perspective of a captured image for display on a mobile computing device include capturing a first image of a user by a first camera and a second image of a real-world environment by a second camera. The mobile computing device determines a position of an eye of the user relative to the mobile computing device based on the first captured image and a distance of an object in the real-world environment from the mobile computing device based on the second captured image. The mobile computing device generates a back projection of the real-world environment captured by the second camera to the display based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
Other inventors -
AUGMENTATION MODIFICATION BASED ON USER INTERACTION WITH AUGMENTED REALITY SCENE
Issued US 14/667,302
Apparatuses, methods, and storage media for modifying augmented reality in response to user interaction are described. In one instance, the apparatus for modifying augmented reality may include a processor, a scene capture camera coupled with the processor to capture a physical scene, and an augmentation management module to be operated by the processor. The augmentation management module may obtain and analyze the physical scene, generate one or more virtual articles to augment a rendering of the physical scene based on a result of the analysis, track user interaction with the rendered augmented scene, and modify or complement the virtual articles in response to the tracked user interaction. Other embodiments may be described and claimed.
Other inventors -
INTERACTIVE ADAPTIVE NARRATIVE PRESENTATION
Issued US 14/866,454
A narrative presentation system may include at least one optical sensor capable of detecting objects added to the field-of-view of the at least one optical sensor. Using data contained in signals received from the at least one optical sensor, an adaptive narrative presentation circuit identifies an object added to the field-of-view and identifies an aspect of a narrative presentation logically associated with the identified object. The adaptive narrative presentation circuit modifies the aspect of the narrative presentation identified as logically associated with the identified object.
Other inventors -
AUGMENTING REALITY VIA ANTENNA AND INTERACTION PROFILE
Filed US 20170365231
With a device comprising a directional antenna, obtain an interaction profile for an augmentable object and augment a sensory experience of the augmentable object according to the interaction profile.
Other inventors -
Control system for user apparel selection
Filed US US 20170277365 A1
This disclosure is directed to a control system for user apparel selection. A system may comprise a control device to receive information from at least one sensor in an environment where apparel selection may commonly take place (e.g., closet). The control device may include communication circuitry, user interface circuitry, closet controller circuitry, etc. to receive user sensor data and apparel sensor data from the at least one sensor. Some or all of this data may be provided to at least one external resource such as, for example, an apparel designer, an apparel manufacturer, a feedback accumulation website, etc., to elicit at least styling data. Apparel control logic within the device may utilize the above data along with context data that describes, for example, the event for which the apparel is required, environmental data (e.g., weather), etc. to disposition apparel, suggest at least one piece of apparel to the person, etc.
Other inventors -
Technologies for immersive user sensory experience sharing
Filed US 20170188066
Technologies for immersive sensory experience sharing include one or more experience computing devices, an experience server, and a distance computing device. Each experience computing device captures sensor data indicative of a local sensory experience from one or more sensors and transmits the sensor data to the experience server. Sensors may include audiovisual sensors, touch sensors, and chemical sensors. The experience server analyzes the sensor data to generate combined sensory experience data and transmits the combined sensory experience data to the distance computing device. The experience server may identify one or more activities associated with the local sensory experience. The distance computing device renders a sensory experience based on the combined sensory experience data. The distance computing device may monitor a user response, generate user preferences based on the user response, and transmit the user preferences to the experience server. Other embodiments are described and claimed.
Other inventors -
Multi-distance, multi-modal natural user interaction with computing devices
Issued US PCT/US2013/032469
Systems and methods may provide for receiving a short range signal from a sensor that is collocated with a short range display and using the short range signal to detect a user interaction. Additionally, a display response may be controlled with respect to a long range display based on the user interaction. In one example, the user interaction includes one or more of an eye gaze, a hand gesture, a face gesture, a head position or a voice command, that indicates one or more of a switch between the short range display and the long range display, a drag and drop operation, a highlight operation, a click operation or a typing operation.
Other inventors -
USER EVENTS/BEHAVIORS AND PERCEPTUAL COMPUTING SYSTEM EMULATION
Issued US US-2014/0006,001
Methods, apparatuses and storage medium associated with engineering perceptual computing systems that includes user intent modeling are disclosed herewith. In embodiments, one or more storage medium may include instructions configured to enable a computing device to receive a usage model having a plurality of user event/behavior statistics, and to generate a plurality of traces of user events/behaviors over a period of time to form a workload. The generation may be based at least in part on the user event/behavior statistics. The workload may be for input into an emulator configured to emulate a perceptual computing system. Other embodiments may be disclosed or claimed.
Other inventors -
ROBOT WITH AWARENESS OF USERS AND ENVIRONMENT FOR USE IN EDUCATIONAL APPLICATIONS
Filed US 14/824,632
Generally, this disclosure provides systems, devices, methods and computer readable media for user and environment aware robots for use in educational applications. A system may include a camera to obtain image data and user analysis circuitry to analyze the image data to identify a student and obtain educational history associated with the student. The system may also include environmental analysis circuitry to analyze the image data and identify a projection surface. The system may further include scene augmentation circuitry to generate a scene comprising selected portions of the educational material based on the identified student and the educational history; and an image projector to project the scene onto the projection surface.
Other inventors -
VIRTUAL WEARABLES
Issued US 20160178906
A mechanism is described for dynamically facilitating virtual wearables according to one embodiment. A method of embodiments, as described herein, includes detecting a wearable area. The wearable area may represent a human body part of a primary user. The method may further include scanning the wearable area to facilitate suitability of the wearable area for projection of a virtual wearable, and projecting the virtual wearable on the wearable area using a primary wearable device of the primary user such that the projecting is performed via a projector of the primary wearable device.
Other inventors -
Timing advertisement breaks based on viewer attention level
Issued US US-20140096152
A device and method for timing advertisement breaks in video-on-demand applications based on viewer attention level includes a video device configured to display video content and receive biometric data indicative of the attention level of a viewer. The video device may notify a video-on-demand server that the attention level of the viewer has exceeded a threshold. In response to the notification, the video-on-demand server may determine a time to display advertisement content on the video device. The advertisement break time may be determined in relation to the video content. The advertisement content may be selected based on the video content. The video device may determine the viewer attention level during playback of the advertisement content and pause playback if the viewer attention level falls below the threshold.
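A minimal sketch of the attention-threshold logic described above; the 0.6 threshold and the 'notify'/'pause' return values are placeholders for whatever the device and video-on-demand server actually exchange.

def ad_break_controller(attention, in_ad, threshold=0.6):
    # attention: viewer attention score in [0, 1] derived from biometric data.
    if not in_ad and attention >= threshold:
        return "notify"   # tell the VOD server this viewer is attentive enough for an ad break
    if in_ad and attention < threshold:
        return "pause"    # attention dropped during the ad, so pause playback
    return None

print(ad_break_controller(0.8, in_ad=False))  # 'notify'
print(ad_break_controller(0.3, in_ad=True))   # 'pause'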
Other inventors -
Adaptive embedded advertisement via contextual analysis and perceptual computing
Issued US PCT/US2013/077581
Technologies for adaptively embedding an advertisement into media content via contextual analysis and perceptual computing include a computing device for detecting a location to embed advertising content within media content and retrieving user profile data corresponding to a user of a computing device. Such technologies may also include determining advertising content personalized for the user based on the retrieved user profile and embedding the advertising content personalized for the user into the media content at the detected location within the media content to generate augmented media content for subsequent display to the user.
Other inventors -
Machine learning-based user behavior characterization
Issued US PCT/US2013/060868
This disclosure is directed to machine learning-based user behavior characterization. An example system may comprise a device including a user interface module to present content to a user and to collect user data (e.g., including user biometric data) during the content presentation. The system may also comprise a machine learning module to determine parameters for use in presenting the content based on the user data. For example, the machine learning module may formulate a behavioral model including user states based on the user data, the user states being correlated to an objective (e.g., based on a cost function) and content presentation parameter settings. Employing the behavioral model, the machine learning module may determine a current user state based on the user data, and may select the content presentation parameter settings to bias movement of the current observed user state towards an observed user state associated with the maximized cost function.
Other inventors -
Media content including a perceptual property and/or a contextual property
Issued US US2015/0058764A1
Apparatuses, systems, media and/or methods may involve creating content. A property component may be added to a media object to impart one or more of a perceptual property or a contextual property to the media object. The property component may be added responsive to an operation by a user that is independent of a direct access by the user to computer source code. An event corresponding to the property component may be mapped with an action for the media object. The event may be mapped with the action responsive to an operation by a user that is independent of a direct access by the user to computer source code. A graphical user interface may be rendered to create the content. In addition, the media object may be modified based on the action in response to the event when content created including the media object is utilized.
Other inventors -
Perceptual computing with conversational agent
Issued US US-2013/0212501 A1
-
AUGMENTATION OF TEXTUAL CONTENT WITH A DIGITAL SCENE
Filed US 20160065860
Computer-readable storage media, computing devices and methods are discussed herein. In embodiments, a computing device may include one or more display devices, a digital content module coupled with the one or more display devices, and an augmentation module coupled with the digital content module and the one or more display devices. The digital content module may be configured to cause a portion of textual content to be rendered on the one or more display devices. The textual content may be associated with a digital scene that may be utilized to augment the textual content. The augmentation module may be configured to dynamically adapt the digital scene, based at least in part on a real-time video feed, to be rendered on the one or more display devices to augment the textual content. Other embodiments may be described and/or claimed.
Other inventors -
TECHNOLOGIES FOR VIEWER ATTENTION AREA ESTIMATION
Filed US 14/298,003
Technologies for viewer attention area estimation include a computing device to capture, by a camera system of the computing device, an image of a viewer of a display of the computing device. The computing device further determines a distance range of the viewer from the computing device, a gaze direction of the viewer based on the captured image and the distance range of the viewer, and an active interaction region of the display based on the viewer's gaze direction and the distance range of the viewer. The active interaction region is indicative of a region of the display at which the viewer's gaze is directed. The computing device displays content on the display based on the determined active interaction region.
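To illustrate how a gaze direction plus a viewing distance can be reduced to an active interaction region, here is a rough Python sketch that projects the gaze ray onto the screen plane and snaps it to a 3x3 grid; the screen dimensions, grid size and pinhole-style geometry are assumptions, not the claimed method.

import math

def active_region(gaze_yaw_deg, gaze_pitch_deg, distance_m,
                  screen_w_m=0.52, screen_h_m=0.32, cols=3, rows=3):
    # Intersect the gaze ray with the screen plane at the estimated viewing distance.
    x = distance_m * math.tan(math.radians(gaze_yaw_deg)) + screen_w_m / 2
    y = distance_m * math.tan(math.radians(gaze_pitch_deg)) + screen_h_m / 2
    # Snap the intersection point to a coarse grid of interaction regions.
    col = min(max(int(cols * x / screen_w_m), 0), cols - 1)
    row = min(max(int(rows * y / screen_h_m), 0), rows - 1)
    return row, col

print(active_region(0.0, 0.0, 0.6))    # looking straight ahead -> centre cell (1, 1)
print(active_region(15.0, -8.0, 0.6))  # gaze off-centre -> a corner cell (0, 2)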
Other inventors -
Projects
-
Android developer (as a hobby)
-
Developing Android applications during my free time in various domains – productivity, VR, contextual, wearables…
Some of the applications:
Nudnik - recurrent notifications for calendar events.
PhotoAlbum - view your photos as a picture album.
SmartCall - automatically change phone settings to match the user's needs. -
RealSense Unity Toolkit
-
The SDK Unity Toolkit is a set of scripts, prefabs and other utilities aimed at facilitating the use of Intel® RealSense™ technology when creating interactive Unity applications. The toolkit is presented as a Unity Editor extension. Many basic and advanced capabilities are available directly from the Unity Editor user interface (UI). Game developers and designers can use this toolkit to add interactions with minimal code writing.
Other creators
Honors & Awards
-
25 On the Rise
SIA - Security Industry Association
Awarded to individuals who demonstrate thought leadership surrounding new technology and reimagine traditional approaches to strategic management
-
Departmental award for innovation
Intel
Created a complex cross-Intel POC in the domain of multimodal interaction (speech, gesture, vision, etc.) in robotics.
-
Wearable Tech Israel Hackathon 2014 - 1st Place Winners
Wearable Tech Israel
Using a RealSense™ camera, VR glasses and a smart-watch, we developed a wearable system in which the user can interact with an AR overlay using gestures.
https://vimeo.com/94929630
Hackathon website - http://www.wearabletechisrael.com/ -
Departmental award for innovation and risk-taking
Intel - Perceptual Computing Department
Created 2 innovative proofs of concept to showcase Intel's technologies and their potential, which resulted in the decision to invest in a specific technology
Languages
-
Hebrew
Native or bilingual proficiency
-
English
Full professional proficiency
-
French
Professional working proficiency
Recommendations received
1 person has recommended Amit