Artificial Intelligence (AI) & Machine Learning (ML) Solutions
We leverage artificial intelligence (the simulation of human intelligence processes by machines, especially computer systems), with applications including expert systems, natural language processing, speech recognition, and machine vision. We use machine learning (ML) so that systems can learn from data and make decisions or produce solutions based on inputs and clients’ needs.
Our AI & ML technology as managed services is recognized as being invaluable in these sectors:
- Digital Front Door – any organization/business website;
- Electioneering projects;
- Supply Chain Management;
- Field Engineering & Installation;
- Citizen Journalism & General Reporting of Routine and Incidental Events;
- Education.
Digital Front Door – A strategy for any business:
The best way to receive and conduct business with visitors and clients at your webpage: virtual interaction with MRESENCE, and automation with an Artificial Intelligence (AI)- and Machine Learning (ML)-informed, speech-enabled conversational chatbot, complete with automatic language translation in real time.
- Greet the visitors or clients;
- Give them help and support services;
- Provide them with updates and important information;
- Take them to your showrooms, your factory shop-floors, or your labs in virtual interactions;
- Give them demos and illustrations in virtual interactions;
- Direct them to a webinar or training session;
- Promote and/or sell them your goods and services;
- Build a business relationship with them.
We install the DFD plug-in/widget on any webpage with just a few lines of code.
The DFD is an Artificial Intelligence-assisted, Machine Learning-informed conversational chatbot that is speech-enabled to support native-language chat in more than 30 languages between the parties.
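As a rough illustration of the kind of pipeline such a chatbot implies – speech recognition, a reply model, and real-time translation – here is a minimal sketch. All function names, the greeting rules, and the translation lookup are invented for this example; a production DFD would call real speech-recognition and machine-translation services.

```python
# Minimal sketch of one speech-enabled, translation-aware chatbot turn.
# Every component here is a stand-in for a real service.

GREETINGS = {"hello", "hi", "good morning"}

def recognize_speech(audio_text: str) -> str:
    """Stand-in for a speech-to-text engine: we assume the audio has
    already been transcribed and simply normalize the text."""
    return audio_text.strip().lower()

def generate_reply(utterance: str) -> str:
    """A trivial intent handler standing in for the ML-informed model."""
    if utterance in GREETINGS:
        return "Welcome! How can we help you today?"
    return "Let me connect you with more information."

def translate(text: str, target_lang: str) -> str:
    """Stand-in for real-time machine translation (tiny lookup table)."""
    table = {
        ("Welcome! How can we help you today?", "fr"):
            "Bienvenue ! Comment pouvons-nous vous aider aujourd'hui ?",
    }
    return table.get((text, target_lang), text)

def chatbot_turn(audio_text: str, visitor_lang: str) -> str:
    """Full turn: transcribe, generate a reply, translate for the visitor."""
    utterance = recognize_speech(audio_text)
    reply = generate_reply(utterance)
    return translate(reply, visitor_lang)
```

For example, `chatbot_turn("Hello", "fr")` greets a French-speaking visitor in French, while an unrecognized request falls through to a generic hand-off reply.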
Internet availability is becoming universal. The adoption of mobile digital devices such as smartphones, tablets, and laptops is in the billions and fast becoming ubiquitous worldwide. The World Wide Web is the underpinning of daily living.
The corollary, therefore, is that the promotion, propagation, distribution, and provision of ideas, goods, and services to the masses should be made through the web pages of the endeavor.
In a world of COVID-19 crises, where social distancing and shelter-in-place are required or mandated, digital interactions and communication are essential means of living. The webpage is your Digital Front Door, where you encounter your visitors and clients.
You need a DFD, a “Digital Front Door”, to conduct your business, whatever it may be, in the best way possible.
To win an election you need to get your message through and across to the people, with coverage as broad and as frequent as possible.
You and your message need to be in the conversation of the general population. To be effective your message needs to be interesting, current, and topical as the situation evolves.
That means you need to be able to modify the message and its form of delivery to get the optimum effects in addressing the hot topics of the day or even at a certain time of the day.
You want to be able to reach everyone with a mobile handset, delivering a targeted message based on Big Data Analytics of captured data on user demographics, usage patterns, and how and when the messages are consumed.
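To make the idea concrete, here is a small sketch of analytics-driven message selection: pick the "hot topic" from captured engagement data, then choose the message variant for a demographic segment. The segments, topics, weights, and message texts are all invented for this illustration.

```python
# Illustrative sketch of analytics-driven message targeting.
# Segments, topics, and scoring are invented for this example.

from collections import Counter

def top_topic(engagement_events):
    """Pick the 'hot topic of the day' from captured engagement data;
    each event is a (topic, weight) pair."""
    counts = Counter()
    for topic, weight in engagement_events:
        counts[topic] += weight
    return counts.most_common(1)[0][0]

def pick_message(segment, hot_topic, messages):
    """Choose the message variant for a demographic segment and topic,
    falling back to a generic variant for that topic."""
    return messages.get((segment, hot_topic),
                        messages.get(("any", hot_topic), "General message"))

events = [("healthcare", 3), ("jobs", 5), ("jobs", 2), ("education", 1)]
messages = {
    ("youth", "jobs"): "Our plan creates opportunities for young workers.",
    ("any", "jobs"): "Jobs are our number-one priority.",
}
hot = top_topic(events)                      # "jobs" (weight 7)
msg = pick_message("youth", hot, messages)   # youth-specific variant
```

The same selection could equally be keyed on time of day, letting the campaign swap variants as the hot topics of the day evolve.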
Our XR Application combines digital with physical objects in new ways that were never before possible. By creating synergy between the virtual and the physical world, you can develop new ideas faster than ever before. Take advantage of what XR offers and create an experience that will make the user come back for more.
You can do all the above and more with a Reyes-powered XR-enabled Application working in conjunction with a Digital Platform, a cloud-based server.
You can use XR2WIN to deliver an AR production of a 3D image of the election candidate asking important questions, and to poll the electorate to select, from a list of multiple choices, the topic or issue most important to them. This gives an up-to-date reading of how best to craft policy messages that address the hot topics at hand.
You can use XR2WIN to deliver a 360° VR production of a rally scene, capturing the excitement and enthusiasm of the crowds, and add an AR production of the candidate in a 3D image speaking and appealing to supporters to donate to the election campaign as a form of crowd-funding.
TMU with MRESENCE caters to industry users as well as the general population in various sectors:
- TeleCare with MRESENCE™️ – primary care, home-based medicine, and mental healthcare; TeleMeetUp for the aging population; medical tourism.
- TMU™️ in TeleCollaboration for installation, troubleshooting, and general maintenance.
- Budroid with MRESENCE™️ for remote care and recreation for the elderly.
- CJ MRESENCE™️ for routine and incidental reporting.
- eGovernment operation incorporating TMU™️ with MRESENCE™️ services to avoid the need for in-person service when completing registration forms, renewing licenses, or paying utility bills – greatly reducing the long queues of consumers waiting for a face-to-face conversation with a service agent.
- Online distance learning and remote classrooms with one-to-one, one-to-many, and many-to-one configurations.
- MRESENCE™️ service features incorporated in the mobile communication devices used by police forces and other public-safety enforcement agencies.
Better by Design
TMU with MRESENCE is designed to offer better features than the Zoom video-conference service and WhatsApp. TMU™️ with MRESENCE™️ offers these outstanding features:
- Presence in Mixed Reality, with a feature named SWISTWIT (See What I See, Touch What I Touch) for pin-pointing and finger-pointing to give greater clarity and accuracy in interaction and discussion.
- Native Language Chat in text or speech with automatic translation in real time. MRESENCE™️’s Native Language Chat speech features work with the following languages:
Arabic, Catalan, Chinese, Czech, Danish, Dutch, English (UK) and (US), Finnish, French, Galician, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Portuguese (Brazilian), Russian, Slovak, Spanish, Swedish, Thai and Turkish.
Text to Text Languages Include:
Afrikaans, Albanian, Arabic, Assamese (India), Azeri (Azerbaijan), Belarusian, Bengali (India), Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dari (Afghanistan), Divehi, Dutch, English (UK) + (US) + (AUS), Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati (India), Hausa (Ghana/Africa), Hebrew, Hindi, Hungarian, Igbo (Nigeria), Indonesian, isiXhosa (South Africa), Italian, Japanese, Kannada (India), Kazakh (Kazakhstan), Khmer (Cambodia), Kinyarwanda (Rwanda), Kiswahili (Tanzania), Korean, Kurdish (Iran), Lao (Laos), Latvian, Lithuanian, Macedonian (North Macedonia), Malay, Malayalam (India), Maltese, Marathi (India), Maori, Mongolian, Nepali, Norwegian, Pashto (Afghanistan), Persian, Polish, Portuguese, Portuguese-Brazilian, Punjabi (India), Romanian, Russian, Serbian, Sinhala (Sri Lanka), Slovak, Slovenian, Somali, Spanish, Swedish, Tamil (India), Telugu (India), Thai, Tibetan (Tibet), Turkish, Ukrainian, Urdu (India/Pakistan), Uzbek (Uzbekistan), Vietnamese, Yoruba (Nigeria) and Zulu.
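A group chat with automatic translation implies one fan-out per message, translated per recipient's preferred language. The sketch below illustrates that routing; the `translate()` lookup table, participant names, and language codes are all invented stand-ins for a real translation service.

```python
# Sketch of per-recipient message translation in a group chat.
# The translate() lookup is a stand-in for a real translation service.

def translate(text, source, target):
    """Stand-in translator with a tiny hard-coded table."""
    table = {
        ("Hello everyone", "en", "es"): "Hola a todos",
        ("Hello everyone", "en", "fr"): "Bonjour à tous",
    }
    if source == target:
        return text
    return table.get((text, source, target), text)

def broadcast(text, sender_lang, participants):
    """Deliver one message to every participant in their own language.
    participants maps name -> preferred language code."""
    return {name: translate(text, sender_lang, lang)
            for name, lang in participants.items()}

room = {"Ana": "es", "Marc": "fr", "Ben": "en"}
delivered = broadcast("Hello everyone", "en", room)
# Ana receives Spanish, Marc French, Ben the original English.
```

The same fan-out applies to speech: transcribe the speaker's audio to text first, then route each translation to the listener's device.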
- White-Boarding for drawing on the screen of the smartphone/tablet with a finger, drawing on the images of a VR stream with a finger, or, in the web version of TMU™️ with MRESENCE™️, drawing with a mouse.
- Capture in multi-media of the entire scenario of an incident or a situation for recording.
- Curation of the multi-media images of the VR streaming prior to storage, to facilitate ease of retrieval of the images.
Description of TMU™️ with MRESENCE™️
Presence in Mixed Reality
SWISTWIT (“See What I See Touch What I Touch”)
SWISTWIT improves video conference service by enabling greater clarity and accuracy in explanation and demonstration using finger-pointing or hand gestures on the remote party’s video stream in real-time.
The local user views the remote user’s video stream on a smartphone and puts his/her hand behind the smartphone and points or gestures. The rear camera of a smartphone captures the local user’s hand, the hand is detected on the local video stream, and the images of the hand and the remote user’s video stream are merged in real-time for both users to see. The image of the local user’s hand superimposed on the image of the remote user’s environment simulates what the local user would do if in the same physical space and time as the remote user.
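The core of this merge step is a masked composite: wherever the hand is detected in the local frame, its pixels replace those of the remote stream. The following sketch shows that compositing on toy arrays; real hand detection would come from a computer-vision model, so the hand mask here is simply assumed to be given.

```python
# Illustrative sketch of the SWISTWIT merge step: superimpose the local
# user's hand (captured by the rear camera) onto the remote video frame.
# The hand mask is assumed; real detection needs a vision model.

import numpy as np

def merge_hand(remote_frame, local_frame, hand_mask):
    """Copy local-frame pixels onto the remote frame wherever the hand
    mask is set, leaving the rest of the remote scene untouched."""
    merged = remote_frame.copy()
    merged[hand_mask] = local_frame[hand_mask]
    return merged

# Tiny 2x2 grayscale example: the "hand" occupies the left column.
remote = np.array([[10, 20], [30, 40]])
local = np.array([[200, 0], [210, 0]])
mask = np.array([[True, False], [True, False]])
out = merge_hand(remote, local, mask)   # [[200, 20], [210, 40]]
```

Running this per frame on both devices gives each party the same composited view, simulating the local user reaching into the remote user's space.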
MRESENCE™️ service is available as either a web version or an App version for iOS-compliant or Android-OS-compliant smartphones and tablets. A user using MRESENCE™️ in any of the three formats can communicate or interact over the Internet with others in a group conference.
- (a) During the group interaction, a user may point the rear camera of the smartphone (or tablet) at an object or a situation, capture the entire scenario in multi-media, and transmit it in VR (Virtual Reality) streaming to the other users of MRESENCE™️ in the group communication.
- (b) Any one of the users in the group interaction, while viewing another user’s VR streaming, can hold his/her hand behind his/her smartphone so that the rear camera captures it, and the image of the hand is merged with the other user’s VR streaming. The user can use finger-pointing (or pin-pointing or gesturing) on the other user’s VR streaming while having a voice conversation, adding clarity and accuracy to the visual presentation.
- (c) The image showing the finger-pointing on the VR streaming is transmitted to all the smartphones in the group and appears on their screens.
- (d) The users see the finger-pointing in real time while having a voice discussion.
- (e) In the case where a user in the group interaction is using the web version of MRESENCE™️ at a computer (which has no rear camera) to view the other user’s VR streaming described in (b) above, the user may use the computer’s mouse to point/draw on the other user’s VR streaming. The image of the pointing/drawing made with the mouse is merged with the other user’s VR streaming.
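The fan-out of the annotated frame to every device in the group can be pictured as a simple session broadcast. The sketch below is purely illustrative: the `Session` class, its registry, and the string "frames" stand in for a real media-streaming transport.

```python
# Sketch of fanning an annotated frame out to every device in a group
# session. The session registry and send() transport are invented here.

class Session:
    def __init__(self):
        self.devices = {}            # device_id -> list of received frames

    def join(self, device_id):
        """Register a device (phone, tablet, or web client)."""
        self.devices[device_id] = []

    def send(self, device_id, frame):
        """Stand-in for the network transport to one device."""
        self.devices[device_id].append(frame)

    def broadcast_annotation(self, annotated_frame):
        """Deliver the merged finger-pointing frame to every device in
        the group, including the annotator's own screen."""
        for device_id in self.devices:
            self.send(device_id, annotated_frame)

session = Session()
for d in ("alice-phone", "bob-tablet", "carol-web"):
    session.join(d)
session.broadcast_annotation("frame#1-with-pointer")
# All three devices now hold the same annotated frame.
```

In a real deployment the broadcast would run continuously per video frame, synchronized with the voice channel.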
Cloud-based Managed Mobile Digital Service Provision
• Virtual Interaction & Video Conference
• Conversational AI
• AI&ML-informed speech-enabled Conversational Chatbot
• Conversational Voice AI
• Autonomous Conversations
• Real-time Speech-to-Text Transcription and Translation in Multiple Languages in Group Conversation
TMU.AI’s teams of highly skilled, experienced specialists in AI, ML, computer vision, speech-to-text transcription, text-to-text translation, and text-to-speech technology harness the company’s rich portfolio of developed intellectual property and its global, cloud-based, at-scale managed service platforms. They continue to create and build high-value technology services that are readily and affordably available to, and easily accessible by, the populace, engendering significant social edification for social good.