A revolutionary AI-powered generative web publishing platform that dynamically transforms your website into an immersive, personalized experience.
This is the process of taking a pre-trained LLM and further training it on a smaller, task-specific dataset to adapt it for a particular task or to improve its performance. By fine-tuning, we adjust the model's weights based on our data, making it more tailored to our application's unique needs. The main challenge of this approach is the resource-intensive nature of collecting and labeling training data, as well as the computational power required to train the model. It also requires professional data science skills to optimize the objective function, making it better suited to experienced practitioners or teams with dedicated resources.
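For illustration, a minimal fine-tuning sketch using the Hugging Face Transformers library is shown below; the base model ("gpt2"), the data file name, and the hyperparameters are placeholder assumptions, not a prescribed setup.

```python
# Minimal causal-LM fine-tuning sketch (Hugging Face Transformers).
# Model checkpoint, data file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # assumed small base model; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumed: a small domain-specific corpus, one example per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False makes the collator build next-token-prediction labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # adjusts the pre-trained weights on the smaller dataset
```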
Allows developers to stream rendered frames and audio from a remote GPU-enabled computer (e.g., in the cloud) to their users through a desktop or mobile web browser, without requiring users to have specialized GPU hardware.
A mixed-reality video communications application fully integrated with Microsoft Teams and powered by Azure.
Tokens are the basic units of text or code that an LLM uses to process and generate language. Tokens can be characters, words, subwords, or other segments of text or code, depending on the chosen tokenization method or scheme. Tokens are assigned numerical identifiers, arranged into sequences or vectors, and fed into or output from the model. Tokens are the building blocks of language for the model.
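As an illustration, the snippet below uses the tiktoken library to turn a sentence into numerical token identifiers; the choice of encoding is an assumption.

```python
# Tokenization example using tiktoken; the encoding name is an assumption.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
ids = enc.encode("Tokens are the building blocks of language.")
print(ids)                               # numerical identifiers for each token
print([enc.decode([i]) for i in ids])    # the text segment each identifier maps to
```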
A vector database is a type of database that stores and manages unstructured data, such as text, images, or audio, as vector embeddings (high-dimensional vectors) to make it easy to find and retrieve similar objects quickly. They are becoming popular because of their ability to enable large language models (LLMs) to generate more relevant and coherent text, for example when paired with an AI plugin.
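A minimal sketch of what a vector database does internally, storing embeddings and returning the nearest ones by cosine similarity; the embed() function here is a random placeholder, where a real system would call an embedding model.

```python
# Toy similarity search over stored vectors. embed() is a placeholder
# assumption; a real system would call an embedding model instead.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)       # stand-in for a real embedding vector
    return v / np.linalg.norm(v)       # unit-normalize for cosine similarity

docs = ["pixel streaming overview", "vector database basics", "RTMP setup"]
index = np.stack([embed(d) for d in docs])  # the stored high-dimensional vectors

def search(query: str, k: int = 2):
    scores = index @ embed(query)           # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]      # indices of the k closest documents
    return [(docs[i], float(scores[i])) for i in top]

print(search("how do vector databases work?"))
```

Because the placeholder embeddings are random, the scores here are meaningless; the point is the store-then-rank-by-similarity shape of the operation.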
A state-of-the-art, enterprise-grade Azure OpenAI caching solution and AI governance copilot for full transparency, rigorous risk assessment, and remediation.
A web-based all-in-one video storage, distribution, collaboration, and account management center. Can be accessed across all operating systems and leading browsers with no product installation required.
A powerful platform for hosting beautiful events and immersive experiences. This feature-rich platform gives administrators the ability to customize access and gives users the ability to view video on demand, live streams, AI-generated captions in multiple languages, AI-created articles, and more.
AI technology that generates content based on learned patterns and data.
Microsoft's cloud service for integrating communication features like chat, voice, and video into applications.
Rendering technique using simplified models for faster display, often used in real-time graphics. Examples of proxy-rendered videos can be seen in Touchcast's Conversational Website and some deployments of Touchcast for Teams.
MaaS (Metaverse as a Service) Studio is a Windows-based production tool that utilizes digital 3D environments to create and configure a virtual event. Users can set up cameras, lighting, screens, branding materials, audience, and other functional components to fully immerse the viewers into a Metaverse experience.
Empowers organizations to deploy immersive experiences and products through the power of Azure. The Metaverse-as-a-Service Enterprise offerings allow brands to integrate with Microsoft Teams and the Cloud for greater impact with their customers, at speed and at scale, through every point of the customer journey.
Toolchain is a plugin for Unreal Engine that allows for the creation of compatible environments for integration into MaaS applications, such as MaaS Studio and Touchcast for Teams.
A .tcp is the MaaS Studio project file containing camera movements and sequences, as well as lighting, actor/object properties and coordinates, and more. A .tcp file can also be used for exporting and importing camera animations and sequences to other MaaS Studio users.
A .tcp is the MaaS Studio environment file containing the environment, animations, cameras and screens created using Unreal Engine with the Toolchain plugin.
A 3D virtual set inspired by a venue, a licensed 3D venue, or created from the ground up by the Touchcast Design Team. Iconic venues are created in 3D and can be used to create realistic, cinematic camera movements within the 3D space using MaaS Studio, or can be rendered as 'static' sets (meaning no camera tracking movements) for use in virtual events or videos.
Touchcast artists collaborate with the client to conceive and build a custom venue from scratch. Requests beyond the scope of a traditional event stage (interior venue), such as designing the exterior of a venue, custom fly-in animations, special props to build on stage, and/or special production requests, may require additional time.
A commissioned 3D venue where Touchcast artists replicate an actual, physical venue based on client-provided CAD files, video walk-throughs, measurements, images, and/or other reference materials.
When a speaker’s real-life background is removed and they are placed into one of our immersive virtual environments.
A demonstration to show the feasibility of an idea or technology before full-scale implementation.
A document outlining project tasks, responsibilities, and deliverables.
The ability to change an AI's Constitution quickly without having to re-train a Large Language Model (LLM).
A set of principles that govern risk calculation for generated content.
A caching system that identifies and fixes issues to enable efficient data retrieval.
Microsoft's cloud platform offering various tools and resources.
A concept that envisions the future of the internet where artificial intelligence (AI) and machine learning technologies play a central role in generating a substantial portion of online content. In this vision, AI systems autonomously create and curate text, images, videos, and other digital content based on user preferences, historical data, and real-time information.
The hypothesis that conversational machines could cluster the web and together create the 'Web Singularity', a smart, intelligent web which could reduce or remove the need for supercomputers in every home, as it will rely on knowledge assimilation and consistent semantic learning.
When publishing becomes fully autonomous: websites become self-organizing, self-improving, and able to generate content guided by the publishing team and based on their content, freeing publishers from the constraints of the traditional web. Every webpage is created on the fly for an audience of one.
Pipeline:
A data channel that connects the input point (the production feed from the event's Director) to the output (the player on the page), transforming the stream along the way, for events requiring live streaming. Pipelines can be created using Showtime.
RTMP:
Real Time Messaging Protocol. This is the protocol event Directors can use to stream from their computers to Touchcast Showtime.
A contract that establishes the fundamental agreements between two parties. MSAs allow vendors and clients to agree on basic terms at the outset of a business relationship, which can drastically speed up the negotiation process for future projects and contracts.
A legally enforceable contract that creates a confidential relationship between a person who has sensitive information and a person who will gain access to that information. A confidential relationship means one or both parties has a duty not to share that information.
Procurement is a strategic process that involves the acquisition of goods and services. Unlike purchasing, it consists of a series of steps that are usually taken by businesses to meet certain needs, such as production, inventory, and sales. It often involves a series of documents like demands and receipts for payment.
Refers to the processes and tools designed and deployed to protect sensitive business information from modification, disruption, destruction, and inspection.
A business document that announces a project, describes it, and solicits bids from qualified contractors to complete it.
This is the release candidate, and this environment is normally a mirror of the production environment. The staging area contains the "next" version of the application and is used for final stress testing and client/manager approvals before going live.
This is the currently released version of the application, accessible to the client/end users. This version preferably does not change except during scheduled releases.
A sprint is a short, time-boxed period when a scrum team works to complete a set amount of work.
Unreal Engine is a complete suite of creation tools for game development, architectural and automotive visualization, linear film and television content creation, broadcast and live event production, training and simulation, and other real-time applications.
Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs including websites, web apps, web services, and mobile apps. MaaS Studio developers currently use Visual Studio 2019.
An environment is a digital stage or venue that can be downloaded and configured within MaaS Studio.
Also referred to as the “WYSIWYG (What You See Is What You Get) Configurator”, this is a feature in MaaS Studio that allows users to create UI to trigger custom events. This interactable UI will then appear on a Touchcast for Teams call/experience.
Allows for real objects and real people to seamlessly interact in real-time with computer-generated environments and objects.
Refers to the cameras created or set within MaaS Studio.
A method of transmitting or receiving data (especially video and audio material) over a computer network as a steady, continuous flow, allowing playback to start while the rest of the data is still being received.
The WYSIWYG interface and slick design of Fabric Showcase lend themselves to being a low-code solution for clients looking to curate content without requiring a live event component.
The content management system, or administrative back end, to Showtime. This is where you create the event page, select settings, and add creative assets, speaker details, agenda details, etc.
Refers to an interactive and immersive encounter or environment that is enhanced and enriched by the presence and actions of artificial intelligence (AI) technologies. In such experiences, AI algorithms and systems are used to augment and personalize the user's interaction with the content, often in real-time.
A digital representation of a human in a virtual setting. The ‘AI’ in the term ‘AI avatar’ indicates that the avatar is powered by artificial intelligence.
Refers to a digital representation or model of an individual, incorporating various data points and attributes related to that person.
The art and craft of making motion pictures by capturing a story visually. It comprises all on-screen visual elements, including lighting, framing, composition, camera motion, camera angles, lens choices, depth of field, zoom, focus, color, exposure, and filtration.
Wrapped in LED screens from all directions, MaaS Cube has all the intimacy of a physical venue with the infinite flexibility of a mixed reality space.
An editable and/or movable object within an environment, e.g., a podium, the screens, a car model.
A line-by-line agenda of a live event, or an extremely detailed sequence of activities within a given event.
A method in artificial intelligence that teaches computers to process data in a way inspired by the human brain. It is a type of machine learning process, called deep learning, that uses interconnected nodes, or neurons, arranged in a layered structure.
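As a sketch of the layered node structure this describes, here is a two-layer forward pass in NumPy; the layer sizes and random weights are arbitrary assumptions.

```python
# Two-layer neural network forward pass; sizes and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                           # input features
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)    # layer 1: 4 inputs -> 8 neurons
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)    # layer 2: 8 neurons -> 2 outputs

h = np.maximum(0, W1 @ x + b1)   # hidden layer: weighted sum + ReLU activation
y = W2 @ h + b2                  # output layer: weighted sum of hidden activations
print(y)
```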
An AI framework for retrieving facts from an external knowledge base to ground large language models (LLMs) on the most accurate, up-to-date information.
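A minimal sketch of the retrieve-then-generate flow; retrieve() uses toy keyword matching and build_prompt() only assembles a grounded prompt, both stand-ins for a real vector search and a real LLM call.

```python
# Retrieval-augmented generation (RAG) shape: retrieve facts, then ground
# the model's prompt in them. Both functions are simplified stand-ins.
def retrieve(question: str, knowledge_base: dict, k: int = 2) -> list:
    # Toy keyword overlap; a real system would use vector similarity search.
    scored = sorted(knowledge_base.values(),
                    key=lambda text: -sum(w in text.lower()
                                          for w in question.lower().split()))
    return scored[:k]

def build_prompt(question: str, context: list) -> str:
    # A real implementation would send this prompt to an LLM for generation.
    return ("Answer using only this context:\n" + "\n".join(context)
            + f"\n\nQuestion: {question}")

kb = {"doc1": "MaaS Studio is a Windows-based production tool.",
      "doc2": "RTMP is used to stream to Touchcast Showtime."}
question = "What is MaaS Studio?"
print(build_prompt(question, retrieve(question, kb)))
```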
A set of two related pieces of information: a question and its corresponding answer. These pairs are commonly used in various contexts, including natural language processing and machine learning, for tasks like question-answering systems and chatbots. In these applications, a dataset of question and answer pairs is often used to train models to understand and generate human-like responses to questions posed in natural language. The question serves as an input, and the model generates the appropriate answer based on its training data and algorithms.
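An illustrative example of how such pairs are often represented in practice; the field names are assumptions, not a fixed schema.

```python
# Question-and-answer pairs as a simple dataset; field names are assumptions.
qa_pairs = [
    {"question": "What does RTMP stand for?",
     "answer": "Real Time Messaging Protocol."},
    {"question": "What is a vector database?",
     "answer": "A database that stores unstructured data as vector embeddings."},
]

for pair in qa_pairs:
    # The question serves as model input; the answer is the training target.
    print(f"Q: {pair['question']}\nA: {pair['answer']}\n")
```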
An acronym for "what you see is what you get." WYSIWYG interfaces enable users to access a piece of software's functions without having to write explicit code, allowing editors to configure, use, and manipulate content more easily and effectively. These interfaces usually include a simple drag-and-drop interface for setup.
A website that interacts with users through natural language, often using chatbots or AI interfaces. Touchcast Conversational Websites are the next generation of chatbots.
AI-generated content which combines various media forms such as text, images, audio, and video for a rich and customizable user experience.
A powerful AI model trained on extensive text data for human-like text generation and understanding.
A content management system enhanced with generative AI to create and manage content automatically.
A macOS-based application open to anyone to seamlessly create immersive, generative experiences. From rich presentations to intriguing podcasts to captivating events in any language, Genything unleashes the power of GenAI.