Artificial Intelligence (AI)
The simulation of human intelligence by machines, enabling them to perform tasks like reasoning, learning, problem-solving, and understanding language.
Glossary entries sourced from the uploaded workbook. Each card shows the term, acronym when present, category, level, and definition. Entries are grouped by category for faster scanning.
Foundational terms covering models, training, data, and core machine learning concepts.
Machine Learning (ML)
A subset of AI where systems learn from data to improve their performance over time without being explicitly programmed for each task.
Deep Learning
A branch of machine learning that uses multi-layered neural networks to model complex patterns in large datasets, powering applications like image recognition and natural language processing.
Neural Network
A computing system loosely modeled on the human brain, composed of interconnected nodes (neurons) that process information in layers to recognize patterns and make decisions.
Large Language Model (LLM)
A type of AI model trained on massive text datasets to understand and generate human language. Examples include Claude, GPT-4, and Gemini.
Foundation Model
A large AI model trained on broad data that can be adapted for a wide range of downstream tasks, serving as a general-purpose base.
Algorithm
A set of rules or instructions that a computer follows to complete a task or solve a problem. In AI, algorithms define how a model learns from data.
Training Data
The dataset used to teach an AI model. The quality and diversity of training data directly impact the model's performance and fairness.
Model
In AI, a model is the output of a machine learning algorithm after it has been trained on data — essentially the learned rules used to make predictions or generate outputs.
Training
The process of exposing an AI model to large amounts of data so it can learn patterns, relationships, and rules. Training adjusts the model's internal parameters until it can reliably perform its intended task.
Inference
The process of using a trained AI model to generate outputs or predictions on new, unseen inputs. Inference is what happens when you interact with an AI tool.
Concepts related to prompts, generation behavior, retrieval, and common interaction patterns.
Generative AI
A category of AI that can create new content — text, images, audio, code, or video — based on patterns learned from training data.
Prompt
The input or instruction you provide to an AI model to guide its response. Crafting effective prompts is a key skill for getting useful AI outputs.
Prompt Engineering
The practice of designing and refining prompts to improve the quality, accuracy, or relevance of AI-generated outputs.
Context Window
The maximum amount of text (measured in tokens) an AI model can process in a single interaction. Content outside this window is not considered by the model.
Token
The basic unit of text that AI language models process. A token is roughly 3–4 characters or about 0.75 words. Models have limits on how many tokens they can handle at once.
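The character and word ratios above can be turned into a quick back-of-envelope estimator. This is a minimal sketch of the rule of thumb only — real tokenizers are model-specific and will differ:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4-characters-per-token rule of thumb."""
    return max(1, round(len(text) / 4))

def estimate_tokens_by_words(text: str) -> int:
    """Alternative estimate using the ~0.75-words-per-token rule of thumb."""
    return max(1, round(len(text.split()) / 0.75))
```

Useful for sanity-checking whether a document will fit in a context window before pasting it into a tool.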
Temperature
A parameter that controls the randomness of an AI model's output. Higher temperature produces more creative or varied responses; lower temperature produces more predictable, focused outputs.
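Under the hood, temperature rescales the model's raw scores before they become probabilities. A minimal sketch of that mechanism (the logit values are invented for illustration):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into probabilities, scaled by temperature.

    Dividing by a low temperature sharpens the distribution (predictable picks);
    a high temperature flattens it (more varied picks).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# At temperature 0.1 the top option dominates; at 10.0 choices are nearly even.
focused = softmax_with_temperature([2.0, 1.0, 0.1], temperature=0.1)
varied = softmax_with_temperature([2.0, 1.0, 0.1], temperature=10.0)
```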
Hallucination
When an AI model generates information that sounds plausible but is factually incorrect or fabricated. Always verify AI-generated facts against reliable sources.
System Prompt
Instructions given to an AI model before user interaction begins, used to set its behavior, persona, or constraints. Often used by developers and platform administrators.
Few-Shot Prompting
A prompting technique where you provide a few examples within your prompt to guide the model's output format or style.
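A minimal sketch of how a few-shot prompt can be assembled; the sentiment-labeling examples and the "Input:/Output:" layout are invented for illustration, not a required format:

```python
def build_few_shot_prompt(examples, new_input):
    """Assemble a few-shot prompt: worked examples first, then the new case."""
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    lines.append(f"Input: {new_input}\nOutput:")  # model completes this line
    return "\n\n".join(lines)

examples = [
    ("The refund took six weeks.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]
prompt = build_few_shot_prompt(examples, "The app keeps crashing on launch.")
```

The examples anchor both the task and the expected output format, so the model mimics them for the new input.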
Zero-Shot Prompting
Asking an AI model to complete a task without providing any examples, relying entirely on the model's pre-trained knowledge.
Retrieval-Augmented Generation (RAG)
A technique that combines an AI model with an external knowledge source, allowing it to retrieve relevant information before generating a response — improving accuracy on specific topics.
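The retrieve-then-generate flow can be sketched in a few lines. This toy version scores documents by keyword overlap as a stand-in for the embedding similarity a real RAG system would use; the documents and stopword list are invented for illustration:

```python
import re

STOPWORDS = {"the", "is", "a", "an", "of", "to", "what", "on", "from"}

def keywords(text):
    """Lowercase, strip punctuation, and drop common stopwords."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOPWORDS}

def retrieve(query, documents, top_k=1):
    """Return the top_k documents sharing the most keywords with the query."""
    q = keywords(query)
    ranked = sorted(documents, key=lambda d: len(q & keywords(d)), reverse=True)
    return ranked[:top_k]

def build_rag_prompt(query, documents):
    """Retrieve supporting context, then ask the model to answer from it."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The cafeteria is open from 8am to 3pm on weekdays.",
]
prompt = build_rag_prompt("What is the refund policy?", docs)
```

Because only the retrieved document reaches the model, answers stay grounded in the supplied source rather than the model's general training data.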
Deep Research
A mode available in some AI tools (including ChatGPT and Gemini) that actively searches the web and follows multiple sources before generating a response — rather than relying solely on training data. Deep Research is slower than standard queries but produces more comprehensive, citable outputs on complex topics.
Meta-Prompting
The practice of asking an AI model to write or improve a prompt for you before running the actual task. Instead of crafting a complex prompt from scratch, you describe your goal and ask the model to generate the best query — then use or refine that query to get your output. Useful when you know what you want but not how to ask for it.
Chain of Thought (CoT)
A reasoning approach where an AI model works through a problem step by step before producing a final answer, rather than jumping straight to a conclusion. Chain of thought reasoning improves accuracy on complex tasks and is often visible in tools like Claude and ChatGPT as the model thinks out loud before responding.
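A minimal sketch of a prompt wrapper that elicits step-by-step reasoning; the question and the "Answer:" convention are invented for illustration:

```python
def chain_of_thought_prompt(question):
    """Wrap a question with an instruction to reason step by step before answering."""
    return (
        f"{question}\n\n"
        "Think through this step by step, showing your reasoning, "
        "and only then give your final answer on a line starting with 'Answer:'."
    )

prompt = chain_of_thought_prompt(
    "A license costs $14 per user per month for 250 users. What is the annual cost?"
)
```

Asking for visible intermediate steps makes arithmetic and logic errors easier to spot during review.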
Terms covering interfaces, assistants, agentic behavior, multimodal systems, and connectors.
Chatbot
A software application designed to simulate conversation with humans, often powered by AI. Examples include Claude, ChatGPT, and Microsoft Copilot.
Copilot
A term used (especially by Microsoft) for AI assistants embedded in productivity tools to help with tasks like writing, coding, or data analysis.
API
Application Programming Interface. In AI, an API allows software systems to communicate with an AI model programmatically, enabling integration into products and workflows.
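A minimal sketch of what an API integration looks like from the calling side. The endpoint URL, model name, and field names below are placeholders patterned after common chat-completion APIs, not any specific vendor's schema:

```python
import json

# Placeholder endpoint; a real integration would use the provider's documented URL
# and send this body in a POST request with an Authorization header (API key).
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(user_message, model="example-model", temperature=0.2):
    """Build the JSON request body for a chat-style model call."""
    payload = {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

body = build_request("Summarize this quarter's sales trends.")
```

The same structured request can come from a script, a product backend, or a workflow tool — that programmability is what distinguishes an API from a chat interface.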
AI Agent
An AI system that can autonomously take actions — such as browsing the web, writing code, or managing files — to complete multi-step tasks on your behalf.
Multimodal AI
An AI model capable of processing and generating multiple types of content, such as text, images, audio, and video, within the same system.
Text-to-Image Generation
A type of generative AI that creates images from written descriptions. Examples include DALL-E, Midjourney, and Adobe Firefly.
Embedding
A numerical representation of text or data that captures semantic meaning, allowing AI models to understand relationships and similarity between concepts.
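Similarity between embeddings is typically measured with cosine similarity. A minimal sketch using toy 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and the values here are invented):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 = same direction, ~0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": related concepts point in similar directions.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.05, 0.9]
```

This is the comparison a vector database runs at scale when it looks up the most relevant stored content.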
Vector Database
A type of database optimized for storing and searching embeddings, commonly used in RAG systems to find relevant content quickly.
Custom GPT
A personalized version of ChatGPT that you configure with specific instructions, a defined persona, and uploaded reference materials. Custom GPTs allow you to create a reusable AI assistant tailored to a specific role, workflow, or use case — without re-entering context each session. The equivalent in Claude is a Project.
NotebookLM
A Google AI tool that lets you upload documents, PDFs, and other sources and then ask questions or generate summaries grounded in that specific content. Useful for synthesizing research, meeting transcripts, or reference material without relying on the model's general training data.
Connector
An integration that links an AI tool to an external system — such as your email, calendar, file storage, or CRM — allowing the model to access and act on real data from those sources. Connectors extend what AI can do beyond a single conversation, enabling more personalized and context-aware outputs.
Agentic AI
An AI system that can independently plan and execute a sequence of actions to complete a goal — such as searching the web, drafting a document, sending a message, or running code — with minimal step-by-step human instruction. Distinct from a standard chatbot, which responds to one prompt at a time. Agentic AI represents the next major evolution in how AI is used at work.
Terms related to governance, risk, privacy, safeguards, bias, and organizational responsibility.
Bias
Systematic errors or unfair outcomes in AI outputs caused by biased training data or flawed model design, which can reflect or amplify societal inequalities.
Explainability
The degree to which an AI model's decision-making process can be understood and interpreted by humans. Also called Explainable AI (XAI).
Confidence vs. Accuracy
Confidence is how certain an AI model appears to be in its output; accuracy is whether that output is actually correct. A model can be highly confident and still be wrong — which is why human review of AI outputs remains essential.
Responsible AI
A framework for designing, developing, and deploying AI systems in ways that are ethical, transparent, fair, and aligned with human values.
AI Governance
The policies, processes, and oversight structures that organizations put in place to ensure AI is used appropriately, safely, and in compliance with regulations.
Guardrails
Constraints or safety mechanisms built into AI systems to prevent harmful, inappropriate, or off-policy outputs.
Human-in-the-Loop (HITL)
A design approach where humans are involved in reviewing or approving AI outputs before they are acted upon, ensuring accountability and reducing risk.
AI Alignment
The research field focused on ensuring AI systems behave in ways that are consistent with human values, intentions, and goals — especially as models become more powerful.
Data Privacy
The principle that personal and sensitive information used in AI systems must be collected, stored, and processed in compliance with privacy laws and ethical standards.
Shadow AI
The use of AI tools that have not been approved by your organization — including free consumer versions of licensed tools, or tools outside sanctioned platforms. Shadow AI creates data privacy and security risks because organizational settings and protections cannot be verified. When in doubt, use only company-provided accounts.
Personally Identifiable Information (PII)
Any data that can be used to identify a specific individual — such as names, email addresses, employee IDs, salary details, or performance records. PII must never be entered into AI tools that are not covered by your organization's approved and governed accounts. If unsure, check with Legal before proceeding.
More technical concepts used to describe model architectures, tuning, hardware, and performance characteristics.
Natural Language Processing (NLP)
A field of AI focused on enabling computers to understand, interpret, and generate human language in a meaningful way.
Natural Language Understanding (NLU)
A subset of NLP focused specifically on the machine's ability to comprehend the meaning and intent behind human language.
Computer Vision
A field of AI that enables machines to interpret and understand visual information from images and video, similar to how humans use their eyes.
Transformer
A neural network architecture that is the foundation of most modern LLMs. It uses a mechanism called 'attention' to weigh the importance of different parts of an input.
Fine-Tuning
The process of further training a pre-trained foundation model on a smaller, task-specific dataset to improve its performance on a particular use case.
Parameters
The internal variables of an AI model learned during training. Models are often described by their parameter count (e.g., billions of parameters), which is a rough indicator of capability.
Overfitting
When an AI model learns the training data too precisely — including its noise and anomalies — and performs poorly on new, unseen data.
Reinforcement Learning from Human Feedback (RLHF)
A training technique where human evaluators rate AI outputs, and those ratings are used to fine-tune the model toward more helpful, safe, and accurate responses.
GPU
Graphics Processing Unit. Originally designed for rendering graphics, GPUs are now widely used to accelerate AI model training and inference due to their ability to perform many calculations simultaneously.
Latency
The time delay between sending a request to an AI model and receiving a response. Low latency is important for real-time applications.
Operational concepts for adoption, workplace usage, reusable prompts, projects, and organizational execution.
AI Literacy
The ability to understand, evaluate, and effectively use AI tools in your day-to-day work — regardless of technical background.
Automation
Using technology, including AI, to perform tasks with minimal human intervention, freeing up time for higher-value work.
Augmentation
Using AI to enhance human capabilities rather than replace them — helping people work faster, smarter, and more creatively.
AI Integration
The process of embedding AI tools into existing business processes to improve efficiency and output quality.
AI Maturity
A measure of how deeply and effectively an organization has adopted AI — spanning awareness, experimentation, integration, and optimization stages.
Use Case
A specific scenario or task where AI can be applied to solve a problem or improve a process. Identifying strong use cases is the first step to responsible AI adoption.
Change Management
The structured approach to transitioning individuals, teams, and organizations to adopt new tools or ways of working — including AI.
Prompt Library
A curated collection of effective, reusable prompts designed for specific tasks or workflows, helping teams get consistent, high-quality AI outputs.
Markdown
A lightweight text formatting syntax that uses plain symbols (like # for headings or ** for bold) to structure content. Markdown files are widely used for documentation, SOPs, and knowledge bases — and are easily readable by both humans and AI tools.
Projects (Claude)
A feature in Claude that creates a persistent, organized workspace where you can store custom instructions, upload reference documents, and maintain conversation history — so Claude retains context across sessions for a specific use case or team need.
Projects (ChatGPT)
A feature in ChatGPT that groups related conversations and files together, allowing the model to reference shared context across chats within that project space.
Skills (Claude)
A pre-configured instruction set or workflow loaded into Claude to guide how it approaches a specific type of task — such as creating documents, analyzing data, or following a defined process. Skills help standardize AI outputs across a team.
Custom Instructions
A settings feature in ChatGPT (under Settings > Personalization) that lets you tell the model who you are, how you work, and what kinds of outputs you prefer — so it applies that context automatically in every conversation. The equivalent in Claude is the Project instructions field. Setting this up once significantly improves the relevance of every response.
PTCO Framework
Persona, Task, Context, Output — a four-part prompting framework used at Monotype to structure effective AI prompts. Persona defines the role you want AI to adopt; Task is what you want it to do; Context provides relevant background; Output specifies the format or length of the response. Using PTCO consistently improves the quality and repeatability of AI outputs.
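The four PTCO parts can be assembled mechanically, which is one way to make a prompt library consistent. A minimal sketch — the field contents below are invented examples, not prescribed wording:

```python
def ptco_prompt(persona, task, context, output):
    """Assemble a prompt from the four PTCO parts: Persona, Task, Context, Output."""
    return (
        f"Persona: {persona}\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Output: {output}"
    )

prompt = ptco_prompt(
    persona="You are a senior brand copywriter.",
    task="Draft three tagline options for a font-licensing campaign.",
    context="Audience is enterprise design teams; tone is confident, not salesy.",
    output="A numbered list of three taglines, each under ten words.",
)
```

Filling in the same four labeled slots every time keeps prompts complete and easy for teammates to reuse.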