AI Fundamentals for Absolute Beginners: Concepts, Use Cases, and Key Terms

What AI Is and What It Is Not

Chapter 1

Estimated reading time: 5 minutes

Defining AI in Plain Language

Artificial Intelligence (AI) is a broad label for computer systems that perform tasks that usually require human intelligence. In practice, AI is not a single “thing” you can point to inside a computer. It is a set of methods for building software that can recognize patterns, make predictions, generate content, or choose actions based on data.

A useful way to think about AI is: AI is software that learns or adapts behavior from examples or feedback, rather than being fully specified by hand-written rules. Some AI systems learn from large datasets (like many machine learning models). Others learn from trial and error (like reinforcement learning). Some combine learning with hand-crafted logic.

When people say “AI,” they often mean one of these practical capabilities:

  • Prediction: estimating what will happen next (e.g., demand forecasting).
  • Classification: choosing a category (e.g., spam vs. not spam).
  • Recognition: identifying patterns in images, audio, or text (e.g., detecting a defect in a photo).
  • Generation: producing new text, images, code, or audio that resembles learned patterns (e.g., drafting an email).
  • Decision support: recommending actions (e.g., which support ticket to prioritize).

AI is best understood as a tool: it can be powerful in narrow domains, but it does not automatically “understand” the world the way humans do.

What AI Is (Core Characteristics)

AI is pattern-based

Most modern AI systems work by detecting patterns in data. If you show an AI many examples of something, it can learn statistical regularities: which features tend to appear together, which words often follow others, which pixels often form a face, and so on.

Practical example: If you want an AI to identify whether a photo contains a cat, you provide many labeled images (“cat” / “not cat”). The model learns patterns that correlate with the label. It does not “know” what a cat is in the human sense; it learns what visual patterns tend to appear in images labeled “cat.”
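
To make “learning patterns from labeled examples” concrete, here is a minimal Python sketch. It is illustrative only: the two numeric features (and their names) are invented, and real image models learn from raw pixels using neural networks rather than hand-picked features.

    # Toy "cat vs. not cat" classifier: predict the label of the most
    # similar training example (nearest neighbor). Features are made up.
    examples = [
        ((0.9, 0.8), "cat"),       # (ear_pointiness, fur_texture), label
        ((0.8, 0.9), "cat"),
        ((0.2, 0.3), "not cat"),
        ((0.1, 0.2), "not cat"),
    ]

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def classify(features):
        nearest = min(examples, key=lambda ex: distance(ex[0], features))
        return nearest[1]

    print(classify((0.85, 0.75)))  # "cat": it matches the learned pattern

The program never “knows” what a cat is; it only measures how close a new input is to the labeled examples it was given.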

AI is probabilistic, not certain

AI outputs are usually best interpreted as probabilities or best guesses. Even when a system returns a single answer, it is often choosing the most likely option given its training.

Practical example: A medical triage model might output “high risk” based on patterns in symptoms and history. That does not mean the patient definitely has a condition; it means the pattern resembles cases that previously turned out to be high risk.
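
As a minimal Python sketch (the score and threshold are invented, and risk_score stands in for the output of any trained model):

    # A model's output is a score, not a verdict (numbers are made up).
    risk_score = 0.87  # "87% of similar past cases turned out high risk"

    if risk_score >= 0.80:
        print("Flag for human review")  # a recommendation, not a diagnosis
    else:
        print("Routine follow-up")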

AI depends on data and assumptions

AI performance depends heavily on the data used to train it and the assumptions built into the model and training process. If the data is incomplete, biased, outdated, or not representative of real usage, the AI can behave poorly.

Practical example: If a customer support classifier is trained mostly on English tickets, it may misclassify tickets written in other languages or in mixed-language slang.

AI is goal-driven within a defined scope

AI systems optimize for a goal you define (explicitly or implicitly). That goal might be “minimize prediction error,” “maximize click-through,” or “produce text that matches patterns in training data.” The system does not automatically adopt human values or common sense unless you build those constraints in.

Practical example: A recommendation system optimized only for watch time may promote content that keeps attention, even if it is repetitive or low quality, because that is what the goal rewards.
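
Written out as a toy Python sketch (the titles and numbers are invented), the problem is easy to see:

    # A recommender that optimizes a single goal: average watch time.
    videos = [
        {"title": "In-depth tutorial", "avg_watch_minutes": 4.0, "quality": "high"},
        {"title": "Repetitive loop clip", "avg_watch_minutes": 9.0, "quality": "low"},
    ]

    # Maximizing watch time alone picks the low-quality clip,
    # because that is exactly what the goal rewards.
    best = max(videos, key=lambda v: v["avg_watch_minutes"])
    print(best["title"])  # Repetitive loop clip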

What AI Is Not (Common Misunderstandings)

AI is not a human mind

AI does not have consciousness, feelings, desires, or self-awareness. It does not experience emotions or understand meaning the way people do. Even when an AI produces human-like language, that is not proof of human-like understanding; it is evidence that it learned patterns in language.

Practical example: A chatbot may say “I’m sorry you’re going through that.” This can be useful as a conversational style, but it does not mean the system feels empathy.

AI is not always correct or objective

AI can be wrong, sometimes confidently wrong. It can also reflect biases present in training data or introduced by design choices. “The computer said so” is not a guarantee of truth.

Practical example: An AI résumé screener trained on past hiring decisions may learn to replicate past preferences, including unfair patterns, if those patterns exist in historical data.

AI is not magic automation that eliminates work

AI can reduce effort for certain tasks, but it often introduces new work: data preparation, monitoring, quality checks, human review, and process redesign. Many successful AI deployments are “human + AI” systems, not “AI replaces humans.”

Practical example: Using AI to draft customer replies can speed up writing, but teams still need review guidelines, escalation rules, and a way to handle edge cases.

AI is not a single technology

“AI” includes many approaches and model types. Some are simple and interpretable; others are complex and opaque. Treating AI as one monolithic tool leads to unrealistic expectations.

Practical example: A rule-based fraud filter and a deep neural network fraud model are both sometimes called “AI,” but they behave differently, require different maintenance, and provide different kinds of explanations.

AI is not automatically secure or private

AI systems can leak sensitive information if data is handled poorly, if prompts include private details, or if outputs are shared without review. AI also introduces new security concerns, such as prompt injection (tricking a system into revealing or doing something unintended) and data poisoning (corrupting training data).

Practical example: If employees paste confidential customer data into an external AI tool, that data may be stored or used in ways the organization did not intend, depending on the service settings and policies.

AI vs. Traditional Software: A Practical Comparison

Traditional software is typically built with explicit rules: “If X happens, do Y.” AI-based software often learns the mapping from inputs to outputs from examples.

Rule-based approach (traditional)
  • Works well when rules are clear and stable.
  • Easy to test for known cases.
  • Breaks when the real world is messy or when rules become too numerous.
  • Example (illustrative), assuming subject and body hold the email text:

    # Simple rule-based spam filter
    if "FREE" in subject or "WINNER" in body:
        markAsSpam()
    else:
        markAsInbox()

Learning-based approach (AI/ML)
  • Works well when patterns exist but are hard to write as rules.
  • Improves with more and better data.
  • Can fail unexpectedly on new or rare cases.
  • Example (conceptual), where model is a trained spam classifier:

    # ML spam classifier
    probability = model.predict(emailFeatures)
    if probability > 0.90:
        markAsSpam()
    else:
        markAsInbox()

In real projects, teams often combine both: rules for safety and compliance, and AI for flexible pattern recognition.
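
A minimal Python sketch of that combination; the sender rule, the 0.90 threshold, and the stand-in spam_probability function are all invented for illustration:

    def spam_probability(subject, body):
        # Stand-in for a trained model's predicted probability.
        return 0.95 if "winner" in body.lower() else 0.05

    def route_email(sender, subject, body):
        # Hard rule for safety/compliance: trusted senders always pass.
        if sender.endswith("@ourcompany.example"):
            return "inbox"
        # The learned model handles the messy general case.
        if spam_probability(subject, body) > 0.90:
            return "spam"
        return "inbox"

    print(route_email("promo@unknown.example", "Prize", "Dear winner ..."))  # spam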

Understanding “Intelligence” in AI: Narrow vs. General

Most AI you encounter is narrow AI: it performs well on a specific task within a specific context. It can be impressive, but it does not generalize like a human across unrelated tasks without retraining or redesign.

  • Narrow AI example: An AI that transcribes audio to text.
  • Narrow AI example: An AI that detects cracks in manufactured parts from images.
  • Narrow AI example: An AI that suggests the next word in a sentence.

General AI (human-level flexible intelligence across domains) is a concept people discuss, but it is not what typical business and consumer AI systems are today. For beginners, it is more useful to focus on what current AI can do reliably: pattern recognition, prediction, and generation within constraints.

Generative AI: What It Does and What It Doesn’t

Generative AI systems produce outputs (text, images, code, audio) that resemble patterns in their training data. They can be extremely helpful for drafting, brainstorming, summarizing, and transforming content.

What generative AI is good at

  • Drafting: creating first versions of emails, reports, or scripts.
  • Rewriting: changing tone, simplifying language, translating.
  • Summarizing: condensing long text into key points.
  • Structuring: turning messy notes into an outline or checklist.
  • Code assistance: generating boilerplate, explaining snippets, suggesting fixes.

What generative AI is not good at by default

  • Guaranteed factual accuracy: it may produce plausible but incorrect statements.
  • Knowing your private context: unless you provide it, it does not know your internal policies, current inventory, or personal history.
  • Making decisions with accountability: it can suggest, but responsibility remains with the human or organization.
  • Perfect compliance: it can violate constraints if not guided and checked.

Practical example: If you ask a generative AI to write a product description, it may invent features unless you provide a precise spec and require it to stick to it. This is not “lying” in a human sense; it is pattern completion without built-in truth verification.
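
One common mitigation is to put the spec directly into the prompt and forbid additions. A hedged Python sketch: the product facts are invented, and generate stands in for whichever text-generation service you use:

    # Constrain a generative model to a fixed spec (illustrative).
    spec = {
        "name": "TrailLite Backpack",   # invented product facts
        "capacity": "24 L",
        "weight": "780 g",
    }

    prompt = (
        "Write a short product description.\n"
        "Use ONLY the facts below. Do not invent features.\n"
        f"Facts: {spec}"
    )

    # draft = generate(prompt)  # hypothetical call to your text-generation API
    # A human still reviews the draft against the spec before publishing.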

A Step-by-Step Reality Check: Deciding Whether a Task Is a Good Fit for AI

When someone proposes “Let’s use AI,” you can evaluate the idea with a simple step-by-step checklist. This helps separate realistic AI use from hype.

Step 1: Define the task as an input-output problem

Write down what goes in and what should come out.

  • Input examples: an email, an image, a customer profile, sensor readings.
  • Output examples: a category label, a score, a draft response, a recommended action.

Example: Input: customer support ticket text. Output: suggested category and a draft reply.
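
Writing the contract down as explicit types makes the task concrete before any model is chosen. A minimal Python sketch; the field names are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class TicketInput:
        text: str             # raw support ticket from the customer

    @dataclass
    class TicketOutput:
        category: str         # e.g., "billing" or "shipping"
        draft_reply: str      # suggestion for a human agent to review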

Step 2: Decide whether rules are enough

If the task can be solved with stable, simple rules, traditional software may be safer and cheaper.

  • Are the rules clear and unlikely to change?
  • Can you list the exceptions?
  • Would you trust a deterministic system more?

Example: “If order status is ‘delivered’ and customer says ‘not received,’ open a claim.” This is a rule, not necessarily an AI problem.
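
That example really is just a rule, as a few lines of deterministic Python show (the status check and phrase matching are deliberately simplified):

    def should_open_claim(order_status, customer_message):
        # Deterministic rule: no training data or model needed.
        return (order_status == "delivered"
                and "not received" in customer_message.lower())

    print(should_open_claim("delivered", "Package not received!"))  # True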

Step 3: Check if you have (or can get) suitable data

For learning-based AI, data is often the main requirement.

  • Do you have enough examples?
  • Are they labeled (for classification) or do you have outcomes (for prediction)?
  • Is the data representative of real usage?
  • Is it legally and ethically usable?

Example: If you want AI to categorize tickets, you need historical tickets with reliable categories. If categories were inconsistent, the AI will learn inconsistency.
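
A quick consistency check can surface that problem before training. A Python sketch with three invented tickets:

    # Find identical ticket texts that received different labels (toy data).
    history = [
        ("where is my refund", "billing"),
        ("where is my refund", "payments"),   # same text, different label
        ("reset my password", "account"),
    ]

    labels_by_text = {}
    for text, label in history:
        labels_by_text.setdefault(text, set()).add(label)

    inconsistent = {t: ls for t, ls in labels_by_text.items() if len(ls) > 1}
    print(inconsistent)  # {'where is my refund': {'billing', 'payments'}}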

Step 4: Define what “good” means with measurable criteria

AI projects fail when success is vague. Choose metrics and thresholds.

  • Accuracy/precision/recall: for classification tasks.
  • Time saved: for drafting and summarization workflows.
  • Error cost: what happens when the AI is wrong?

Example: “The AI draft should reduce average handle time by 20% while keeping customer satisfaction the same or higher.”
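
Precision and recall are simple to compute once outcomes have been counted against human-verified answers. A sketch with invented counts:

    # Counts from comparing AI answers with verified truth (made up).
    true_positives = 40    # AI flagged it, and it was correct
    false_positives = 10   # AI flagged it, but it was wrong
    false_negatives = 5    # AI missed a real case

    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    print(f"precision={precision:.2f}, recall={recall:.2f}")  # 0.80, 0.89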

Step 5: Decide the role of humans (human-in-the-loop)

Many tasks should keep a human reviewer, especially when errors are costly.

  • Will a human approve outputs before they are used?
  • Which cases must be escalated?
  • How will feedback be captured to improve the system?

Example: For refunds, AI might suggest an action, but a human approves refunds above a certain amount.
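
That gate can be as simple as a threshold check, as in this Python sketch (the amount is invented):

    APPROVAL_LIMIT = 50.00  # refunds above this amount require a human

    def handle_refund(ai_suggested_amount):
        if ai_suggested_amount > APPROVAL_LIMIT:
            return "escalate to human reviewer"
        return "auto-approve and log for later audit"

    print(handle_refund(120.00))  # escalate to human reviewer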

Step 6: Add guardrails and boundaries

Guardrails reduce risk and keep the system within scope.

  • Limit what data can be sent to the AI.
  • Require citations or source links when summarizing internal documents.
  • Use templates and structured prompts for consistent outputs.
  • Block disallowed actions (e.g., “never ask for passwords”).

Example: A support chatbot can be restricted to answering only from an approved knowledge base, and otherwise route to a human.
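
A minimal sketch of such a guardrail in Python; the knowledge-base entries and keyword matching are deliberately simplistic stand-ins for a real retrieval system:

    # Answer only from an approved knowledge base; otherwise hand off.
    APPROVED_KB = {
        "reset password": "Go to Settings > Security > Reset password.",
        "shipping time": "Standard shipping takes 3-5 business days.",
    }

    def answer(question):
        for topic, reply in APPROVED_KB.items():
            if topic in question.lower():
                return reply
        return "Connecting you with a human agent."  # out-of-scope fallback

    print(answer("How do I reset password?"))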

Step 7: Plan monitoring and updates

AI behavior can drift as real-world data changes. Monitoring is part of the system.

  • Track error rates and user complaints.
  • Sample outputs for quality review.
  • Update data, prompts, or models when needed.

Example: If a new product launches, the AI may start making mistakes until it is updated with new documentation and examples.
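
Monitoring can start as simply as tracking a sampled error rate week by week, as in this sketch (numbers invented):

    # Alert when the sampled error rate drifts above a threshold.
    weekly_error_rate = {"week 1": 0.04, "week 2": 0.05, "week 3": 0.12}
    ALERT_THRESHOLD = 0.10

    for week, rate in weekly_error_rate.items():
        if rate > ALERT_THRESHOLD:
            print(f"{week}: error rate {rate:.0%} - review outputs, update data")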

Recognizing AI Hype: Phrases That Often Signal Confusion

Some statements sound impressive but usually hide unclear thinking. When you hear them, ask for specifics.

  • “The AI will understand our customers.” Ask: what inputs, what outputs, and how will we measure success?
  • “It will learn on its own.” Ask: from what data or feedback, and who validates the learning?
  • “It’s unbiased because it’s automated.” Ask: what data was used, and how are fairness and errors evaluated?
  • “We just need to plug in AI.” Ask: what process changes, guardrails, and monitoring are required?

Practical Examples: Correct Expectations vs. Incorrect Expectations

Example 1: AI for meeting notes

Correct expectation: AI can transcribe audio, summarize key points, and draft action items that a human reviews.

Incorrect expectation: AI will produce perfect minutes with no mistakes, capture every decision accurately, and know which items are confidential without being told.

Example 2: AI for customer support chat

Correct expectation: AI can answer common questions, gather basic details, and route complex issues to humans.

Incorrect expectation: AI will resolve every issue end-to-end, handle all edge cases, and never produce an unsafe or incorrect instruction.

Example 3: AI for hiring assistance

Correct expectation: AI can help structure job descriptions, anonymize résumés, or highlight skills, with careful oversight and fairness checks.

Incorrect expectation: AI can “objectively” choose the best candidate without human judgment or without examining bias in the data and criteria.

How to Talk About AI Precisely (Simple Vocabulary for Beginners)

Clear language prevents confusion. When discussing AI, try to specify:

  • Task: what the system does (classify, summarize, recommend, generate).
  • Input: what data it uses (text, images, tables, logs).
  • Output: what it produces (label, score, draft, decision suggestion).
  • Constraints: what it must not do (privacy limits, compliance rules).
  • Evaluation: how you will measure quality (metrics, human review).
  • Deployment: where it runs and who uses it (internal tool, customer-facing).

Practical example: Instead of saying “We need AI for sales,” say: “We want a model that predicts which leads are likely to convert in the next 30 days, using CRM activity data, and we will evaluate it by lift over our current prioritization method.”
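
The same request can be captured as a structured, reviewable spec. A sketch in Python; every value is an example to adapt:

    # The precise request written as a checkable specification.
    lead_scoring_spec = {
        "task": "predict which leads convert within 30 days",
        "input": "CRM activity data (emails, calls, page visits)",
        "output": "conversion score between 0 and 1 per lead",
        "constraints": "use only CRM fields approved by legal",
        "evaluation": "lift over the current prioritization method",
    }
    print(lead_scoring_spec["task"])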

When AI Should Not Be Used (Or Should Be Used Carefully)

Some situations require extra caution or may not be a good fit:

  • High-stakes decisions without oversight: medical, legal, safety-critical actions should not rely on unreviewed AI outputs.
  • When errors are very costly: if a wrong answer causes major harm, you need strong controls and often deterministic checks.
  • When data is scarce or unreliable: AI cannot learn well from poor examples.
  • When requirements demand full explainability: some AI models are hard to interpret; you may need simpler methods or additional explanation tools.
  • When privacy constraints are strict: you may need on-device or private deployments, or avoid sending sensitive data to external services.

AI is most effective when the task is well-defined, data is available, and the workflow includes verification and accountability.

Now answer the exercise about the content:

Which statement best matches how AI differs from traditional rule-based software?

AI commonly uses learning from examples to detect patterns and make probabilistic predictions, unlike rule-based software with explicit rules. Its performance depends on data and assumptions and may fail on unfamiliar cases.

Next chapter

Data as the Fuel: Examples, Labels, and Quality
