Claude Certified Architect Foundations: The Complete Guide - Part 1
CCA-F exam prep - domains, code examples, and practice questions. All in one place.
The Claude Certified Architect Foundations exam was recently launched. It is Anthropic’s first official technical certification.
I have been building agentic systems with Claude for a long time. When I heard about the exam, I wanted to understand what it actually tests.
So I went through the entire exam guide and took some mock tests to understand what they cover.
Below are the sections you need to understand to become a Claude Certified Architect.
Agentic Architecture & Orchestration
Tool Design & MCP Integration
Claude Code Configuration
Prompt Engineering & Structured Output
Context & Reliability
Each section is detailed enough to be its own blog. We will cover the first three in this part. The remaining sections along with how to register will be in Part 2.
I have also put together a set of practice questions with answers on GitHub. I will keep adding more as I come across them.
1. Agentic Architecture & Orchestration
This section carries 27% of your exam score. It covers the Claude Agent SDK, the library you use to build production-grade agents. Below are the key Agent SDK concepts that turn a normal LLM into an agent.
stop_reason
Conversation History
Loop Termination
Multi-Agent Systems
Task Decomposition
Hooks
Session Management
A normal LLM call is simple. You send a message, Claude replies, done. An agent is different. It is multiple LLM calls chained together until the task is fully complete. It includes tool calling, getting results, and deciding what to do next.
Let us start with stop_reason. It is the core of every agent loop.
stop_reason
Every Claude API response has a field called stop_reason. It tells you why Claude stopped. This is what decides whether your loop continues or ends. stop_reason has multiple values. The three you need to know for the exam are:
If stop_reason == "tool_use" then it means Claude wants to call a tool. Then you execute the tool, append the result, and loop again.
If stop_reason == "end_turn" then Claude is done. We can return the response to the user. Loop ends.
If stop_reason == "max_tokens" then the response got cut off. This happens when Claude hits the max_tokens limit you set in the API call.
If the response genuinely needs more tokens, increase max_tokens. If the response is too verbose, then restructure your prompt to produce a shorter output. Never return a cut-off response as it is incomplete.
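One way to handle the max_tokens case programmatically is to retry with a larger limit up to a hard cap, and give up once the cap is reached. A minimal sketch; the doubling policy and the cap value are assumptions for illustration, not API constants:

```python
MODEL_MAX_TOKENS = 8192  # assumed cap for this sketch, not an API constant

def next_max_tokens(current: int, cap: int = MODEL_MAX_TOKENS):
    """Return a larger max_tokens for a retry, or None once the cap is hit."""
    if current >= cap:
        return None  # give up and restructure the prompt for shorter output instead
    return min(current * 2, cap)
```

On a "max_tokens" stop_reason you would call next_max_tokens(1024), retry the request with the new limit, and stop retrying once it returns None.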
Here is stop_reason handling, with all three cases, in code.
import anthropic
import json

client = anthropic.Anthropic()

class Agent:
    def __init__(self, tools):
        self.tools = tools

    def run(self, task: str) -> str:
        # Every conversation starts with the user task
        messages = [{"role": "user", "content": task}]
        while True:
            # Claude does not remember previous calls,
            # so we pass the full conversation history every time
            response = client.messages.create(
                model="claude-sonnet-4-6",
                max_tokens=1024,
                tools=self.tools,
                messages=messages
            )
            # Check the stop_reason to see whether the task is complete
            if response.stop_reason == "end_turn":
                return extract_text(response.content)
            # If stop_reason is tool_use, execute the tool
            elif response.stop_reason == "tool_use":
                # Step 1 — save what Claude said, including which tool it wants to call
                messages.append({
                    "role": "assistant",
                    "content": response.content
                })
                for block in response.content:
                    if block.type == "tool_use":
                        # Step 2 — we execute the tool, not Claude
                        result = tool_map[block.name](**block.input)
                        # Step 3 — send the tool result back so Claude can continue
                        # tool_use_id tells Claude which tool call this result belongs to
                        messages.append({
                            "role": "user",
                            "content": [{
                                "type": "tool_result",
                                "tool_use_id": block.id,
                                "content": json.dumps(result)
                            }]
                        })
                # Claude now sees the result and decides what to do next
            else:
                # stop_reason == "max_tokens" — response was cut off, never return incomplete results
                raise RuntimeError(f"Unexpected: {response.stop_reason}")
The loop runs until Claude returns end_turn.
Handling stop_reason is one of the most important concepts in agentic workflows. Learn more in the official docs.
Now that you understand stop_reason, let us look at how Claude keeps track of everything that happens in the loop.
Conversation History
In agentic workflows, Claude has no memory between API calls. Every time you call the API, you have to pass the full conversation history. That is how Claude knows what happened before.
The messages array holds that history. User messages, Claude responses, and tool results all get appended there until the task is done.
So how does the messages array grow? Every time stop_reason == "tool_use", you append two things.
First, you append the response Claude just sent. This contains the tool name Claude wants to call and the input values it decided to pass.
messages.append({
    "role": "assistant",
    "content": response.content  # text block + tool_use block
})

Then you execute the tool and append the result back. The tool_use_id in the result must match the one from the response we appended previously. This tells Claude that this result is for the tool call it requested.
Now you loop again. Claude reads everything in the messages array and decides what to do next.
for block in response.content:
    if block.type == "tool_use":
        result = tool_map[block.name](**block.input)
        messages.append({
            "role": "user",
            "content": [{
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": json.dumps(result)
            }]
        })

Do not add text immediately after tool results. Send the tool results directly, without additional text.
Loop Termination
An agent loop can terminate for different reasons. We need to identify each termination reason and handle it. If we do not, the agent either runs forever, returns incomplete results, or takes actions it should not.
Here are the three ways we handle it.
stop_reason == "end_turn" - Claude is done. Return the response to the user and exit the loop.
result.get("requires_human") == True - escalate to a human and exit the loop.
MAX_ITERATIONS = 10 - the loop hit the safety cap. Raise an error and exit.
In all three cases, we handle the termination and never return incomplete results. Here is how all three work together in code.
MAX_ITERATIONS = 10

def escalate_to_human(result: dict) -> None:
    # Notify your human review system — ticket, Slack, email etc.
    print(f"Escalating to human: {result}")

def extract_text(content) -> str:
    # Extract the text block from Claude's response
    for block in content:
        if hasattr(block, "text"):
            return block.text
    return ""

class Agent:
    def __init__(self, tools):
        self.tools = tools

    def run(self, task: str) -> str:
        # Every conversation starts with the user task
        messages = [{"role": "user", "content": task}]
        for i in range(MAX_ITERATIONS):
            # Claude does not remember previous calls,
            # so we pass the full conversation history every time
            response = client.messages.create(
                model="claude-sonnet-4-6",
                max_tokens=1024,
                tools=self.tools,
                messages=messages
            )
            # Claude is done — return the response to the user
            if response.stop_reason == "end_turn":
                return extract_text(response.content)
            elif response.stop_reason == "tool_use":
                # Append Claude's response to history
                messages.append({
                    "role": "assistant",
                    "content": response.content
                })
                for block in response.content:
                    if block.type == "tool_use":
                        # We execute the tool
                        result = tool_map[block.name](**block.input)
                        # Tool requires human intervention — escalate and exit the loop
                        if result.get("requires_human"):
                            escalate_to_human(result)
                            return "Escalated to human review"
                        # Append tool result so Claude can continue
                        messages.append({
                            "role": "user",
                            "content": [{
                                "type": "tool_result",
                                "tool_use_id": block.id,
                                "content": json.dumps(result)
                            }]
                        })
        # Max iterations reached — exit with error, never return incomplete results
        raise RuntimeError("Agent hit max iterations without completing")

Multi-Agent Systems
You now know how a single agent works. But some tasks are too complex for one agent to handle alone.
That is where multi-agent systems come in. You break the work across multiple specialized agents. Each one is the same Agent class we defined in the previous sections, just with different tools and a different responsibility.
search_agent = Agent(tools=[web_search_tool])
analysis_agent = Agent(tools=[analysis_tool])
report_agent = Agent(tools=[report_tool])

Anthropic uses a pattern called hub-and-spoke to coordinate agents. Think of it like this. One coordinator receives the task, breaks it down, and delegates to specialized agents. Each agent does its part and reports back. The coordinator collects everything and produces the final response.
Now, when do you run agents in parallel and when in sequence?
If the tasks are independent, run them in parallel. If one task needs the output of another, run them in sequence.
import asyncio

# Assumes Agent.run is async in this multi-agent setup

# Step 1 — Independent subagents, run in parallel
search_result, document_result = await asyncio.gather(
    search_agent.run("Search web for competitors"),
    analysis_agent.run("Analyze our internal documents")
)

# Step 2 — This subagent depends on Step 1, so it runs after
# Context is passed through the prompt — subagents see nothing else
report_result = await report_agent.run(
    f"Generate a report from: {search_result} and {document_result}"
)

# Step 3 — Coordinator collects all results and produces the final response
final_result = await coordinator_agent.run(
    f"Finalize using: {search_result}, {document_result}, {report_result}"
)

Every subagent starts with a fresh context. It does not have access to the coordinator's conversation history. The only way to pass context to a subagent is through its prompt. So be specific and detailed in what you pass.
# Too vague — subagent does not know enough to do its job well
search_agent.run("Search web for competitors.")

# Right — give the subagent exactly what it needs
search_agent.run("Search web for competitors in the B2B SaaS space. Focus on pricing and features.")

Agents never talk directly to the user. They always report back to the coordinator. The coordinator owns the task from start to finish.
Task Decomposition
Task decomposition is an important part of building multi-agent systems. When you build one, remember that your coordinator agent should never delegate a vague task directly to a subagent.
Take this for example. Your coordinator receives “Research the competitive landscape for our project management tool.” If it passes this directly to a subagent, the subagent does not know where to start. You will get different results every run.
Break it down first.
Search Google for “Notion vs Linear pricing plans 2026”
Extract pricing tiers, feature limits, and target user segments for both
Generate a comparison table showing where your product has an advantage
Each subtask goes to one specialized agent. Each agent has one job and knows exactly what to return.
The agents are not the problem. Vague instructions are.
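The decomposition step can be as simple as pairing each concrete subtask with the agent that owns it. A sketch for the example above; the subtask labels and agent routing are illustrative, not part of any SDK:

```python
# Hypothetical subtask list for the example above; labels are illustrative
subtasks = [
    ("search", 'Search Google for "Notion vs Linear pricing plans 2026"'),
    ("analysis", "Extract pricing tiers, feature limits, and target user segments for both"),
    ("report", "Generate a comparison table showing where your product has an advantage"),
]

def route(subtasks, agents):
    """Pair each concrete subtask with the specialized agent that owns it."""
    return [(agents[kind], prompt) for kind, prompt in subtasks]
```

The coordinator would then dispatch each (agent, prompt) pair, in parallel where the subtasks are independent.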
Hooks
When you build an agent, some actions need to be controlled programmatically. You cannot rely on Claude to always follow prompt instructions. That is where hooks come in.
Hooks are callback functions you register in the Claude Agent SDK. They fire automatically at specific points during agent execution. No prompts involved.
Two hooks you need to know for the exam:
PreToolUse:
It fires before a tool executes. Since it runs before, it can block the action. Use it when you want to stop Claude from doing something, like blocking rm -rf commands, preventing writes to .env, or requiring human approval before processing a refund above $500.
PostToolUse:
PostToolUse fires after a tool executes. It cannot block anything since the action already happened. Use it for things like auto-formatting a file after Claude edits it, writing to an audit log after a deletion, or running a linter after every code change.
Here is a real example. Your agent has a process_refund tool. Any refund above $500 needs human approval before it goes through.
from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient, HookMatcher

async def block_large_refunds(input_data, tool_use_id, context):
    tool_name = input_data["tool_name"]
    tool_input = input_data["tool_input"]
    # Only intercept the process_refund tool
    if tool_name != "process_refund":
        return {}  # allow all other tools to proceed
    # Block refunds above $500 — requires human approval
    if tool_input.get("amount", 0) > 500:
        return {
            "hookSpecificOutput": {
                "hookEventName": input_data["hook_event_name"],
                # deny stops the tool from executing
                "permissionDecision": "deny",
                # this reason is sent back to Claude so it knows why it was blocked
                "permissionDecisionReason": "Refunds above $500 require human approval"
            }
        }
    # Allow refunds under $500 to proceed normally
    return {}

options = ClaudeAgentOptions(
    hooks={
        # PreToolUse fires before the tool executes
        "PreToolUse": [
            # matcher tells the SDK which tool to intercept
            HookMatcher(matcher="process_refund", hooks=[block_large_refunds])
        ]
    }
)

async with ClaudeSDKClient(options=options) as client:
    await client.query("Process a $600 refund for order ORD-999")
    async for message in client.receive_response():
        print(message)

The key difference between hooks and prompt instructions is reliability. A prompt instruction like “never process refunds above $500” is not guaranteed. Claude may follow it or not. A hook always fires. If you want something enforced without exception, put it in a hook.
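A PostToolUse hook follows the same shape but cannot block; it only observes. Here is a sketch of an audit-log hook. The delete_record tool name and the audit.log destination are assumptions for illustration:

```python
import datetime
import json

AUDIT_LOG = "audit.log"  # assumed destination; use your real logging system in production

async def audit_deletions(input_data, tool_use_id, context):
    """PostToolUse hook: append every delete_record call to an audit log."""
    if input_data["tool_name"] != "delete_record":  # hypothetical tool name
        return {}
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": input_data["tool_name"],
        "input": input_data["tool_input"],
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return {}  # PostToolUse cannot block; the action already happened
```

You would register it the same way as the refund hook, but under "PostToolUse" with a HookMatcher for the delete_record tool.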
Session Management
A session contains the full conversation history. The SDK saves it to disk automatically so you can come back to it later.
Here is what you need to know.
Session Resumption
Session Isolation
Session Forking
Session Resumption
Imagine your agent is halfway through a long task and the server crashes. Without session resumption, it starts from scratch. With it, you pass the session ID and the agent continues from where it stopped.
Here is how you implement it.
from claude_agent_sdk import ClaudeAgentOptions, query, ResultMessage

# Start a session and save the session ID
session_id = None
async for message in query(prompt="Analyze the refund flow in our codebase"):
    if isinstance(message, ResultMessage):
        session_id = message.session_id

# Resume later — agent has full context from before
async for message in query(
    prompt="Now fix the bug you found",
    options=ClaudeAgentOptions(
        resume=session_id,
        allowed_tools=["Read", "Edit", "Write", "Glob", "Grep"],
    ),
):
    if isinstance(message, ResultMessage) and message.subtype == "success":
        print(message.result)

Session Isolation
Each user or task gets its own session. If two users share the same session, Claude will mix up their context and produce wrong results. Always create a new session for each user or task.
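In practice this means keeping a per-user map of session IDs and never handing one user's session to another. A minimal sketch; the in-memory dict is an assumption for illustration, and a real deployment would persist this in a database:

```python
class SessionRegistry:
    """Keeps one session ID per user so their contexts never mix."""

    def __init__(self):
        self._sessions = {}  # user_id -> session_id; use a database in production

    def get(self, user_id: str):
        """Return the user's session ID, or None if they have no session yet."""
        return self._sessions.get(user_id)

    def set(self, user_id: str, session_id: str) -> None:
        self._sessions[user_id] = session_id
```

Before calling query() for a user, look up their session ID and pass it as resume=; after the run, store the new session ID from ResultMessage. A get() that returns None means start a fresh session.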
Session Forking
Sometimes you want to try a different approach from where you are, without affecting your current session. For example, your agent has analyzed a codebase and you want to try both a REST and a GraphQL implementation. Instead of starting over, you fork the session. Both forks start from the same point. The original stays untouched.
from claude_agent_sdk import ClaudeAgentOptions, query

# Fork — original session stays unchanged
options = ClaudeAgentOptions(
    resume=session_id,
    fork_session=True  # new session ID, same history
)

async for message in query("Try a GraphQL approach instead", options=options):
    print(message)

To learn more about working with sessions, check out the Claude Agent SDK session guide.
2. Tool Design & MCP Integration
MCP stands for Model Context Protocol. It is an open standard for connecting your agent to external services. For example, your agent can query a database, fetch GitHub issues, or send a Slack message through an MCP server.
If you are new to MCP, check out our complete MCP Server Guide that covers everything from scratch.
Below are the key concepts.
MCP Components
Tool Descriptions
Structured Error Responses
Tool Distribution
MCP Server Configuration
MCP Components
An MCP server consists of three things.
Tools are functions exposed by the MCP server. When Claude calls a tool, the MCP server executes that function. Claude decides when to call the tool and what input to send.
Resources are read only data exposed by the MCP server. They give Claude extra context for the task. For example, a resource can be an API spec, database schema, product catalog, or project document.
Prompts are reusable instructions exposed by the MCP server. They tell Claude how to handle a repeated task. In supported clients, prompts can also be triggered as slash commands.
Here is a simple MCP server example. This one connects to PostgreSQL and shows all three components.
from mcp.server.fastmcp import FastMCP
import psycopg2
import os

mcp = FastMCP("DatabaseServer")

# Connect to PostgreSQL — use environment variables for credentials
conn = psycopg2.connect(
    host=os.getenv("DB_HOST"),
    database=os.getenv("DB_NAME"),
    user=os.getenv("DB_USER"),
    password=os.getenv("DB_PASSWORD")
)

# Tool — model calls this to query the database
@mcp.tool()
def get_order(order_id: str) -> dict:
    """Look up an order by order ID from the database."""
    cursor = conn.cursor()
    cursor.execute("SELECT id, status, item FROM orders WHERE id = %s", (order_id,))
    row = cursor.fetchone()
    if not row:
        return {"error": "Order not found"}
    return {"id": row[0], "status": row[1], "item": row[2]}

# Resource — exposes the full product catalog from the database
@mcp.resource("data://product-catalog")
def get_product_catalog() -> str:
    """Returns all products from the database."""
    cursor = conn.cursor()
    cursor.execute("SELECT name, price FROM products")
    rows = cursor.fetchall()
    return "\n".join([f"{row[0]} - ${row[1]}" for row in rows])

# Prompt — can appear as a slash command in supported clients
@mcp.prompt()
def review_order_prompt() -> str:
    """Template for reviewing an order status."""
    return """You are an order review specialist.
When reviewing an order:
- Check the current status
- Identify any issues
- Suggest next steps"""

if __name__ == "__main__":
    mcp.run()

Once this MCP server is running and connected to your agent, you can query it directly.
“What is the status of order ORD-999?”
The agent calls the get_order tool, queries the database in real time, and returns the result.
Tool Descriptions
When Claude has multiple tools, it uses the tool description to decide which one to call. If the description is vague, Claude may pick the wrong tool.
A good tool description should clearly say three things.
What the tool does
What input it expects
When to use it and when not to
# Bad — too vague, Claude cannot differentiate
@mcp.tool()
def get_order(order_id: str) -> dict:
    """Retrieves order details."""
    ...

# Good — clear boundaries, Claude knows exactly when to use it
@mcp.tool()
def get_order(order_id: str) -> dict:
    """Look up an order by order ID.
    Use this when you need order status, items, and shipping details.
    Do not use this for customer account lookups."""
    ...

Structured Error Responses
When a tool fails, do not return a generic error. Claude needs to know what went wrong so it can decide what to do next.
In production, return more than just an error. Tell Claude what went wrong, whether it can retry, and what to do next.
Here are a few common categories.
Transient - If the external service is down or timed out, we return the error as transient. It means the failure is temporary and usually safe to retry.
Validation - If the input is missing or invalid, we return the error as validation. It means Claude should fix the input before calling the tool again.
Business - If the tool runs, but the action is not allowed by your system rules, we return the error as business. It means there is no point retrying the same request.
Permission - If the user or agent does not have access to perform the action, we return the error as permission. It means access should be granted first before retrying.
@mcp.tool()
def process_refund(order_id: str, amount: float) -> dict:
    """Process a refund for a given order."""
    # validation — bad input, no point retrying
    if not order_id:
        return {"error": True, "errorCategory": "validation", "isRetryable": False, "message": "order_id is required"}
    # business — order is not eligible, refund window has expired
    # (order_status would come from a database lookup in a real server)
    if order_status == "delivered_over_30_days":
        return {"error": True, "errorCategory": "business", "isRetryable": False, "message": "Refund window has expired. Orders can only be refunded within 30 days of delivery."}
    # transient — service down, safe to retry
    if not payment_service.is_available():
        return {"error": True, "errorCategory": "transient", "isRetryable": True, "message": "Payment service unavailable. Try again."}
    return {"success": True, "refundId": "REF-001"}

Never return a generic error like {"error": "Operation failed"}. The more context you give Claude, the better it recovers.
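On the agent side, these categories map directly to recovery actions. A sketch of that dispatch; the action names are assumptions for illustration, not SDK constants:

```python
def recovery_action(result: dict) -> str:
    """Decide what the agent loop should do with a structured tool response."""
    if not result.get("error"):
        return "continue"      # success, keep going
    category = result.get("errorCategory")
    if category == "transient" and result.get("isRetryable"):
        return "retry"         # temporary failure, retry (ideally with backoff)
    if category == "validation":
        return "fix_input"     # Claude should correct the input and call again
    if category == "permission":
        return "escalate"      # access must be granted before retrying
    return "abort"             # business errors: no point retrying the same request
```

The agent loop would call this on every tool result and branch on the returned action instead of treating all errors the same way.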
Tool Distribution
Give each agent only the tools it really needs. Too many tools can confuse Claude. It may call the wrong tool or take the wrong action.
# Bad — agent has access to tools it should never use
research_agent = Agent(tools=[
    web_search_tool,
    process_refund_tool,  # research agent should never process refunds
    delete_record_tool,   # research agent should never delete records
    synthesis_tool
])

# Good — agent only gets what it needs
research_agent = Agent(tools=[
    web_search_tool,
    synthesis_tool
])

Use allowed_tools and disallowed_tools in the Claude Agent SDK to control which tools each agent can access.
options = ClaudeAgentOptions(
    mcp_servers={"orders": order_server},
    allowed_tools=[
        "mcp__orders__get_order",  # pre-approve read operations
        "mcp__orders__get_customer"
    ],
    disallowed_tools=[
        "mcp__orders__delete_record",  # block delete operations
        "mcp__orders__update_record"   # block write operations
    ]
)

To learn more, check the official SDK reference.
MCP Server Configuration
When you add an MCP server to your project, you need to decide where to configure it. There are two scopes.
Project level — .mcp.json in your repo. It can be shared with the whole team via version control.
User level — ~/.claude.json. It is personal and experimental. It is not shared.
// .mcp.json — project level, committed to the repo
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["@anthropic-ai/mcp-server-github"],
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}" // never hardcode secrets
      }
    },
    "database": {
      "command": "python",
      "args": ["./mcp_servers/database_server.py"],
      "env": {
        "DB_URL": "${DATABASE_URL}"
      }
    }
  }
}

Use existing community MCP servers for standard integrations like GitHub, Slack, and Jira. Build custom servers only for internal or team-specific workflows.
Check the official docs for MCP and Agent SDK integration.
3. Claude Code Configuration & Workflows
Claude Code deserves its own blog. We have covered the important concepts that will be useful for your daily development as well as for the exam.
CLAUDE.md
Custom Commands and Skills
Path-specific Rules
Plan Mode vs Direct Execution
Headless Mode
Hooks
Built-in Tools
CLAUDE.md
CLAUDE.md is the first file Claude reads when it starts a session. It tells Claude the project conventions and rules.
It supports three levels:
User level — ~/.claude/CLAUDE.md. Applies to all your projects.
Project level — CLAUDE.md in the repo root. Applies to the whole project. Commit this to Git.
Directory level — CLAUDE.md inside a subdirectory. Applies only when Claude is working in that folder.
repo/
├── CLAUDE.md            # applies to everything
├── frontend/
│   └── CLAUDE.md        # applies only in /frontend
└── backend/
    └── CLAUDE.md        # applies only in /backend

It can be placed directly in the repo root or inside the .claude/ directory. Both work the same way.
CLAUDE.md is advisory. Claude reads it and tries to follow it, but compliance is not guaranteed. If you have a constraint that must always run without exception, put it in a hook. Hooks fire programmatically and always run regardless of what Claude decides.
Custom Commands and Skills
Skills give domain-specific expertise to Claude. You package your team’s workflows, standards, and best practices into a SKILL.md file. Claude picks them up automatically when the task matches, or you call them directly with a slash command.
Skills live in two places:
.claude/skills/ — project level. Shared with the team via Git.
~/.claude/skills/ — user level. Personal, not shared.
Each skill lives in its own subdirectory. For example, a review-security skill would be at .claude/skills/review-security/SKILL.md.
<!-- .claude/skills/review-security/SKILL.md -->
---
name: review-security
description: Run a security audit on the current file
allowed-tools: Read, Grep
context: fork
argument-hint: [file-path to review]
---
Review this file for SQL injection, XSS, and hardcoded secrets.
Output findings as JSON with severity, line number, and recommendation.

Three fields to know:
allowed-tools - Controls which tools the skill can use. For example, if you set Read, Grep, the skill can only read files and search. It cannot write or run commands.
context: fork - Runs the skill in a separate session. Whatever Claude does in that session stays there. Use this when you want Claude to explore something without affecting your main session.
argument-hint - Shows a hint when someone types the slash command. It helps the user understand what input to provide.
All fields are optional. But always include description so Claude knows when to load the skill automatically. Check the full frontmatter reference for all available fields.
Custom Commands were the older way to define slash commands in Claude Code. They lived in .claude/commands/*.md. Skills have now replaced them. If you are starting fresh, use Skills.
You can also find community-built skills at skills.sh.
Path-specific Rules
Rules live in .claude/rules/ as markdown files. When you want path-specific rules for your codebase, you add a paths field in your rules.
<!-- .claude/rules/frontend-rules.md -->
---
paths:
  - "frontend/**/*.ts"
  - "frontend/**/*.tsx"
---
Always use React functional components.
Never use class components.
Use Tailwind for styling. Never use inline styles.

<!-- .claude/rules/backend-rules.md -->
---
paths:
  - "backend/**/*.py"
---
Always use FastAPI for routes.
Business logic goes in services/. Never in routes/.
Always add type hints.

When you write rules like the above, Claude picks up the right rules based on the file it is touching. This is more token efficient than putting all rules in one big CLAUDE.md.
Note: The official docs use paths:, but some versions of Claude Code work more reliably with globs:. If paths: does not work, try globs: with your pattern instead.
Plan Mode vs Direct Execution
By default, Claude executes tasks immediately. It reads files, writes code, runs commands without stopping.
Plan mode changes that. Claude outlines what it is going to do first. No files written, no commands run. You review the plan and approve before anything happens.
claude --permission-mode plan

Use plan mode for large-scale or destructive changes. For small, well-scoped tasks, just run directly.
Headless Mode
Headless mode runs Claude non-interactively using the -p flag. No terminal prompts. No approvals. Used for CI/CD pipelines and automation scripts.
claude -p "Review this PR for security vulnerabilities" --output-format json > report.json
Use --output-format json when you need machine-readable JSON output.
Read more about these modes in the Claude Code docs.
Hooks
We already covered hooks in the Claude Agent SDK section. The concept is the same but the implementation is different in Claude Code.
In the Agent SDK, hooks are Python callback functions. In Claude Code, hooks are shell commands configured in .claude/settings.json. They fire automatically at specific points in Claude Code’s lifecycle.
As we already saw, two hooks matter here.
PreToolUse fires before a tool executes and can block the action. PostToolUse fires after tool execution. Here is how they look in Claude Code.
{
  "hooks": {
    "PreToolUse": [{
      "matcher": "Bash",
      "hooks": [{"type": "command", "command": "echo \"$CLAUDE_TOOL_INPUT\" | grep -q 'rm -rf' && exit 2 || exit 0"}]
    }],
    "PostToolUse": [{
      "matcher": "Edit|Write",
      "hooks": [{"type": "command", "command": "jq -r '.tool_input.file_path' | xargs npx prettier --write"}]
    }]
  }
}

Built-in Tools
Claude Code ships with built-in tools. As an architect, you need to know when each one is the right choice.
Tool              When to use
----              -----------
Read              Read the contents of a file
Write             Create or overwrite a file
Edit              Make targeted changes to a specific file
Bash              Run shell commands
Grep              Search for patterns inside file contents
Glob              Find files by name or pattern
LSP               Code intelligence — jump to definitions, find references, get type errors
Agent             Spawn a subagent with its own context window to handle a task
WebFetch          Fetch content from a URL
WebSearch         Search the web
Skill             Run a skill within the main conversation
NotebookEdit      Edit Jupyter notebook cells
AskUserQuestion   Ask the user a multiple choice question to clarify requirements

These are the most commonly used tools. Claude Code has many more built-in tools. Check the full tools reference for the complete list.
Up Next:
That is it for Part 1.
We covered agentic architecture, MCP integration, and Claude Code configuration.
In Part 2, we will move to the remaining areas. That includes prompt engineering, structured outputs, context management, reliability, and the registration flow.
I have added the mock test materials to this GitHub page. I will keep adding more questions as I find useful ones.
If you are planning to take the exam, use this blog and the mock questions as a practical guide. But before the exam, make sure you also go through the official docs once.
See you in Part 2.

