Setup - Your AI coding environment
15 min read · 1-2 h for setup and testing
How many times have you rebuilt the same setup from scratch - a new project, a new Copilot, zero context, starting as if the previous few months never happened? And Copilot once again knows nothing about your project and suggests libraries you do not use. Every reset costs time and focus you will not get back.
I sit in front of two tabs, each with 4 open terminals. Some agents work locally, some in the cloud, and on a more intense day I can have as many as 20 agents working in parallel. More than 10 on one not-very-powerful machine can very quickly eat all memory and CPU. It also happens that I close the laptop, go to sleep, and in the morning a finished result is waiting for me from a few tasks that wrapped up without me.
And that is exactly why this setup makes such an impression: it no longer looks like “I have a clever AI to help me.” It starts to resemble a small, well-organized operating company built around one developer. In this chapter I am not going to open up the whole backstage yet. We will build the foundation: a working terminal, GitHub Copilot CLI, your first repo instructions file, your first test, and a simple workspace model that we will build the rest on later.
One important thing before we go further: I show this course on Copilot because I want to operate on one concrete runtime and on the official docs, not on a mix of loose analogies. But the pattern itself is broader than one product. Claude, Codex, Antigravity, and similar tools differ in file names, UX details, and the way some functions are enabled, but in practice they rely on the same building blocks: always-active instructions, rules for selected folders or file types, agent definitions, sets of tools, and a validation loop. So if you use another tool, this course will still be useful to you. The exact paths and file names will change. The orchestration logic will not.
Lesson 1: From one terminal to scale that starts to look like a company
The hook is simple: before I show you how to configure this, I want you to see what is at stake. This course is not about how to write one clever prompt. It is a course about turning your laptop into a workstation where you actually ship faster, more calmly, and with more range.
In my setup you feel the scale first, and only later do you learn the mechanics. And that matters: at the start you do not need to understand the whole orchestration. It is enough to see that this is not a tutorial for playing around, but a real working system across multiple projects.
At this stage you do not need to see the whole kitchen yet. It is enough to catch the pattern: over time specialized agents appear, separate skills with their own SKILL.md, more precise instructions for selected folders, and ready-made slash prompts for repeatable tasks. An arsenal like that does not come out of nowhere. It starts with good setup and good context.
When you want to show that scale publicly, but without revealing the backstage, a listing like that looks more like this:
design
coder
code-review
deploy
reporter
test-fixer
This is also the moment when my own way of thinking changes. When I open a workspace like this, I no longer think “it is just an editor and a chat.” I think something closer to: this is an environment where different roles can operate in parallel, and I hold the direction and the quality bar. I will show the exact effect of that scale only in Chapters 4 and 5. Right now we are building the first, most boring and at the same time most important block.
And here is the key point at the beginning of the course: an ecosystem like this starts with one file. One AGENTS.md or one repo-wide instructions document and one agent. Yours will start that way too. And it will look completely different from any example shown here - and that is exactly the point of this course. You are not copying someone else’s setup. You are building your own.
Exercise
Open your current repo and answer three questions for yourself:
- Which 3-5 areas of the project would an agent need to know so it stops guessing?
- Where are your most important directories?
- Which 2-3 rules are so important that the agent must not break them?
If you cannot answer those immediately, that is a very good signal that a repo instructions file will genuinely be useful to you.
Summary
For now, we are not building the whole company of agents yet. We are setting up the desk, the network, and the onboarding. That is enough to make the first working proof in a moment.
Glossary - key concepts
Before we move on, it is time to name a few things directly. This course uses specific terms, and it is worth understanding them operationally from the start, not intuitively.
Agent - a system made up of a model, an action loop, a goal, tools, and exit criteria. This is the working definition for the whole course. Not chat, not a macro, not a script. The LLM is the reasoning engine that interprets the goal and decides what to do. The loop is the iteration mechanism - the agent acts, checks the result, and acts again. The goal is a precise description of the end state: what must be ready. Tools are the concrete operations the agent can perform: run a bash command, read a file, save a result, call git, query an API. Exit criteria are the condition that ends the work - without them the agent runs indefinitely or stops in the wrong place. That exact difference separates an agent from a normal chat: chat answers once, an agent iterates until the job is done.
Tools - functions available to the agent while it works. Without tools, an agent is only a language model locked inside a text field. With tools, it becomes a worker that does something real: commits, writes tests, calls CI, deploys. In this course you will see tools such as bash, git, file read/write, and API calls.
More concepts - Fleet, Forge, Chain, Exit criteria - are waiting for you in Chapter 2. You will need them there, because you will run them yourself.
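To make the loop and exit criteria concrete, here is a toy shell sketch. The "work" is a stand-in counter, purely illustrative - this is not how Copilot runs agents internally, only the shape of the loop from the definition above:

```shell
# Toy illustration of the agent loop: act, check the result, act again,
# and stop only once the exit criterion holds. The "work" is a stand-in.
progress=0
until [ "$progress" -ge 3 ]; do   # exit criterion: three completed steps
  progress=$((progress + 1))      # a real tool call would happen here
  echo "iteration $progress: result checked"
done
echo "exit criterion met after $progress iterations"
```

Notice the difference from chat: nothing here answers once and stops. The loop keeps acting until the exit condition is true.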
Lesson 2: Installation - Warp + GitHub Copilot CLI
Before I show you how to install it - a short story that explains why I ended up in the terminal instead of an IDE.
When I started with Copilot in VS Code, I would easily fall into a strange rhythm: I gave the agent a task, watched it spin slowly, and because I got bored I would grab my phone and open Instagram. I was burning time instead of working in parallel. The natural impulse is to open two VS Code sessions at once, then three - but at three the computer started overheating, Copilot timed out, or VS Code simply crashed. After a crash like that, when I lost the context of three parallel conversations at once, GitHub Copilot CLI stopped looking like a curiosity and started looking like a solution.
This setup does not need to take two days. In practice you need two things: a terminal you like working in, and GitHub Copilot CLI. In this chapter I show everything in Warp, because it demonstrates multi-window work well. But something else matters more: Copilot CLI also works outside Warp. The terminal should help you, not block the entrance.
Copilot CLI is the execution environment for agents - it gives them access to the terminal, files, and git as tools. Without that execution layer, an agent remains only a language model. With it, it starts working for real.
Why Warp?
Warp is convenient because of easy window management, command suggestions, general comfort, and the fact that once the number of sessions grows, it stays easy to switch between them. But if you prefer iTerm2, Alacritty, Windows Terminal, or the standard system terminal, you can still go through this entire chapter without Warp. What matters most is that you have a stable place to launch copilot.
Installation
Copilot CLI works on macOS, Windows, and Linux. Official installation docs: https://gh.io/copilot-install
Quick path if you already have npm:
npm install -g @github/copilot
copilot --version
copilot
After running copilot you will see the CLI start screen. If it asks you to log in, type /login inside the running session. That is it - from there we will be writing agents, not installing the tool.
Verification
This is not about “something got installed.” We want the first concrete proof that the environment actually works.
# 1. Is the CLI visible?
copilot --version
# 2. Start the CLI
copilot
# 3. If needed - log in inside the session
/login
# 4. Accept trusted directory for the current project
# 5. Ask two test questions:
# What is the tech stack of this project?
# What are the cardinal rules of this project?
If the agent answers concretely - based on your AGENTS.md or repo-wide instructions, not with generic filler - the setup works. This is the moment when you see that the context really works. If not, see the section below.
Troubleshooting
If something does not work, in most cases it is one of these:
- You are not logged in - start copilot and type /login inside the session.
- You did not accept the prompt about access to the directory - the agent cannot see the repo. Start again and confirm it.
- You do not have an active GitHub Copilot subscription - the CLI will start, but it will not answer correctly.
- You chose the npm path without Node.js installed - install Node.js and try again.
- You have a networking issue in WSL2 - run wsl --shutdown and try one more time.
Exercise
Install copilot on your system and get to the point where you can enter a CLI session without errors.
You have just laid the foundation. Proof: copilot --version returns a version number, and the copilot session starts without errors. Not every developer has this configured correctly - now you do.
Summary
You already have the first block: a working tool. Not smart yet, not specialized yet, but ready. Now it needs onboarding.
Lesson 3: Your first repo instructions file - from generic AI to a specialist for your project
The simplest analogy is still the best one: a new developer without onboarding gets lost. AI without context does exactly the same. That is why a repo instructions file is not an extra. It is the document that sets the role, the boundaries, and the map of the project.
Most often, the best starting point is one file in the repo: AGENTS.md. It is the most portable place to start, because that format is understood not only by Copilot, but also by other agent tools. If you work mainly in Copilot, you can also keep the same content in .github/copilot-instructions.md. In practice, both files play a similar role: they establish the always-active context for the repo. The difference is mainly in the naming and in how closely you want to stay on Copilot-native surfaces.
As the project grows, you add the next official layers: .github/instructions/*.instructions.md for rules that depend on path or file type, .github/agents/*.agent.md for specialized agents, .github/skills/<skill>/SKILL.md for portable capability folders, and .github/prompts/*.prompt.md for ready slash prompts in VS Code. ~/.copilot/ also exists, but as a user-level runtime/config directory: for agents, skills, hooks, and configuration, and in VS Code also for user-level .instructions.md files. Those two layers can coexist. In practice, you often have a shared set of global agents and skills in ~/.copilot/agents and ~/.copilot/skills, and you only reach for .github/agents/ and .github/skills/ once a given project needs its own local additions.
It is also worth knowing the Copilot-native repo-wide surface: .github/copilot-instructions.md. In my real repos, that file is the entry point for always-active instructions today, so below you will see examples taken directly from there. That is not a contradiction. For a student, I recommend AGENTS.md as the first move, because it is more portable and less confusing. In my working monorepo, I keep .github/copilot-instructions.md, because the whole system there is already organized around it.
And one more important thing: you do not need any theatrical persona. You do not have to write that the agent “is someone” in a fictional sense. A simple working role description is enough, something like: an AI assistant working on this project, writing in Polish, and not breaking defined rules.
In my mature global workspace, a document like that is already much richer. Below I am showing an example of this kind of section from a real .github/copilot-instructions.md.
## Cardinal rules
1. **Human decides** - Rafal approves: breaking changes, architecture, pricing, UX. Ask before you do anything irreversible.
2. **Do not push directly to the main production branch** - work on feature branches and merge changes through Pull Request; if the repo has an intermediate stage, describe it explicitly and stick to that path
3. **Do not break the system** - Run tests before and after the change. If something fails, fix it or roll it back.
4. **Ask when unsure** - `ask_user` for critical decisions.
Later, when you ask the agent about project rules, this is exactly the kind of result you expect: it should quote concrete rules, not invent its own safe list of generic advice. Notice the second rule. The point is not a magic formula for one ideal flow, but a clear description of the safe path for making changes in a given repo.
And here is an important clarification: this is a rule to adapt to your environment, not a universal recipe for every repo. The broader point is simpler: do not push directly to the main production branch. If your project has an extra intermediate stage, such as staging, preprod, or a separate integration branch, describe it explicitly. If you do not have that layer, the rule “do not push directly to the main branch” is completely enough to start.
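As a sketch of that safe path, here is the feature-branch flow from rule 2, run against a throwaway local repo so nothing real is touched. The branch name is illustrative:

```shell
# Feature-branch flow sketch in a throwaway repo (needs git >= 2.28 for -b).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "init"
git checkout -q -b feature/add-retry-logic   # work here, never on main
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "feat: retry"
# in a real repo: git push -u origin feature/add-retry-logic, then open a PR
git branch --show-current                    # → feature/add-retry-logic
```

The point is not the exact commands but the shape: changes are born on a branch and reach the main branch only through a Pull Request.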
The second important element is a map of the codebase. This is where you can clearly see the difference between “AI is guessing something” and “AI knows where to look.” Below is a fragment from the real file /home/raff/projects/awesomeworks/callwise/.github/copilot-instructions.md.
## Codebase discovery
| Looking for... | Check |
| ----------------------------- | --------------------------------------------------------------- |
| All DB models (58 tables) | `backend/models.py` |
| Routers / endpoints | `backend/app/main.py` + `backend/app/routes.py` + `*_routes.py` |
| Celery tasks | `backend/tasks/` |
| Auth / JWT validation | `backend/auth.py` |
| Frontend hooks (per domain) | `frontend/src/hooks/` |
| Design system | `docs/design-system.md` |
After you ask about architecture or endpoints, the agent now has a very concrete starting point. Instead of wandering through the repo, it goes straight to the right files. That is the whole point.
At the start, though, you do not need to write 200 lines. In the course you have a ready-made template in the file templates/agents-template.md.
# [Project Name] - AGENTS
## Role of the agent
[Briefly: who the agent is in this repo and what language it should use]
## Cardinal rules
1. **Human decides** - critical decisions (breaking changes, architecture, pricing) always require human approval
2. **Do not push directly to the main branch** - work on feature branches, and merge changes through Pull Request; if you have an intermediate environment such as `staging` or `preprod`, route PRs there first
3. **Do not break the system** - check tests before the change, run tests after the change
4. **Ask when unsure** - better to ask than to guess
## Repo map
## Tech stack
## Testing and validation
## Response format
Once you save a file like that, the agent gets its first meaningful onboarding. And one more practical note: the template is intentionally simpler than a mature workspace. That is good. In Chapter 1, you want a working foundation, not an encyclopedia of everything. The minimum at the start is 5 sections: role, cardinal rules, repo map, tech stack, and validation.
If you work only in Copilot and prefer a Copilot-native layout, you can copy the same content into .github/copilot-instructions.md. The logic of the document stays the same.
If later you start working across multiple repositories, you can add a global runtime with agents and skills in ~/.copilot/, and leave repo-level instructions for the context of a specific project. But for now, let us stay with a simple foundation: one repo, one good document, zero chaos.
Exercise
Create AGENTS.md in your repo and fill in at least 5 sections:
┌─────────────────────────────────────────────────┐
│ AGENTS.md - minimum structure                   │
├─────────────────────────────────────────────────┤
│ 1. Role of the agent   (who the agent is)       │
│ 2. Cardinal rules      (what is / is not ok)    │
│ 3. Repo map            (where everything is)    │
│ 4. Tech stack          (what we use)            │
│ 5. Validation          (how to check output)    │
└─────────────────────────────────────────────────┘
If you want to start faster, below you have a generic starter to copy and adapt:
# AGENTS.md
## Role of the agent
You are an AI assistant working in this repository.
First read the code and the project context, then propose or make changes.
Do not guess architecture or intent if they can be established from the files.
Reply concisely and show the operator only what helps them make the next decision.
## Cardinal rules
1. First understand the existing code, then edit.
2. Do not introduce breaking changes without an explicit signal.
3. After changes, run the narrowest sensible validation.
4. Do not touch unrelated files just to "clean things up".
5. If a decision affects architecture, cost, security, or deployment, stop and ask for a human decision.
## Repo map
- Backend entry point: [where the application starts]
- Domain logic: [where services / use cases live]
- UI / frontend: [where screens and components live]
- Tests: [where tests and smoke checks live]
- Documentation: [where docs, ADRs, PRD, design docs live]
## Tech stack
- Backend: [language, framework, ORM]
- Frontend: [framework, language, styling]
- Database: [engine]
- Infra / CI: [deployment, pipeline, hosting]
## How to work in this repo
- First look for the nearest place that actually controls the behavior.
- Prefer small, local changes over broad refactors.
- If there is a neighboring test for the logic you are changing, use it as the first validation.
- If the repo has an existing pattern, match it instead of introducing a new style without a real reason.
## Testing and validation
- Backend: [test command]
- Frontend: [test command]
- Lint / typecheck: [command]
- Smoke / manual check: [what to verify after the change]
## Response format
- Say briefly what changed.
- Say how to check it.
- If there is risk or missing validation, state it directly.
- If the next step is obvious, suggest it instead of leaving the user with an empty "done".
If you want, you can later mirror the same content into .github/copilot-instructions.md, but at the start you do not need two files at once.
Minimum target: 40+ lines of real context about your project.
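You can check that minimum automatically with a small shell helper - a sketch that assumes the file sits in the current directory and uses "## " for its section headings:

```shell
# Self-check for the exercise: enough lines and enough "## " sections.
check_agents() {
  file=${1:-AGENTS.md}
  [ -f "$file" ] || { echo "missing $file"; return 1; }
  lines=$(wc -l < "$file")
  sections=$(grep -c '^## ' "$file")
  echo "lines=$lines sections=$sections"
  [ "$lines" -ge 40 ] && [ "$sections" -ge 5 ]
}
# usage: check_agents          (run from your repo root)
```

Run check_agents from your repo root; it prints the counts and exits non-zero when the minimum is not met.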
You have just turned a generic chatbot into a specialist for your project. Proof: the file AGENTS.md has at least 40 lines - concrete architecture, stack, and rules the agent could never have guessed on its own.
Summary
At this stage, the agent does not need to be brilliant yet. It just needs to stop guessing. A good repo instructions file does exactly that.
Lesson 4: First test - checking whether the agent is really reading your context
This is my favorite moment in Chapter 1, because this is where the theory ends. You have a terminal. You have copilot. You have your instruction file. Now you need the first proof that it all actually clicks.
The simplest test looks like this:
copilot
If the session asks for authorization the first time, type this inside it:
/login
And then ask two questions:
What is the tech stack of this project?
What are the cardinal rules of this project?
Good output looks very concrete. The agent should:
- refer to technologies that you actually listed in the file,
- quote or paraphrase the rules from AGENTS.md or repo-wide instructions,
- use the language and tone that match the instructions,
- point to real directories or files if the question requires it.
You will also recognize bad output immediately. If the agent asks you what framework you use, guesses libraries, or answers like a generic chatbot from the internet, it means the context did not land.
In my setup, that kind of test quickly exposes quality. When I ask about a project with a well-prepared code map, the agent comes back with things like: backend/models.py, backend/app/main.py, backend/auth.py, frontend/src/hooks/. Not because it has magical intuition, but because it got good onboarding and knows how to connect it to real files.
If you want to run a third test, ask: “Describe the directory structure and tell me what I can find where.” That usually exposes the difference between a project with context and a project without it really well.
Exercise
Ask the agent 3 questions about your project:
- What is the tech stack of this project?
- What are the cardinal rules of this project?
- Describe the directory structure and tell me what I can find where.
The success criterion is simple: the answers should come from your file and your repo, not from improvisation.
You have just seen the difference between AI that guesses and AI that knows. Proof: the agent’s answers quote your stack and your rules - not generic examples from the internet.
Summary
You already have the first proof. Not a promise, not marketing, but real agent behavior that reads your context and answers from the project.
Lesson 5: Workspace structure and what comes next
A working agent in one repo is a great start. But it is also worth seeing what a workspace looks like once, over time, it turns into a bigger system of work. Not to overwhelm you - so that you know this chapter is the beginning of something larger.
Below is a fragment from the real file /home/raff/projects/awesomeworks/.github/copilot-instructions.md, section Monorepo.
## Monorepo
| Project | Path | Description | Instructions |
|---------|---------|------|------------|
| **CallWise** | `callwise/` | AI Call Scoring SaaS - **core product, production** | `callwise/.github/copilot-instructions.md` |
| Background | `background/` | Background worker tasks (Celery) | `background/.github/copilot-instructions.md` |
| CallWise Mobile | `callwise-mobile/` | Mobile app | `callwise-mobile/.github/copilot-instructions.md` |
| CourseAI | `courseai/` | AI coding course | `courseai/.github/copilot-instructions.md` |
| Ask | `ask/` | Ask AI - product chatbot | `ask/.github/copilot-instructions.md` |
| Prompts | `prompts/` | Agents and skills (**separate git repo**) | this file |
| Docs | `docs/` | Cross-project documentation | - |
| Scripts | `scripts/` | DevOps / utility scripts | - |
And if you want to remember it as a simple template that matches the official docs and at the same time stays close to a real multi-repo working setup, I draw it like this:
<workspace-root>/
├── shared-runtime/
│ ├── agents/
│ │ ├── planner.agent.md
│ │ └── reviewer.agent.md
│ ├── skills/
│ │ ├── debugging/
│ │ │ └── SKILL.md
│ │ └── refactoring/
│ │ └── SKILL.md
│ └── workspace-defaults.instructions.md
├── project-1/
│ ├── AGENTS.md
│ └── src/
├── project-2/
│ ├── AGENTS.md
│ └── app/
└── worktrees/
└── project-1-feature-1/
├── AGENTS.md
└── src/
At this stage, I am showing worktrees/ only as a signal of scale. We will come back to practical use only in Chapter 12, where I will show you how to use git worktrees to work in many terminals on the same repository without getting in your own way or your agents’ way.
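If you prefer starting from folders rather than diagrams, a sketch like this scaffolds the same layout. Every name below is illustrative - adapt it to your own projects:

```shell
# Scaffold the example workspace layout; run inside an empty directory.
set -e
mkdir -p shared-runtime/agents
mkdir -p shared-runtime/skills/debugging shared-runtime/skills/refactoring
mkdir -p project-1/src project-2/app worktrees
touch shared-runtime/agents/planner.agent.md
touch shared-runtime/agents/reviewer.agent.md
touch shared-runtime/skills/debugging/SKILL.md
touch shared-runtime/skills/refactoring/SKILL.md
touch shared-runtime/workspace-defaults.instructions.md
touch project-1/AGENTS.md project-2/AGENTS.md
```

The files start empty; the structure is the point, not the content.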
One important naming detail here:
- AGENTS.md is the portable layer of repo-wide instructions and a good default starting point,
- .github/copilot-instructions.md is the Copilot-native layer of repo-wide instructions,
- *.instructions.md files are narrower rules for a specific path or file type,
- an agent is a single .agent.md file,
- a skill is a separate folder with SKILL.md and optional scripts or examples,
- a prompt file is a .prompt.md file, meaning a ready slash prompt in VS Code.
And now separately, the base user-level layer:
~/.copilot/
├── agents/
│ ├── planner.agent.md
│ └── reviewer.agent.md
├── skills/
│ └── debugging/
│ └── SKILL.md
├── instructions/
│ └── workspace-defaults.instructions.md
├── config.json
├── mcp-config.json
└── hooks/
That is the simplest layout a user can set up manually without an extra layer of publishing, symlinks, and runtime versioning.
If you work in VS Code, you have one more option: you do not have to rely only on the default ~/.copilot/. VS Code documentation lets you attach your own customization locations through settings.json. For instructions, that is chat.instructionsFilesLocations, and for the other layers there are analogous settings: chat.agentFilesLocations, chat.agentSkillsLocations, and chat.promptFilesLocations. In practice, that means you can plug in your own folder with instructions, agents, skills, or prompts without manually placing everything exactly in the default user-level paths.
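For example, a settings.json fragment along these lines plugs in a custom instructions folder. The path is illustrative, and the exact shape of these settings is worth double-checking against the VS Code docs for your version:

```jsonc
{
  // Illustrative paths - point these at your own shared folder.
  "chat.instructionsFilesLocations": { "~/dev/shared-runtime/instructions": true },
  "chat.promptFilesLocations": { "~/dev/shared-runtime/prompts": true }
}
```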
That distinction matters. The project repo is the source of truth for project context. ~/.copilot/ is the user-level runtime/config directory. In a more advanced setup, you can keep shared agents and skills in a separate folder or even a separate repo, version them normally in git, and then synchronize them into ~/.copilot/agents and ~/.copilot/skills so they work in every session and every repo. But you do not need to describe or build that mechanism in Chapter 1. Here the only thing that matters is the distinction between layers. In practice, that means:
- keep project rules mainly in AGENTS.md or .github/copilot-instructions.md,
- put more precise rules into .github/instructions/*.instructions.md,
- keep shared, multi-repo agents and skills outside the project repo and later synchronize them into ~/.copilot/agents and ~/.copilot/skills,
- use .github/agents/*.agent.md and .github/skills/<skill>/ when a project needs local additions specific only to that repo,
- save repeatable slash tasks in VS Code as .github/prompts/*.prompt.md,
- treat ~/.copilot/agents and ~/.copilot/skills as the user-level runtime for things shared across projects,
- and in VS Code, user-level .instructions.md files can live in ~/.copilot/instructions/.
After a fragment like that, the agent immediately understands that it is not working in a single folder thrown into git in a hurry, but in a broader environment where different directories have different roles and different levels of their own instructions. This is exactly the moment when the setup starts to look more like company infrastructure than a developer’s private notebook.
And here is a very conscious boundary. At this stage, the point is only to suggest scale:
- from one terminal you can grow to 7,
- part of the work can happen away from your local screen,
- some results can land while you are doing something else or simply sleeping.
But we are not going to map out the exact flows, agent selection tables, background mechanics, or who talks to whom underneath yet. That is the payoff of later chapters. Here the goal is that you leave Chapter 1 with the feeling: “okay, I can see the scale, but I already know how to take the first working step.”
Exercise
Draw a simple map of your workspace or repo:
- the main directory,
- the 3-7 most important subdirectories,
- the place for AGENTS.md,
- one sentence about what each important area is for.
That will be very good raw material for the later chapters.
You have just created the map the agent will be looking for throughout this course. Proof: you have a drawn workspace structure with at least 3 subdirectories and a place for instructions - the foundation for the layers that come next.
Summary
You now have not only setup, but also direction. You know what a small working foundation looks like, and you can feel that the scale grows from here. That is exactly the effect Chapter 1 is supposed to create.
Chapter exercise
Task: Build your AI coding environment from scratch and confirm that it works on a real project.
Deliverable:
- a working terminal with copilot,
- an AGENTS.md file in the repo,
- three agent answers based on the real context of your project.
Steps:
┌─────────────────────────────────────────────────────────────┐
│ Copilot CLI Setup - 6 steps │
├───┬─────────────────────────────────────────────────────────┤
│ 1 │ Terminal → Warp or your preferred terminal │
│ 2 │ Copilot CLI → install GitHub Copilot CLI │
│ 3 │ Repo → open the project repository │
│ 4 │ Files → create AGENTS.md │
│ 5 │ Fill → fill in at least 5 sections │
│ 6 │ Test → run copilot + ask 3 questions │
└───┴─────────────────────────────────────────────────────────┘
Success criteria:
- copilot --version returns a version.
- The file AGENTS.md has at least 40 lines of meaningful content.
- The agent can state your stack without guessing.
- The agent can quote the cardinal rules from your file.
- The agent can point to specific directories or files in response to a question about the project structure.
Verification:
# 1. Does the setup work?
copilot --version
# 2. Start the CLI
copilot
# 3. If needed, log in inside
/login
# 4. Check file length (Bash / WSL)
wc -l < AGENTS.md
# 5. Ask the test questions
# What is the tech stack of this project?
# What are the cardinal rules of this project?
# Describe the directory structure and tell me what I can find where.
After that sequence, you should see a version number, a running CLI session, and answers that refer to your file rather than guesses.
# PowerShell: count file lines
(Get-Content AGENTS.md).Count
In PowerShell, the expected result is a number of at least 40.
You have a working environment and an agent that reads your context instead of guessing. The first step is done - and it is not a small step. But AGENTS.md is only onboarding. The agent is still waiting for a real assignment. In the next chapter, you will create your first agent from scratch: from an empty .agent.md file to a working worker that reads your code, analyzes it, and returns something genuinely useful - in 30 minutes.
Summary
In this chapter, you did exactly what had to happen at the beginning: you laid the foundation. You have a working environment, your first AGENTS.md, your first test, and your first proof that an agent can read your project instead of improvising.