Why I use GPT as Claude Code’s Designated Hitter
How I add a second perspective inside Claude Code with a custom skill
A funny thing happens when you talk about AI in public long enough. People stop asking what you are building and start asking what you are using to build it.
Over the last few months, I have gotten the same question in different forms, over and over.
Should you only use Claude Code?
The short answer is that it is my primary tool, but it is not the only one.
The thing people miss about Claude Code
Claude Code is my primary driver. I spend around 99 percent of my time there. I pair it with Cursor when I need direct access to the codebase in an editor. That is the answer I usually give when people ask me about my setup.
What most people do not know is that I also use GPT-5.2 High.
I use it as a second brain that I can invoke when I hit a wall.
Yes, through Claude Code.
When you are stuck, the limiting factor is rarely Claude Code’s capabilities. It is perspective. You are anchored. Your assistant is anchored. You are both staring at the same tree, trying to understand the forest.
A different model, given a tight bundle of context, is often enough to loosen the knot.
So I built an escape hatch.
A Claude Code skill called askgpt.
When I invoke it, Claude packages up the relevant context. The error. The involved files. Any nearby code that matters. Plus whatever prompt I write. The skill sends that bundle to OpenAI. GPT responds. Claude takes that response into consideration, and I keep moving.
About 98 percent of the time, Claude Code handles everything. The remaining 2 percent is where I get stuck, and where Claude gets stuck with me. That is the segment this skill is for.
This post gives you all the files to install it.
How I think about tools
I tend to align myself with technologies in their ascendency, because that is where momentum, leverage, and real progress tend to concentrate. The best tools are rarely perfect. They are simply the ones doing the most useful work at the right moment in time.
Claude Code, in my opinion, is one of those tools.
I think of this setup, with a GPT skill, as the way a baseball team thinks about a designated hitter.
Claude Code plays the full game. It fields. It runs. It thinks across innings. It holds the context. It does the hard, unglamorous work.
GPT comes in fresh, off the bench, with one job.
Hit.
No history. No fatigue. Just a clean swing at the problem in front of it.
Most of the time, you do not need that swing. When you do, it matters that it is there.
Install
1) Choose where the skill lives
Pick one:
Personal skill (available everywhere):
mkdir -p ~/.claude/skills/askgpt/scripts
Project skill (only in this repo):
mkdir -p .claude/skills/askgpt/scripts
2) Set your OpenAI API key
macOS or Linux:
export OPENAI_API_KEY="sk-..."
To make it permanent, add the same line to ~/.zshrc or ~/.bashrc, then reload your shell.
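For example, assuming zsh (swap in ~/.bashrc if you use bash):

```bash
# Persist the key in the shell config, then reload it (zsh assumed)
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.zshrc
source ~/.zshrc
```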
3) Create the files
You are creating three files:
SKILL.md
requirements.txt
scripts/askgpt.py
Copy and paste exactly.
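When you are done, the skill folder should look like this (shown for the personal-skill location; the project version mirrors it under .claude):

```
~/.claude/skills/askgpt/
├── SKILL.md
├── requirements.txt
└── scripts/
    └── askgpt.py
```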
File: SKILL.md
Create this at:
~/.claude/skills/askgpt/SKILL.md
or .claude/skills/askgpt/SKILL.md
---
name: askgpt
description:
  Send curated project context and a user prompt to OpenAI GPT-5.2
  using high reasoning effort. Use when debugging or when a second
  opinion is needed.
disable-model-invocation: true
allowed-tools: Bash
---
# askgpt
Run this skill only when explicitly invoked by the user.
## Instructions
1. Confirm the user’s prompt if it is not already clear.
2. Collect relevant context:
   - error messages or stack traces
   - the files directly involved
   - nearby code required for understanding
3. Bundle the prompt and context into a single text payload.
4. Run the script and pass the payload through stdin:

   ```bash
   python3 scripts/askgpt.py
   ```

## Output

Return GPT's response verbatim.

A note on that front matter: `disable-model-invocation: true` is the key. It keeps the skill from firing constantly. You invoke it when you want it.
File: requirements.txt
Create this at:
~/.claude/skills/askgpt/requirements.txt
or .claude/skills/askgpt/requirements.txt
openai>=1.8.0
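As far as I can tell, the skill will not install its own dependencies, so make sure the openai package is available to whichever python3 runs the script. For example:

```bash
# Install the OpenAI SDK for the personal-skill location;
# adjust the path if the skill lives in a project repo instead
python3 -m pip install -r ~/.claude/skills/askgpt/requirements.txt
```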
File: scripts/askgpt.py
Create this at:
~/.claude/skills/askgpt/scripts/askgpt.py
or .claude/skills/askgpt/scripts/askgpt.py
#!/usr/bin/env python3
import os
import sys

from openai import OpenAI

# Model and reasoning effort can be overridden with environment variables.
MODEL = os.getenv("ASKGPT_MODEL", "gpt-5.2")
REASONING_EFFORT = os.getenv("ASKGPT_REASONING_EFFORT", "high")


def main() -> int:
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        print("OPENAI_API_KEY is not set", file=sys.stderr)
        return 1

    # Claude passes the bundled prompt and context through stdin.
    payload = sys.stdin.read().strip()
    if not payload:
        print("No input received on stdin", file=sys.stderr)
        return 2

    client = OpenAI(api_key=api_key)
    response = client.responses.create(
        model=MODEL,
        input=payload,
        reasoning={"effort": REASONING_EFFORT},
        max_output_tokens=1200,
        text={"verbosity": "high"},
    )

    # Prefer the convenience property when the SDK provides it.
    output_text = getattr(response, "output_text", None)
    if output_text:
        print(output_text.strip())
        return 0

    # Fallback: walk the structured output items for text content.
    chunks = []
    for item in (response.output or []):
        for content in (item.content or []):
            if getattr(content, "type", "") == "output_text":
                chunks.append(content.text)
    print("\n".join(chunks).strip())
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
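Before wiring it into Claude Code, you can sanity-check the script from a terminal by piping a throwaway prompt through stdin. This assumes OPENAI_API_KEY is exported in that shell:

```bash
cd ~/.claude/skills/askgpt
echo "Reply with the single word pong." | python3 scripts/askgpt.py
```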
Optional: make it executable
chmod +x ~/.claude/skills/askgpt/scripts/askgpt.py
How to use it
Restart Claude Code or open a new session.
Then write what you want as if you were messaging a strong engineer and add “ask gpt” to the prompt, along with:
the goal
the symptom
the error or behavior
what you already tried
which files seem relevant
Claude will package the context and run the script.
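If it helps to see the shape, a prompt might look like: “Ask gpt: the goal is to stop the payment webhook from timing out under load. It 504s on large payloads, I already raised the client timeout, and the relevant files are webhooks.py and payments/client.py.” The specifics there are invented; the structure is the point.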
Closing
I am going to keep talking about Claude Code because it is still where I spend my time. That is not changing anytime soon.
This skill is a small addition that makes my workflow more resilient. When the conversation gets long, when the codebase is messy, when the bug is stubborn, it gives me a second perspective without changing my environment.



What other skills do you rate?