Use Q Developer to Upgrade Legacy Lambdas
- Paulo Srulevitch

Updating legacy code can be a mess. Dependencies break. Logging disappears. Things you thought were isolated end up cascading across environments. But with the right tools and a clear plan, it becomes a smooth, well-documented process you’ll want to repeat.
This post is a practical breakdown of how we migrated a critical AWS Lambda function previously running on deprecated Python 2.7 to Python 3.12. No guesswork. No downtime. We used Q Developer’s conversational interface, paired with MCP Servers, to make it happen in a matter of hours. The result: A fully automated, traceable, and repeatable migration path that now sets the standard for how we handle every legacy Lambda in our environment.
Before we move forward, let’s recap the basics:
What is Q Developer?
Q Developer is an AI-powered development assistant from AWS. Think of it as a CLI interface and conversational workspace all-in-one, designed to guide developers through cloud tasks in real time. You can ask it to scan your infrastructure, suggest CLI commands, create Jira tickets, and even generate deployment plans.
But what makes it powerful isn’t just the interface; it’s how it integrates with the Model Context Protocol (MCP) to manage logic, permissions, and tooling in a way that’s traceable and repeatable.
Combined with MCP, Q Developer becomes more than a CLI assistant: it’s an orchestration layer with memory, scope, and traceability, equally at home in fast-moving development environments where iteration speed and automation are critical.
The Challenge
Let’s start with context. We had a business-critical Lambda tied to core automation logic still on Python 2.7, which AWS deprecated years ago. That runtime is no longer supported, lacks security updates, and blocks compatibility with modern packages.
The goal was simple: update it to Python 3.12 and ensure we didn’t miss a single log, metric, or deployment step along the way.
But it wasn’t just about upgrading the Python version. We needed to:
Migrate with zero downtime
Preserve every log group, metric, and alarm
Avoid manual steps that introduce risk
Document the process for other teams to replicate
We needed more than just a manual walkthrough. We needed a guided, interactive migration with zero room for error. In short, we needed a framework, not just a one-off fix.
What Changed Everything: A Conversational + Context-Driven Approach
This migration wasn’t done with scripts stitched together in Notion or shell prompts pasted in Slack. We ran it all through Q Chat, the conversational CLI built into Q Developer, and orchestrated the entire process from planning to deployment.
What made it powerful?
Real-time, guided prompts — Q told us exactly what to do, and asked for input when needed.
Automated execution using Model Context Protocol (MCP) — every AWS CLI call or Jira ticket came from structured, context-aware logic.
Zero-tab-switching — logs, validations, and even file generation happened in the same window.
Live documentation — we walked away with a full execution log, already linked to a Jira ticket, and ready to share.
It felt like pairing with an expert who knew our cloud environment and workflows end-to-end.
Tools We Used
Our stack for this migration:
Q Developer + Q Chat
MCP Servers:
awslabs.core-mcp-server@latest
awslabs.aws-serverless-mcp-server@latest
jira-mcp connected via SSE endpoint
Python 3.12, requests, urllib3, and compatible packages
AWS CLI, bash, curl, jq
Zip packaging, local temp dir /tmp/lambda
CloudWatch Logs, custom metrics, alarms
Jira workspace with MCP-connected task tracking
This wasn’t a theoretical setup; we’ve since replicated it across multiple teams.
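For reference, wiring these servers into Q Developer comes down to a small MCP configuration file. The sketch below assumes the Q Developer CLI reads a global config at ~/.aws/amazonq/mcp.json and launches the awslabs servers with uvx; the Jira SSE URL is a placeholder, so adapt all three entries to your own setup and Q Developer version.

```bash
# Minimal sketch: register the MCP servers Q Developer should load.
# The path, the uvx launcher, and the SSE URL field are assumptions about
# a typical setup -- verify them against your Q Developer version's docs.
mkdir -p ~/.aws/amazonq
cat > ~/.aws/amazonq/mcp.json <<'EOF'
{
  "mcpServers": {
    "core": {
      "command": "uvx",
      "args": ["awslabs.core-mcp-server@latest"]
    },
    "serverless": {
      "command": "uvx",
      "args": ["awslabs.aws-serverless-mcp-server@latest"]
    },
    "jira": {
      "url": "https://jira-mcp.example.internal/sse"
    }
  }
}
EOF
```

Once the servers are registered, every tool call Q makes is routed through one of them, which is what gives the rest of the flow its traceability.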
The Flow: Step by Step
1. Identify & Plan
We began by prompting Q Chat to identify any functions still running Python 2.7. Within seconds, it produced a list — each one tied to invocation metrics. We flagged the top priority.
Q then:
Created a Jira ticket with autogenerated context
Proposed a migration timeline
Identified IAM roles, logging groups, and deployment targets
Added rollback options to the task list
This was the planning phase, but it was already executable.
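Under the hood, the discovery step maps onto AWS CLI calls you can reproduce yourself; a minimal sketch, with a placeholder function name and GNU date syntax:

```bash
# List every Lambda still on the deprecated runtime
aws lambda list-functions \
  --query "Functions[?Runtime=='python2.7'].[FunctionName,Runtime,LastModified]" \
  --output table

# Rank candidates by recent invocation volume (one function shown)
aws cloudwatch get-metric-statistics \
  --namespace AWS/Lambda --metric-name Invocations \
  --dimensions Name=FunctionName,Value=my-critical-fn \
  --start-time "$(date -u -d '7 days ago' +%FT%TZ)" \
  --end-time "$(date -u +%FT%TZ)" \
  --period 86400 --statistics Sum
```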
2. Prep the Workspace
Q created a secure local working folder, pulled the current function code via aws lambda get-function, unpacked it, and offered a dependency scan.
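In plain CLI terms, that pull-and-unpack step looks roughly like this (function name and paths are placeholders):

```bash
# Fetch the deployed package via the presigned URL Lambda returns
mkdir -p /tmp/lambda/src
CODE_URL=$(aws lambda get-function \
  --function-name my-critical-fn \
  --query 'Code.Location' --output text)
curl -sSL -o /tmp/lambda/function.zip "$CODE_URL"
unzip -o /tmp/lambda/function.zip -d /tmp/lambda/src

# Inventory the bundled dependencies for the compatibility scan
ls /tmp/lambda/src
cat /tmp/lambda/src/requirements.txt 2>/dev/null || true
```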
We got an immediate view of:
Runtime-specific issues
Deprecated imports
Library conflicts with Python 3.12
Then, Q proposed a test harness to validate behavior pre-migration. This alone saved hours.
3. Refactor with Confidence
Now came the real grit: transforming legacy code to Python 3.12.
Q Chat guided us through refactoring:
Flagging deprecated functions like wrap_socket
Replacing print with logging statements
Adding type hints and modern exception handling
We used pyupgrade, black, and mypy, all orchestrated through Q’s CLI wrapper. It even offered unit test scaffolding for risky segments.
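If you want to reproduce that tooling pass outside Q, the equivalent commands are simple enough; exact flags depend on the versions you have installed:

```bash
cd /tmp/lambda/src

# Rewrite Python 2 idioms to modern syntax
find . -name '*.py' -exec pyupgrade --py3-plus {} +

# Normalize formatting, then type-check before anything gets packaged
black .
mypy --ignore-missing-imports .
```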
4. Package & Deploy
Once validated, Q auto-generated a zip artifact for deployment. It hashed the build, signed the config, and prompted a dry-run against a shadow environment.
From there, we:
Ran integration tests
Updated IAM permissions
Swapped out the runtime
Pushed the function live
All logs, including deployment time and validation status, were sent to Jira.
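Stripped of the orchestration, the cutover itself is a handful of CLI calls, roughly like the following; the function name, alias, and paths are placeholders, and the alias swap is one way to keep the switch atomic:

```bash
# Package the refactored source
cd /tmp/lambda/src
zip -r /tmp/lambda/build.zip . -x '*.pyc'

# Move the function to the new runtime, then ship the code as a fresh version
aws lambda update-function-configuration \
  --function-name my-critical-fn --runtime python3.12
aws lambda wait function-updated --function-name my-critical-fn

NEW_VERSION=$(aws lambda update-function-code \
  --function-name my-critical-fn \
  --zip-file fileb:///tmp/lambda/build.zip \
  --publish --query Version --output text)

# Point the live alias at the new version only after validation passes
aws lambda update-alias \
  --function-name my-critical-fn --name live \
  --function-version "$NEW_VERSION"
```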
5. CloudWatch Log Migration
Old function versions had log groups we couldn’t afford to lose. Q scripted:
Group replication
Alarm reference updates
Retention policy cloning
It pushed logs from the old log group to the new one incrementally, tracking consistency with digest hashes.
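The log-group work maps onto a few aws logs calls; a simplified sketch with placeholder group names (the incremental copy and digest checks were driven by Q and aren’t shown here):

```bash
OLD_GROUP=/aws/lambda/my-critical-fn-py27
NEW_GROUP=/aws/lambda/my-critical-fn

# Recreate the group and clone its retention policy
aws logs create-log-group --log-group-name "$NEW_GROUP"
RETENTION=$(aws logs describe-log-groups \
  --log-group-name-prefix "$OLD_GROUP" \
  --query 'logGroups[0].retentionInDays' --output text)
# Skip this step if the old group never expired its logs
aws logs put-retention-policy \
  --log-group-name "$NEW_GROUP" --retention-in-days "$RETENTION"

# Starting point for repointing alarms that referenced the old group
aws cloudwatch describe-alarms \
  --query 'MetricAlarms[].AlarmName' --output table
```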
6. Validate in Production
Post-deploy, Q prompted verification:
Real-time metrics: latency, error rate
Debug logs enabled temporarily
Canary invocations to test edge behavior
It flagged one timeout during early invocation, which led us to optimize cold start by adjusting memory size. Q updated the Lambda setting and redeployed.
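The verification steps are just as scriptable; here is a sketch of how the canary and metric checks can be reproduced from the CLI, with placeholder names and an illustrative memory value:

```bash
# Canary invocation against the new runtime
aws lambda invoke \
  --function-name my-critical-fn:live \
  --payload '{"canary": true}' \
  --cli-binary-format raw-in-base64-out \
  /tmp/lambda/canary-out.json
jq . /tmp/lambda/canary-out.json

# Error count for the first minutes after cutover
aws cloudwatch get-metric-statistics \
  --namespace AWS/Lambda --metric-name Errors \
  --dimensions Name=FunctionName,Value=my-critical-fn \
  --start-time "$(date -u -d '15 minutes ago' +%FT%TZ)" \
  --end-time "$(date -u +%FT%TZ)" \
  --period 60 --statistics Sum

# The cold-start fix was a one-line configuration change (value illustrative)
aws lambda update-function-configuration \
  --function-name my-critical-fn --memory-size 512
```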
Results We Achieved
Lambda updated to Python 3.12
Zero downtime, zero rollback
Full log continuity — no loss
Jira ticket generated automatically
Complete documentation created in parallel
Deployment scripts ready for reuse
Why MCP Mattered
It Gave Us Speed
We automated repetitive CLI steps and parallelized tasks; code analysis ran while deployment prepped. That shaved hours off the timeline.
It Reduced Errors
No command mistyped. No step skipped. The AI handled the logic. We supervised, and the system delivered.
It Increased Clarity
We had live access to logs, metrics, and feedback — no waiting for dashboards to update or logs to sync.
It Documented Everything
Each action was recorded and linked to a Jira task. The trail was clean, complete, and reusable.
It Scaled Easily
The process is now reusable across other Lambdas. Log migrations were granular and controlled. Observability stayed intact after the move.
And the Ticket?
The Jira ticket was created via the MCP integration embedded in Q Developer. It included:
Purpose of the migration
Actions taken
Risks tracked
Success criteria
Log & metrics confirmation
It was clean, complete, and written while we were still deploying; no afterthoughts, no catch-up later.
One Last Thought
This isn’t just a story about updating a Lambda. It’s about what happens when you stop thinking of migrations as chores and start treating them like systems. The combination of Q Developer, MCP, and a well-thought-out flow transformed a legacy problem into an automated, auditable upgrade — and gave us a blueprint we can trust.
If you’re looking to modernize infrastructure without chaos, this approach works. We’ve proven it.
Want to scale this to your environment? Let’s talk. We’re ready to help — from prototype to production.



Nicolas Rossi, Solutions Architect
Martin Carletti, Cloud Engineer
Valentino Gabrieloni, Cloud Engineer