Hit the Ground Running: Strategies for Technical Leaders to Accelerate Onboarding

Ramping up as an engineering leader


Technical leaders have to ramp up in a new area before they can work effectively, whether they are managers or individual contributors. This happens in various situations, such as when you change jobs, transfer teams, get promoted, change roles, or begin a product or project in a new domain. You will likely need to become familiar with many areas of knowledge.

Depending on your role, this may include software products (code), product roadmaps, company vision, mission, strategy, market and competitive environment, customer base, organizational and team culture, the teams and individuals involved, processes, operational tools, development environments, CI/CD setups, IaC automation, and sometimes a new programming language.

Focus your efforts. State your purpose as clearly as you can at the beginning. “Begin with the end in mind” is habit #2 in Stephen Covey’s The 7 Habits of Highly Effective People (2013). For example, “Identify the top challenges and opportunities in our organization that need increased staffing or resources” or “Define the system architecture changes needed to address recent scaling and reliability issues.” An overlapping concept is part of the McKinsey problem-solving framework, “Solve at the first meeting with a hypothesis” (Stareva, 2018). According to The McKinsey Mind, “using an initial hypothesis to guide your research and analysis will increase both the efficiency and effectiveness of your decision-making” (Rasiel and Friga, 2001). You should have one or more key hypotheses that you seek to either prove or disprove objectively with the time and resources available. These should be actionable in terms of making decisions that will further both the goals of the business and your career.

I think of being in ramp-up or research mode as temporarily being an investigative journalist. You must gain working knowledge and reach conclusions objectively, accurately, and efficiently to make meaningful decisions and contribute effectively. This leads us to the five W’s and one H: who, what, when, why, where, and how. This is an established mental model in journalism and rhetoric (Five Ws, 2024). For business and engineering purposes, we should start with who, what, and why. What are we trying to do? Why do we think that is important? Who is involved, and who do we need to talk to for their specific expertise or influence?

Then we get into the when and how. Where is not always relevant, but can be in terms of where you hire staff or what regions you market a product or run a service in. How is primarily the job of engineers and technical leaders to figure out after the what, who, and why are (mostly) clear. When is generally a joint exercise between engineering and product in balancing scope and risk versus time and cost.

Why: Organizations exist to create some type of change in the world. At the bare, cynical minimum, businesses seek to make a profit for their investors or owners, but hopefully they also have some values and goals beyond profit. Employees in organizations have their own reasons for being there that vary in level of alignment with the team and the organization as a whole. Ultimately, none of those reasons will be accomplished if you do not deliver value to customers through your products or services.

Trust, but verify. Remember the origin of the information and keep a rough mental score for the weight of each input. Score things highest that you learned by direct observation, a bit lower for a document that has been created and reviewed by knowledgeable people, and less for discussion with a subject matter expert, etc. This is basically like PageRank for fact-gathering. Do I know this because I directly inspected the code implementing something until I understood it? Do I know this because a technical specification said so at the start of the implementation of the project? Do I think this because a team’s tech lead, product manager, or engineering manager told me so? If this came from another person, do they have a reason for significant bias? Are they well-informed? Is their information likely to be current or outdated? Do I have a bias regarding them or the information they shared?
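To make the weighting idea concrete, here is a minimal sketch of a scoring function. The source categories, weights, and discount factors are illustrative values of my own choosing, not a formal methodology:

```python
# Illustrative sketch: weight each piece of information by how it was obtained.
# The categories and weights below are arbitrary examples, not a formal method.

SOURCE_WEIGHTS = {
    "direct_observation": 1.0,   # you inspected the code or behavior yourself
    "reviewed_document": 0.8,    # a spec created and reviewed by knowledgeable people
    "expert_interview": 0.6,     # a tech lead, PM, or EM told you
    "secondhand_report": 0.4,    # someone relaying what they heard
}

def confidence(source_type: str, is_current: bool, known_bias: bool) -> float:
    """Return a rough 0-1 confidence score for a single input."""
    score = SOURCE_WEIGHTS.get(source_type, 0.2)
    if not is_current:
        score *= 0.7   # discount information that may be outdated
    if known_bias:
        score *= 0.8   # discount for a known conflict of interest
    return round(score, 2)

# Example: an up-to-date spec from an author with no obvious bias
print(confidence("reviewed_document", is_current=True, known_bias=False))  # 0.8
```

The point is not the particular numbers but the habit: every fact you carry around should come tagged with how you learned it and how much you should trust it.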

Start with breadth first search. Go broad until you know what the major areas of problems and changes are. Build a solid understanding of the high-level context before you circle back to go deeper into more specific areas so that you do not get stuck in the weeds focused on something that is not a critical problem or important opportunity before you have the big picture. You can classify this as avoiding premature optimization. Once you identify areas that need more careful attention, then recommend further discovery work by others or go deeper yourself. Prioritize areas of interest to decide what to do yourself, recommend for others to investigate, or delegate and oversee.
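The breadth-first idea maps directly onto the classic graph algorithm: cover every area at one level of depth before descending to the next. A minimal sketch, using a made-up knowledge map:

```python
from collections import deque

# Hypothetical knowledge map: each topic points to its subtopics.
topics = {
    "org": ["teams", "roadmap"],
    "teams": ["team-a", "team-b"],
    "roadmap": ["q3-goals"],
    "team-a": [], "team-b": [], "q3-goals": [],
}

def survey_order(root: str) -> list[str]:
    """Visit topics breadth-first: all high-level areas before any deep dive."""
    order, queue, seen = [], deque([root]), {root}
    while queue:
        topic = queue.popleft()
        order.append(topic)
        for sub in topics.get(topic, []):
            if sub not in seen:
                seen.add(sub)
                queue.append(sub)
    return order

print(survey_order("org"))  # ['org', 'teams', 'roadmap', 'team-a', 'team-b', 'q3-goals']
```

Notice that both teams and the roadmap are surveyed before any individual team is examined; depth-first ramp-up would instead burrow into team-a before ever seeing the roadmap.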

When you begin learning something, it is more difficult to be self-directed, but as you progress, you should be making the decision of where to go deeper based on what you have already learned. Typically, you should start with a team’s onboarding documents, if available, and with meeting a few of the key leaders on the team. In a software engineering setting, I usually want to talk to the engineering manager, the tech lead for the team, the product manager, and the technical program manager. Sometimes there is not someone in all these roles, or more than one, or they are spread across teams, so adapt as appropriate.

Other sources of information I usually rely on include reading system design documents, technical specs, product specs, business requirements documents, planning documents, and PR/FAQs. For a more detailed list of possible sources, see the appendix to this article.

Find the controversies and areas without consensus. One question you can ask to surface the debated areas is an interview question from Peter Thiel’s book Zero to One: Notes on Startups, or How to Build the Future (2014): “What important truth do very few people agree with you on?” When learning about a new team, product, project, or system, adapt the question to your specific focus, for example by asking what widely held assumption about this system might be wrong.

Another area of importance is “abandoned buildings,” meaning system components or processes that have little effective ownership either due to personal or organizational changes or just consistently losing out in prioritization. Another version of this is what I call “haunted houses,” which are places in the code or systems where engineers are afraid to make changes because they are poorly understood, hard to test, or full of some other kind of sharp edges.

Use the information actively. Summarize it, update the documentation, and write your own reference material. At the end of these exercises, I typically write up some kind of summary or position paper on where things stand, accompanied by a list of references to other documentation, code, issues, and other references.

If you do not need to do this for a review, you can instead do the exercise to augment and refresh onboarding documentation for the next person to join your team or project.

For software systems and technology projects, you want something akin to a fractal level of understanding, from the high-level architecture and business context down to the finer details of the code, infrastructure, and logs.

For example: code, tests, build system, deployment, infrastructure, observability, monitoring, system design (component or service level), data stores, data schemas, communication APIs, messaging and streaming systems, data pipelines, and finally the top, high-level architecture. None of this makes sense without the business context of the user’s use cases, the business requirements, and the competitive environment.

Use the tools available to speed up the process.

  • Coding assistants like Copilot, if available
  • Tools like Sourcegraph Cody
  • IDE features for looking up declarations of and references to symbols and generating class diagrams
  • Tracing tools that can help you visualize the flow of service calls in online systems

Avoid common pitfalls. According to Watkins in his work The First 90 Days, it is crucial to follow a comprehensive framework because what works well in one situation may not work in another (Watkins, 2013). Watkins goes on to list some common traps to avoid based on a study interviewing experienced leaders. Two of these are “sticking with what you know” and “falling prey to the action imperative” (Watkins, 2013, p. 5). These are easy tripwires to hit due to our nature as humans. We feel a need for activity and seek areas of familiarity when we are anxious or stressed, as is often the case in a new role.

Planning your research

Discovery and ramp-up, especially when focused on a specific goal, are fundamentally research processes. You should put some structure around the project management aspects. Once you have defined your primary goal, work backwards from how long you have for the discovery phase to reach conclusions, make key decisions, and take the necessary action. This may be a scheduled review date, the end of some planning cycle, or just a target end date you or others set at the start. If you do not have a date, give yourself one and use it to scope things appropriately.

Break down the project into milestones. Here are some suggested milestones, along with an idea of what work you would complete for each. Set target dates for the milestones to track progress.

  1. Identify key stakeholders. Determine the individuals or groups who have a vested interest in your success or who possess knowledge essential to your role. This may include team members, cross-functional partners, senior leaders, customers, or subject-matter experts.
  2. Find and prioritize sources. Enumerate the primary sources you already know about and plan which ones to focus on. These could include technical specs, project planning documents, issue trackers, code reviews, source code, user interfaces, customer feedback, individuals to interview, and other types of artifacts. See the appendix for a list of possible sources. The availability and applicability of these vary, and their prioritization depends on the situation.
  3. Gain an overview. This should result in a high-level understanding of the situation, key problems, and opportunities. Talk to key players and examine high-level documentation until you reach an understanding and gain conviction in your goals for this project. You also need to identify your initial sources, both interviewees and documents.
  4. Generate hypotheses. You should settle on one or more hypotheses based on early findings. These should be chosen so that reaching a conclusion about them will allow you to make the most important, actionable decisions for your goals and situation.
  5. Go deep into the target areas. Identify additional sources to go deeper. Who will you interview, and what sources will you examine? These should provide evidence for or against your working hypotheses.
  6. Create a draft report. State your conclusions and reasoning. Get feedback from others. Start with near coworkers, but then move outward intentionally, including seeking out those likely to push back against your conclusions to pressure-test your reasoning. If someone has a concern, capture it directly in your document in a way that matches the objective facts available.
  7. Get buy-in. You need to obtain the support of key decision-makers and stakeholders to execute effectively. This should include project sponsors; how high-level depends on how big a project is. It should also include key partners that your plan depends on for execution. If you are going to a review or approval meeting of some kind, make sure you have key people aligned to lend their support and expertise during the discussion; you want to have the votes when you go into the room.
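Working backwards from a fixed end date, the milestones above can be given target dates with a quick script. The dates and time allocations here are made-up examples; adjust the shares to your own situation:

```python
from datetime import date, timedelta

# Hypothetical example: work backwards from a scheduled review date,
# allocating a fraction of the available time to each milestone.
start, review = date(2025, 9, 1), date(2025, 9, 29)  # four weeks total
total_days = (review - start).days

plan = [
    ("Identify key stakeholders", 0.10),
    ("Find and prioritize sources", 0.10),
    ("Gain an overview", 0.20),
    ("Generate hypotheses", 0.10),
    ("Go deep into target areas", 0.30),
    ("Create a draft report", 0.10),
    ("Get buy-in", 0.10),
]

elapsed = 0.0
for name, share in plan:
    elapsed += share
    target = start + timedelta(days=round(total_days * elapsed))
    print(f"{target}  {name}")
```

Even a rough schedule like this keeps the deep-dive phase from silently consuming the whole window and leaving no time for the draft and buy-in steps.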

Conducting Interviews

Being able to interview people, establish a professional relationship of trust and mutual respect with them, and use their knowledge and viewpoints to inform your own understanding is a critical skill for ramping up quickly. No one will call this an interview unless you are working for a consulting firm, but this is exactly what it is. You are interviewing them for information they have that you need.

Interviews are a way to get the big picture and find out the interesting topics more quickly, but they also come in a wrapper of everyone’s individual agendas and biases. Even the most scrupulous, well-meaning professional has some unconscious bias and self-interest.

Contact them in advance to schedule a time to talk. Tell them enough that they understand why you want to talk to them and who you are if you have not met. If they have an administrative assistant or other process, try to follow it to be considerate.

Introduce yourself properly, and be direct about the goals you have for the interview. Let them know what is or is not going to be treated as confidential or only used anonymously. When you wrap up, make sure to thank them for their time and let them know how to contact you if they think of something later they want to share.

Listen more than you talk! If you remember nothing else, retain this, please. Ask open-ended questions and resist the urge to chat or talk very much. Use active listening methods like verbal acknowledgement, eye contact, positive body language, and restating what you have heard for clarity.

Do not record them without permission. I generally take notes rather than record meetings because recording tends to make people less comfortable, which can make it harder to get to the truth.

Think about the trust and professional relationship you want to establish or strengthen, not just the immediate objectives. You want to make allies rather than enemies whenever possible, but do not be disingenuous or obsequious. People will smell it. Be authentic, but do not focus too much on the short-term over the long-term. You want people to respect you and be willing to share important information with you that may be difficult or contentious.

Suggested Questions for Interviews

These are some specific questions that I find useful.

  • What are you focused on currently?
  • What are the top priorities for your team in the upcoming quarter?
  • What is your career background?
  • What do you think our key opportunities are?
  • What do you think the biggest challenges are to the team, project, or business?
  • What scares you most about this product, project, launch, or situation?
  • What do you think should be the highest priority right now?
  • How are things going operationally?
  • How is your work-life balance?
  • What are your most and least favorite parts of your role?
  • Who else should I be talking to that I might not have already?
  • What documents would you recommend I read?
  • What did I not ask you that I should have?

Drafting and discussing your conclusions

Draft a report on your conclusions. If you are required to produce a specific type of document, adapt this material to that format; otherwise, create a document structured to support the process. You need to clearly explain your goals, methods, evidence, and conclusions.

Finalizing and Aligning

Use this draft document to gather feedback from others. Do they agree with your conclusions? Are there any specific concerns regarding the methods used or the reliability of your information sources? Do they think you have missed something? Is your presentation clear? This has two goals. The primary aim is to seek the truth by collecting a range of viewpoints and constructive criticism to ensure a comprehensive understanding. The second objective is to establish alignment and consensus on your findings. This ensures that when you communicate them widely and propose actions, you have the necessary support to implement them effectively.

Key Takeaways

  1. Define clear goals to guide your onboarding process.
  2. Gain broad context before diving into specifics.
  3. Use a structured research approach, including interviews and document review.
  4. Evaluate the reliability and potential bias of information sources.
  5. Identify organizational controversies and knowledge gaps.
  6. Leverage tech tools to accelerate learning.
  7. Avoid common pitfalls like sticking to familiar areas or rushing to action.
  8. Plan your onboarding with specific milestones and deadlines.
  9. Listen actively during interviews and ask open-ended questions.
  10. Seek stakeholder alignment on your findings and conclusions.


Continuous learning and growth are essential for success in our fast-moving technological landscape. Ramping up in a new role can feel overwhelming. Use a structured approach and define clear goals to guide your learning and help you reach a working understanding of the systems, processes, and people involved more quickly. Combining information sources and collaboration with key stakeholders will accelerate the process.

As you begin your next leadership role or challenge, put these strategies into action. Start by clearly defining your goals and creating a structured plan for your first 30, 60, and 90 days. Engage with your team, conduct informative interviews, and dive into the most relevant documentation.

Effective onboarding is not just about personal success—it’s about quickly positioning yourself to add value to your team and organization. Share your experiences and lessons learned with your coworkers or the broader community. What strategies have worked best for you? What challenges have you faced?


Covey, S. R. (2013). 7 Habits of Highly Effective People: 25th Anniversary Edition. United States: Turtleback.

Five Ws. (2024, June 2). Wikipedia. https://en.wikipedia.org/wiki/Five_Ws

Max. (2023, September 21). Consulting Discovery Phase: Guidelines and Resources. Management.Org. https://management.org/consulting/discovery.htm

Rasiel, E. M., and Friga, P. N. (2001). The McKinsey Mind. United States: McGraw Hill LLC.

Stareva, I. (2018, May 31). 8-step framework to problem-solving from McKinsey. Medium. https://medium.com/@IliyanaStareva/8-step-framework-to-problem-solving-from-mckinsey-506823257b48

Thiel, P., and Masters, B. (2014). Zero to One: Notes on Startups, or How to Build the Future. United States: Crown.

Watkins, M. (2013). The First 90 Days, Updated and Expanded: Proven Strategies for Getting Up to Speed Faster and Smarter. United States: Harvard Business Review Press.

Young, S. H. (2024). Get Better at Anything: 12 Maxims for Mastery. HarperCollins UK.

Appendix A: Sources

Below is a list of possible information sources you can use to learn more about the people, processes, products, and systems relevant to your role. The availability and applicability will vary, and which ones you should prioritize also depend on your goals and the situation.


People and Processes

  • Interviews with leaders, team members, and subject-matter experts
  • Terminology glossaries
  • Process and policy documents


Business and Product

  • Compliance requirements documents, audits, or checklists
  • Business metrics, analytical dashboards, and reports
  • User analytics (products like Segment or Google Analytics)
  • UI and UX designs, wireframes, and mockups (and use the product itself when available)
  • Prior product/feature launch trackers and checklists


Project Management

  • Project planning documents, issue backlogs, and roadmaps
  • Risk management analysis
  • Sprint planning artifacts


Engineering

  • Engineering/technical specifications
  • System design documents
  • Architecture decision records (ADRs)
  • Architecture diagrams
  • RFCs
  • Engineering review minutes
  • Source code: use modern IDEs and tools like Cody to build understanding quickly
  • Pull Requests/Code Reviews
  • API definitions (with tools or by looking at definitions like gRPC protobuf definitions or OpenAPI)
  • Data schemas in databases, whether SQL or NoSQL
  • Data dictionaries
  • Test coverage and CI-run reports
  • Unit and integration tests, end-to-end tests, synthetic tests
  • Deployment automation pipelines and release processes

Users and UX

  • Customer support tickets or requests
  • User experience research reports
  • User research, user interviews
  • Customer feedback and reviews


Operations

  • Postmortems
  • Trouble ticket and automated alert history
  • Operational/production readiness reviews
  • Runbooks and playbooks
  • Threat models and security reviews
  • Monitoring dashboards
  • SLO, SLA, and SLI definitions
  • Observability data such as traces, metrics, logs, real-user monitoring, and client-side instrumentation

Appendix B: Due Diligence

Here are some suggested areas to ensure you cover all the important aspects when trying to gain a working understanding of a new area or organization.

Organizational, customer, business, and product context

  • What is the organization’s mission?
  • How do they plan and track work?
  • What is on their roadmap?
  • How is the organization structured?
  • What is going well? And poorly?
  • What are the key products and/or services they provide?
  • Who is the target market?
  • What are the primary use cases and user journeys being addressed?
  • Who are the key competitors?

People and Teams

  • Who are the key players?
  • What does the org chart look like?

Design and Decisions

  • Is there a process for reviewing product, technical, and engineering plans and specifications?
  • What documentation is available on the current system’s design and operations?
  • Are notes or minutes kept? Are decisions documented for future reference?

Risk Management

  • Are there business continuity plans?
  • Are there runbooks or playbooks for operational issues?
  • Are there documented procedures for possible security incidents?
  • Is there a common standard and set of expectations for incident management procedures?
  • Are there auditing mechanisms in place for security, compliance, privacy, and other concerns?

Data Handling

  • Where is the data stored?
  • Is data encrypted at rest? In transit?
  • How are encryption keys managed?
  • What controls are in place for PII?
  • How are backups handled?
  • Is there a business continuity plan?
  • Which data-handling regulations and standards apply, such as GDPR, SOX, or PCI-DSS?

Production and Operations

  • Test Coverage
  • Automated Testing
  • Continuous Integration
  • Releases
  • Deployments
  • Infrastructure (IaaS, cloud provisioning, etc.)
  • Automation of operational tasks
  • Technical documentation
  • User documentation
  • Experiments and dark launches
  • Feature flags and A/B tests

Customer and public sentiment

  • If a system has external users, check the discussion on external forums and social media: X, LinkedIn, Reddit, Quora, Medium, StackOverflow, and other industry- or topic-specific forums as appropriate.
  • Customer reviews and trouble tickets are also useful sources.