Adapting to AI: Insights from a Project Manager in Game Development

I became a project manager in a game company in 2021 and witnessed how, within a single company, production processes evolved in parallel with the rapid emergence of new AI tools — from entirely manual workflows to significant portions of code being validated or even generated by neural networks.

Major game studios are reacting to these shifts by forming partnerships with dedicated AI contractors who now offer turnkey solutions for production support.

This led me to consider whether the current enthusiasm surrounding AI assistants is justified, and whether any of these tools could meaningfully transform my own profession as a producer or project manager.

This experiment is divided into two parts, each addressing a separate direction:

  1. Generative AI: Optimizing art production pipelines using LLM systems. This first topic is the focus of the article you are reading right now.
  2. Agentic AI: Streamlining production workflows by constructing a custom AI agent intended to support or partially replace the PM/producer role. This will be discussed in the next article.

In both cases, I will compare the results with outcomes that my team and I previously achieved manually in equivalent real-world scenarios. The full case study that I will use for comparison is available here. However, I will reference the key concepts relevant to this research directly throughout the text.

As I finish writing this introduction, I still do not know what conclusions I will reach by the end of the study — but I intend to present all findings transparently.

No stone-throwing, please. Share your thoughts instead.

The Task

The task was straightforward: within an ongoing AAA game production, a cosmetics department did not yet exist, and it needed to be created from scratch. At the same time, we were expected to optimize the existing art pipeline and ultimately deliver three artefacts:

  • a new art production roadmap;
  • a redesigned Jira task structure (with descriptions and dependencies);
  • updated task estimations that would allow the team to meet the publishing requirements.

Not knowing where to begin, I started with the simplest step — defining roles.

Picture 02. Defining roles

Immediately after this, the newly introduced assistant requested the following data:

| | WHAT? | WHY? |
|----|----|----|
| Overall project context (decision framework) | Project type, project stage, key milestones, quality / visual target | Without this, it’s impossible to understand what can be optimized and where gains can come from. |
| Current art team structure | Team composition (roles), team skill level, geography / time zones, resource availability | Necessary to identify capacity limits and bottlenecks. |
| Current art production pipeline (As-Is) | How art is currently produced (step by step), who owns each step, where delays occur, how approvals are handled | The key foundation for any meaningful optimization. |
| Current roadmap (As-Is) | Existing roadmap, how it was built (content-driven vs. capacity-driven), what currently does not work | Without this, it’s impossible to propose a correct new roadmap. |
| Task structure (As-Is) | How tasks are currently created and formatted, example tasks, dependencies between tasks | This is the foundation for restructuring. |
| Estimations and factual data | Planned estimates, actual execution data (last 2–3 months), frequency of rework | The most valuable input for producing accurate estimates. |
| Studio constraints and policies | What cannot be changed, what can be changed | To avoid proposing unrealistic changes. |
| Optimization goals (very important) | ⏱ Speed up production, 🎯 increase predictability, 💸 reduce cost, 🧠 reduce load on leads, 🔁 decrease the amount of rework | The primary optimization objectives. |

We declined to specify individual team members. The primary goal was to determine whether the model could optimize the pipeline itself using only the structural information already available, without relying on team size. After all, if you have a team of one hundred people, they can execute even a poorly structured workflow simply through brute force.

Once the information and the main objective — doubling the cosmetics delivery volume by 2027 — were provided, the assistant summarized:

Picture 03. Existing pipeline review

With that — and a pessimistic note from the model, “This pipeline can’t be optimized. It needs to be changed” — we proceeded to the next phase.

Artefacts

New Art Roadmap

The model produced the following formulation of our high-level production plan and the corresponding pipeline update, which it deemed optimal:

Picture 05. High-level production plan

After clarifying the complexity tiers of cosmetic items, it also proposed an optimal production timeline aligned with the expected volume of assets and delivery deadlines.

| Tier | Classification | Target tempo |
|----|----|----|
| Tier A | 80–99% geometry changes | 7–8 weeks |
| Tier B | 40–50% geometry changes | <6 weeks |
| Tier C | 0% geometry changes, 100% textures | 2 per month |

Following several iterations and refinement questions—during which the model, as usual, expressed enthusiastic appreciation for the quality of the queries—we arrived at a redesigned art pipeline for a Tier A skin:

Just as an example, here is what the pipeline we created manually looked like:

Positive observations:

  • The model independently decomposed the pipeline into logical production phases.
  • It identified task interdependencies, which I later visualized in Miro via connectors.
  • It proposed key “bottleneck tasks,” many of which we had previously struggled with.
  • For each task, it specified the expected output, the task owner, and the reviewer—elements without which the process cannot advance.

Negative observations:

  • Naming conventions were suboptimal (although easily adjustable to internal standards).
  • Some tasks appeared better suited as subtasks—and vice versa—yet this was readily correctable manually.

Overall, the resulting structure was a strong foundation for moving on to the next stage.

JIRA

The model then proposed a Jira configuration structured as follows.

Positive observations:

  • It assigned responsible roles autonomously for each task.
  • It formulated acceptance criteria without prompting.
  • It indicated the previous stage to which work should return if an approval step failed.
  • It specified what must be prepared before the task can begin.
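
Taken together, those observations describe a fairly complete task template. As an illustration only, here is a minimal sketch of what such a standardized Jira-style task could look like expressed in code; the field names and example values are my assumptions, not the model's actual configuration.

```python
from dataclasses import dataclass

@dataclass
class CosmeticTask:
    """Illustrative task template (hypothetical fields, not the model's actual output)."""
    summary: str
    owner_role: str                 # responsible role, not a named person
    reviewer_role: str              # who signs off the approval step
    prerequisites: list[str]        # what must be prepared before work can start
    acceptance_criteria: list[str]  # conditions for passing the approval gate
    fallback_stage: str             # where work returns to if the approval fails

# A hypothetical example of how one task in the pipeline might be filled in.
high_poly_task = CosmeticTask(
    summary="Tier A skin: high-poly sculpt",
    owner_role="Senior 3D Artist",
    reviewer_role="Art Lead",
    prerequisites=["Approved concept sheet", "Blockout mesh signed off"],
    acceptance_criteria=["Silhouette matches concept", "Panel breakup follows the modular kit"],
    fallback_stage="Blockout review",
)

if __name__ == "__main__":
    print(high_poly_task.summary, "->", high_poly_task.reviewer_role)
```

In practice this would live as Jira fields rather than code; the point is that every task carries an owner role, a reviewer, entry criteria, and a defined fallback stage.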

Estimation

This was the point where a real disagreement emerged. In the first iteration, the model proposed the following estimates:

However, these numbers felt overly optimistic — or, more precisely, unrealistic. One day allocated for testing and ten days for a final mesh seemed insufficient. Additionally, the overall production estimate was approximately 1.5× lower than our internal calculations: 5.5 weeks instead of 8 weeks.

I suspected this discrepancy stemmed from the abstract nature of the original request. I therefore attempted to refine the question without violating NDA constraints — by identifying a publicly available, widely known reference that closely resembled our real production case.

This resulted in a revised estimation table:

At this point, the estimates were already very close to our internal calculations.

The remaining issue was the lack of a unified estimation formula. The calculations implicitly assumed a “standard” bipedal mech. But what happens when dealing with titans of different sizes or proportions? What if certain units contain significantly more panels or modular components? We needed a way to compare disparate meshes against a shared baseline.

“Could we simply take the in-engine volume of a skin and link estimations to that metric?” I asked.

“Production time does not correlate with surface area,” the model replied, “but with the number of design decisions per unit of surface.”

It then proposed two steps:

1. Formalizing the concept of BTSU.

1.0 BTSU = BT-level shell, which has:

✔ ~80–100k tris (LOD0)
✔ ~25–40 separate elements
✔ ~30 m² of surfaces (conditional)
✔ ~3–5 materials, 4k textures
✔ photoreal detail density

In other words: BTSU = a normalized amount of work calibrated to real BT-level complexity (tris, elements, materials, detail density), not just surface area.

2. Introducing a Simple BTSU Estimation Formula (for Leads).

The coefficients used in this formula raised concerns, so I asked the model to explain them in detail.
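
The formula itself was delivered as a screenshot, so I will not reproduce it here. As an illustration of the general shape such a lead-facing estimate could take, here is a minimal sketch with purely hypothetical coefficients and reference values:

```python
def estimate_btsu(tris_k: float, elements: int, materials: int, detail_density: float) -> float:
    """Illustrative BTSU estimate: normalize each driver of work against the
    1.0 BTSU reference shell (~80-100k tris, ~25-40 elements, ~3-5 materials,
    photoreal detail density). Coefficients are hypothetical placeholders."""
    REF_TRIS_K = 90.0      # midpoint of the 80-100k tris reference
    REF_ELEMENTS = 32      # midpoint of 25-40 separate elements
    REF_MATERIALS = 4      # midpoint of 3-5 materials
    # Hypothetical weights: how much each factor contributes to "design decisions".
    W_TRIS, W_ELEMENTS, W_MATERIALS, W_DETAIL = 0.3, 0.4, 0.2, 0.1
    return (W_TRIS * tris_k / REF_TRIS_K
            + W_ELEMENTS * elements / REF_ELEMENTS
            + W_MATERIALS * materials / REF_MATERIALS
            + W_DETAIL * detail_density)

def estimate_weeks(btsu: float, weeks_per_btsu: float = 7.5) -> float:
    """Convert BTSU into calendar weeks; 7.5 weeks per 1.0 BTSU is my assumption,
    picked to roughly match the Tier A target tempo, not the model's figure."""
    return btsu * weeks_per_btsu

# A hypothetical oversized titan with more panels than the reference shell.
big_titan = estimate_btsu(tris_k=140, elements=55, materials=6, detail_density=1.0)
print(f"{big_titan:.2f} BTSU -> {estimate_weeks(big_titan):.1f} weeks")
```

The specific weights matter less than the principle: a lead can compare dissimilar meshes against one calibrated baseline instead of estimating each skin from scratch.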

Next, I asked the model—using the already established structure—to separately calculate the production cost and timeline for a Tier A weapon for a similar skin (this was intended to be a standalone feature for that level of cosmetics). The model first calculated each production stage individually:

Then validated the result using the derived formula:

Finally, it produced the total estimated time required to deliver a Tier A skin with an accompanying weapon.
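
Since the per-stage numbers were again provided as tables in the chat, the snippet below only illustrates the mechanics of that cross-check: sum the individual stage estimates, then compare the total against what the normalized-unit formula predicts. All values here are hypothetical.

```python
# Hypothetical per-stage estimates (working days) for a Tier A weapon; not the model's numbers.
stages = {
    "Concept & references": 3,
    "Blockout": 2,
    "High-poly": 6,
    "Low-poly & UVs": 4,
    "Baking & texturing": 5,
    "Engine integration": 2,
    "QA & polish": 2,
}

stage_total_days = sum(stages.values())

# Cross-check against the normalized-unit estimate: assume the weapon scores
# ~0.45 BTSU and that 1.0 BTSU corresponds to ~55 working days (both assumptions).
formula_days = 0.45 * 55

print(f"Stage-by-stage total: {stage_total_days} days")
print(f"Formula estimate:     {formula_days:.0f} days")
print(f"Delta:                {abs(stage_total_days - formula_days):.0f} days")
```

When the two totals diverge badly, either the stage breakdown is missing work or the formula needs recalibration, which is exactly the early warning a lead wants.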

Results

As a result of this work, I obtained three mandatory artifacts generated with the assistance of the model:

  • a new Art Roadmap;
  • a standardized Epic and task template for Jira;
  • production estimates that only required adding a 15% contingency buffer.

In addition to these, the model proposed two supplementary artifacts during the process:

  • the BTSU estimation formula;
  • a tool concept titled “Titan Armor Production Templates,” intended to reduce production time. (This is particularly notable: when we later redesigned this pipeline manually, the first extension we began developing was precisely such a library of reusable templates and previously created assets.)

And most importantly, we received a new goal for ourselves.

At this point, the only remaining step was to run several test productions and analyze what caused the discrepancy between the estimated three months and the 4.8 months our latest production actually took—and to optimize it where possible (acknowledging that every production has its own constraints, including technical ones). This will be the subject of the next article.

Instead of a conclusion

This section does not represent a definitive conclusion for the entire study. However, I believe the following data is important to share.

The manual version of this work involved four team members: a producer, a project manager, an art lead, and a principal 3D artist. While the producer’s role was more flexible, the remaining participants each had two responsibilities: analyzing existing data and proposing their own solutions.

Below is the amount of time the Art Lead spent on just one of these tasks:

89 hours corresponds to approximately 11 working days.

Roughly the same amount of time was spent by the Project Manager and the Principal Artist on similar tasks within their respective domains. This means that a single optimization task collectively consumes an entire sprint. And it was only one of the two tasks assigned to each of the three developers.

In total, the optimization effort required approximately one full month of work.

And this is still a relatively positive outcome. Many production optimization efforts or R&D initiatives are known to take three to six months to research and implement.

Let us reference publicly available data from gov.uk and consider an average salary in the industry.

Time spent:

  • One R&D task → 89 hours (≈ one sprint / two weeks)
  • Each developer had two such tasks → ≈ one month of work per person

Salary reference (UK national data):

  • £49,000 per year → ≈ £3,200 per month × 3 people = £9,600

Thus, the company spent approximately £9,600 on this plan alone.

However, the actual loss is effectively doubled. While engaged in this work, these specialists were not performing their primary responsibilities: the lead was not leading the team, the artist was not producing assets, and the producer/PM could not plan upcoming features because the team was busy untangling current production challenges. In effect, this results in an estimated £20,000 cost per month.

If this figure does not seem particularly high, it is important to remember that large AAA projects often consist of 7–10 departments, each of which may engage in R&D initiatives or pipeline optimization at least once per quarter—often with far less structured data than in this case.

If each department spends even a single month per year on such initiatives using exclusively human labor (and departments such as tech or tools often require even more time), the annual cost can easily exceed £140,000, excluding opportunity cost related to delayed development. For very large companies, this may be negligible. For most others, it is not.
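
As a sanity check on those figures, here is the same back-of-the-envelope arithmetic in one place, using nothing more precise than the rounded numbers quoted above:

```python
# Back-of-the-envelope cost model using the rough figures quoted above.
monthly_salary_gbp = 3_200   # per specialist, from the gov.uk salary reference
specialists = 3              # art lead, principal artist, PM

direct_cost = monthly_salary_gbp * specialists   # salaries spent on the optimization itself
opportunity_factor = 2                           # their primary work is also not getting done
monthly_cost = direct_cost * opportunity_factor  # 19,200, rounded to ~20,000 in the text

departments = 7              # lower bound for a large AAA project
initiatives_per_year = 1     # one month-long optimization per department per year
annual_cost = monthly_cost * departments * initiatives_per_year

print(f"Per-initiative cost:   £{monthly_cost:,.0f}")
print(f"Annual cost (7 depts): £{annual_cost:,.0f}")  # ~134,400; with rounded figures the text quotes ~140,000
```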

Alright, alright — maybe I’m pushing it a bit. Let’s calculate this not in money, but in working time.

The work demonstrated in this case took approximately three hours in total:

  • one hour for the initial pass,
  • one hour refining missing elements,
  • and one hour for polishing.

In fact, I spent more time writing this article than performing the optimization using an LLM-based system.

In many cases, the model proposed changes that we later implemented manually. The main effort required was translating its suggestions into terminology and structures approved within the project. This step would likely require no more than an additional day. After that, only lead review was needed—typically one to two days, depending on availability.

Two days of focused work instead of a full month.

With that, I conclude this part of the study. The next article will explore how AI agents can be used to optimize daily team routines—again, compared directly against manual workflows.
