Creativity, Cost, and Context: Competing with AI as a Software Engineer

Milhan Kim

On creativity, cost, and context—how human software engineers can stay valuable in an era when AI is fast, accurate, and scalable


From a talk to senior students at CAU, Seoul, 2024-11-19.

Introduction

Not long ago, artificial intelligence was the secret cheat-code for students hoping to pass coding assignments. Today, in an ironic twist, that same AI has become a reason some companies hesitate to hire those students at all. If an AI co-pilot can generate decent code in seconds, why bring on a junior developer who might take days? This shift raises a fundamental question for anyone in software engineering: What is creativity, and what does it mean to be valuable as a human engineer when AI is so fast, accurate, and scalable?

In this article, I reflect on the changing power dynamics of knowledge, experience, and productivity in the age of AI. We’ll explore sharp arguments around creativity (hint: it’s not chaos), examine why novelty is not noise, and argue that great engineering is as much about economics and context as it is about code. The goal is to provoke deeper thought on how to compete with and not just against AI. This perspective comes from my experience as a senior engineer and interviewer who has watched AI reshape our industry’s expectations for entry-level and experienced roles alike.

Knowledge in the Age of AI: The Shift in Power Dynamics

Knowledge used to be power—seasoned engineers held deep expertise and juniors learned by slowly absorbing it. Now, much of that explicit knowledge is ubiquitously accessible. Ask a question on Stack Overflow or feed a prompt to an LLM, and you might get an answer in seconds that previously required tapping a senior colleague or spending days in documentation. Today, a motivated junior can quickly learn how to do something via AI or online resources without leaning on team veterans for every detail.

Yet this democratization of knowledge doesn’t automatically level the playing field. In fact, it changes the power dynamics in unexpected ways. Experience still matters, perhaps even more, because experience isn’t just knowing facts – it’s knowing context, what not to do, and why things are the way they are. A large language model can regurgitate best practices or API usage, but it won’t instinctively know which practices actually fit your situation. This is where experienced engineers leverage AI as a force multiplier. Many senior engineers tell me that with AI-assisted coding, they feel like they’re working with a squad of junior developers on demand, but without the overhead of mentoring each one from scratch. In contrast, a less experienced developer might generate code easily with AI but struggle to validate or integrate it, because they lack the big-picture understanding.

The result is a shifting landscape of productivity. A senior engineer wielding AI can potentially outproduce an entire junior team on raw code output. Management notices when one veteran with Copilot delivers what used to require a handful of entry-level devs. This raises the bar for what a human newcomer must contribute. It’s not that knowledge itself has lost value—it’s that everyone has it now. What differentiates engineers is moving up Bloom’s taxonomy: not just remembering or understanding facts (AI does that), but applying, analyzing, and evaluating in context. In short, human engineers increasingly prove their worth not by the knowledge they carry, but by how they use knowledge creatively and contextually.

Creativity is Not Chaos (and Novelty Is Not Noise)

Let’s address the buzzword that everyone clings to as their saving grace: creativity. We often reassure ourselves that humans will remain relevant because we’re creative and AI is just combinatorial. But creativity in engineering is often misunderstood. Creativity is not just randomness or wild experimentation; true creativity is novelty with purpose. In other words, a creative solution is one that is both original and valuable in solving a problem. Novelty without value is just noise.

AI, under the right conditions, can surprise us with seemingly creative outputs. Remember DeepMind’s AlphaGo, which played Go in ways no human had seen, or GPT-based models that refactor code into an ingenious one-liner? These cases show that given a well-defined problem and vast training, an AI can produce strategies or code that appear innovative. However, there’s a caveat: AI’s “creativity” is constrained by the data and goals we give it. AlphaGo was creative within the fixed rules of Go – it wasn’t inventing a new game, it was discovering unanticipated moves to achieve the explicit objective of winning. In software, an AI might generate a novel implementation of a feature, but only within the bounds of patterns it learned from existing codebases and the requirements we specify in the prompt.

Human creativity in engineering starts from a different place: the ambiguity and messiness of the real world. We often have to define the problem itself, not just solve a given one. That’s where true innovative leaps happen – questioning the problem, reframing requirements, or merging ideas from different domains. A human engineer might recognize that a user’s need could be met with a completely different approach than initially imagined, or that a small change in assumption opens up an elegant solution. This kind of creativity—finding valuable new questions to ask or novel approaches that aren’t in any playbook—is where humans excel. It’s not chaotic ideation for its own sake; it’s insight born of understanding context and purpose.

So yes, AI can generate a hundred variations of a login form or suggest an optimization trick it pieced together from thousands of GitHub projects. But deciding which of those variations truly serves the users, or imagining a solution that isn’t in the dataset at all, remains a human strength. As engineers, we shouldn’t retreat to a vague notion of “creativity” as just artistic originality. We should hone a creativity rooted in deep awareness—what one might call informed imagination. It’s the kind of creativity where you propose a design that’s never been tried, but you can reason about why it just might work. That’s not something an autocomplete-style AI achieves easily, because it requires stepping outside of established patterns. Our value lies in creative thinking that is grounded in reality, not in churning out random new ideas hoping one sticks.

Engineering is Applied Economics, Not Idealized Science

Another provocative statement: engineering is not science. This isn’t to downplay engineering’s technical rigor, but to highlight that engineering success is measured in outcomes and trade-offs, not just technical perfection. In many ways, software engineering is the art of making optimal compromises under real-world constraints. It’s where computer science theory meets the unforgiving realities of budgets, timelines, and maintenance costs. In short, engineering is applied economics.

What does this mean in the context of AI? It means that the best code isn’t necessarily the most elegant algorithm or the one that uses the fanciest new framework—it’s the code that appropriately balances cost and benefit for the problem at hand. An AI, however, doesn’t have an innate sense of economics or context; it will happily generate a complex solution that technically meets the requirements you typed in, even if a far simpler approach would suffice (or even if the best solution is to write no new code at all). How often does an AI tell you “actually, you don’t need to build this feature” or “maybe we can solve this with a configuration change instead of code”? Virtually never. It’s optimized to produce something when asked, whereas a seasoned engineer knows that sometimes the smartest engineering decision is to not write new code (for instance, reusing an existing tool, simplifying a requirement, or just avoiding a risky feature altogether).

In practice, human engineers bring an economic lens to decisions:

  • We consider cost of complexity: If AI suggests a clever but convoluted architecture, we weigh the long-term maintenance burden versus the immediate gains.
  • We think about return on investment: Is implementing this feature going to bring enough value to justify the engineering effort? AI isn’t going to proactively raise that question.
  • We align solutions with business goals and constraints: A solution that’s technically optimal but financially or organizationally impractical is no solution at all. Engineers often have to say, “This approach is overkill—here’s a simpler alternative that meets our needs at a fraction of the cost.”

This economic mindset is where human judgment outshines AI’s pattern-matching. As an example from my own career: I’ve seen AI-generated suggestions that would indeed solve a given coding problem, but would also introduce an external library with a heavy license fee or add infrastructure that our small team couldn’t realistically support. The AI had no way of knowing those external constraints, because they weren’t in the prompt. A senior engineer, on the other hand, is constantly evaluating the broader picture—Does this make money? Does it save money? Does it carry hidden costs or risks? Those are questions at the intersection of technology and economics. In the future, the engineers who thrive will be the ones who treat every technical decision as a business decision too. They’ll use AI to generate options, certainly, but then apply a keen filter of cost, risk, and benefit to choose the path that makes the most sense. That’s something an “idealized science” perspective often misses—engineering is about finding a good-enough solution that maximizes value, not pursuing an ivory-tower ideal regardless of expense.

Patterns vs. Consequences: Code in Context

Perhaps the biggest gap between what AI does and what human engineers do lies in context. When an AI code assistant writes code, it’s fundamentally pushing symbols that fit patterns it learned from training data. It treats code as patterns to complete. Human engineers, by contrast, must treat code as something that has consequences. Every line we write eventually runs on real systems, interacts with other code, and lives within an organization’s ecosystem of tools, policies, and obligations. We can’t afford to see code as context-free.

Consider some of the real-world contexts and consequences that humans must account for, which AI typically ignores unless explicitly told:

  • Legal and Licensing: If ChatGPT suggests using an open-source library under a viral license (like GPL) and you blindly include it, you might be putting your company’s proprietary code at risk. AI won’t warn you about software licenses or patent encumbrances unless you ask. Engineers have to know what’s permissible or run it by legal teams.
  • Security and Privacy: AI can generate a login implementation or a data processing script, but it won’t inherently know your organization’s security policies or privacy regulations. Is the code storing personal data securely? Is it exposing a vulnerability? These concerns require contextual awareness that comes from experience and often from humans in the loop (think of all the internal security guidelines AI has never read).
  • Infrastructure and Scalability: Code doesn’t run in a vacuum. An AI might produce a solution that works for 100 users, but will it work for 100,000? Engineers understand the infrastructure context—database load, network latency, memory constraints—and design solutions that won’t fall over in production. AI will happily return an O(n²) algorithm if it’s common in its training data, not knowing that for your use case n is a million and that solution will time out. (Or perhaps, for now, O(n²) really is good enough; that judgment call is also ours to make. See the sketch after this list.)
  • Organizational and Cultural Constraints: Every company has unwritten rules and conventions. Maybe there’s an approved tech stack, or perhaps using a certain cloud service is off-limits due to prior bad experiences. AI doesn’t read the room; it won’t know that your team’s DevOps folks hate that one library it keeps suggesting, or that half your codebase is legacy for a reason (e.g. regulatory compliance). Human engineers navigate these soft constraints daily.
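
To make the scalability bullet concrete, here is a minimal Python sketch (hypothetical function names and data, not from the talk) contrasting the quadratic pattern an assistant often emits with the linear version you actually want once you know the real data volume:

```python
def find_duplicates_quadratic(records):
    """The pattern an assistant commonly suggests: compare every pair.
    Fine for a demo with 100 records; O(n^2) comparisons becomes
    roughly a trillion operations once n reaches a million."""
    dupes = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if a == b:
                dupes.append(a)
    return dupes


def find_duplicates_linear(records):
    """Same result using a set, O(n): the version you ship once the
    production data scale is part of the decision."""
    seen, dupes = set(), set()
    for r in records:
        if r in seen:
            dupes.add(r)
        else:
            seen.add(r)
    return list(dupes)


if __name__ == "__main__":
    sample = ["a", "b", "a", "c", "b"]
    print(find_duplicates_quadratic(sample))  # ['a', 'b'] -- fine at this scale
    print(find_duplicates_linear(sample))     # same answer (order may differ), but it also scales
```

Both functions are "correct" on the sample; only the engineer who knows the production data volume can say which one is actually acceptable.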

In my own work, I’ve encountered situations where an AI-generated code snippet technically solved the immediate problem but introduced a downstream headache. One memorable case: an AI suggested using ffmpeg for video processing in an embedded system. Sure, it worked—until our compliance team flagged it because including that library would oblige us to open-source parts of our product (due to LGPL licensing). The AI had no clue; it just saw a pattern that others have used. It was my job to foresee the consequence of that choice. In another instance, a junior engineer used an AI to generate a data analysis script that worked on a sample dataset. When we ran it in production, it crashed the server – the approach couldn’t handle our real data scale. No surprise: the AI didn’t have our production context; it just provided a generic solution.

The takeaway is that code is only as good as the context considered. Humans remain the stewards of context. We live with the code after it’s written. We debug it at 3 AM when something goes wrong (and debugging itself is an exercise in reconstructing context: logs, state, user behavior – things you can’t always stuff into a prompt). We maintain it months or years later, when business requirements change or when a new law compels a rewrite of some module. AI is a phenomenal pattern machine, but it has no skin in the game for the consequences. As a software engineer aiming to stay valuable, one of your superpowers is to be the guardian of context – to ask “what then?” for every suggestion the AI gives, and to consider aspects that aren’t explicitly in the spec but are very real (like security, scalability, legality, and interoperability). This ability to internalize the broader context and foresee consequences is a distinctly human form of diligence that complements AI’s raw coding ability.

Will AI Replace Junior Engineers? It’s Complicated.

A common refrain in industry talks and LinkedIn posts these days is: “AI will replace junior developers.” There’s truth in that statement, but it’s an oversimplification that deserves a closer look. Many routine programming tasks that used to be the proving grounds for entry-level engineers can now be automated or accelerated by AI. Need to write a boilerplate CRUD API, a unit test suite for a model class, or a script to parse log files? An LLM can crank those out faster than a new hire ramping up on the codebase. It’s no wonder some engineering managers muse that they can hire one less junior dev if they budget for an AI tool subscription instead.
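
For a sense of what “routine” means here, the snippet below is a minimal Python sketch of the kind of log-parsing script in question; the log format and field positions are assumptions for illustration, not from any particular codebase:

```python
import re
from collections import Counter

# Hypothetical log format, assumed only for illustration:
#   2024-11-19 10:32:01 ERROR payment-service timeout after 30s
LINE = re.compile(r"^(\S+ \S+) (\w+) (\S+) (.*)$")

def count_errors_by_service(path):
    """Tally ERROR lines per service -- the sort of script an LLM drafts in seconds."""
    counts = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            match = LINE.match(line.strip())
            if match and match.group(2) == "ERROR":
                counts[match.group(3)] += 1
    return counts

if __name__ == "__main__":
    for service, total in count_errors_by_service("app.log").most_common():
        print(f"{service}: {total}")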

The economics are stark: an AI coding assistant might cost on the order of $20 per month, while a junior engineer can cost $8,000–$10,000 per month or more when you factor in salary and overhead. A seasoned engineer using AI might achieve productivity that previously required a team – effectively doing the work of “ten juniors” (as some have quipped) at a fraction of the cost. When viewed purely through the lens of efficiency and cost, it’s hard to argue against the idea that some traditional junior tasks are getting automated away.

However, declaring the death of junior engineers is too simple a narrative. For one, today’s juniors are tomorrow’s seniors. If we stopped hiring and training new engineers, the pipeline of talent and leadership would quickly dry up. Companies are aware of this; completely forgoing early-career hires is a short-term gain but a long-term risk. More realistically, the role of the junior engineer is evolving rather than disappearing. Instead of being hired to crank out trivial code (which AI might handle), new engineers might be expected to focus on higher-level contributions sooner: think integration, testing, creative problem solving, and coordination. In other words, the bar is rising. Juniors may need to bring something extra to the table – whether it’s specific domain knowledge, exceptional debugging skills, design sensibilities, or simply a knack for using AI tools effectively to amplify their impact.

It’s also worth noting that there are limits to replacement. An AI can’t attend stand-ups and spontaneously report that a requirement doesn’t make sense from a user’s perspective. It won’t mentor the next intern or contribute to the company wiki with a tutorial on the internal deployment process. Humans in a team do more than just write code – they share knowledge, catch misunderstandings, and often bring soft skills that glue a project together. Junior engineers, in particular, often question assumptions (sometimes out of naivete) and that can be healthy for a team that has grown set in its ways. The perspective of a human newcomer can highlight issues an AI, optimized to follow instructions, would never raise.

From my vantage point as someone involved in hiring, I have indeed seen a shift. In take-home coding tests, it became apparent when candidates started using AI to generate answers. Instead of throwing up our hands, many of us adjusted the tests: we now design them to require a bit more end-to-end thinking or creativity, something not easily found in an LLM’s training data. We might ask for a design proposal along with code, or pose problems that span multiple domains (e.g. “simulate a hardware device and build an API for it” – something that forces understanding of both low-level and high-level concerns). The goal isn’t to catch juniors using AI (we assume they will, and even encourage it as a tool), but to see how they add their own insight beyond what AI provides. Can they critique the AI’s output and improve upon it? Do they understand the why, not just the how?

So will AI replace juniors? It will certainly replace some of the work juniors used to do, and it may reduce the number of entry-level positions in some areas. But I see it as a call to action for up-and-coming engineers: the easier it is for AI to do average work, the more you should strive to demonstrate above-average thinking. Rather than churning out another generic to-do app for your portfolio, contribute to an open-source project, or tackle a problem that requires digging into a new field (maybe a bit of hardware, maybe a dash of machine learning, maybe a tricky algorithm that isn’t well-covered on Wikipedia). Show that you can synthesize ideas and learn in public – for example, write a blog post about how you solved a tough bug or improved performance in an unexpected way. These are signals that you’re not just typing code that any AI could generate, but bringing unique value through insight and initiative.

Finding Your Edge: Creativity, Cost, and Context

For a software engineer facing the AI era, the guiding question becomes: What can I do better with a human mind than an AI can do on autopilot? By now, a theme should be clear. The answers revolve around creativity, cost, and context – the very elements highlighted in this article’s title.

  • Creativity: Cultivate a problem-solving approach that isn’t just about writing code that works, but about envisioning solutions that others (including AI) wouldn’t think to try. This could mean drawing on analogies from other industries, questioning the problem framing, or experimenting with a new architecture. Creativity in engineering is a skill you build by staying curious and cross-pollinating ideas. Don’t be the coder who only knows how to follow recipes – be the one who occasionally invents a new recipe.
  • Cost-awareness: Always tie your technical decisions to the real world. This doesn’t mean you need an MBA, but it does mean thinking like a product manager or an architect. Why are we building this? Is there a simpler way? What’s the maintenance cost? If you develop a habit of considering the ROI of your choices, you’ll make yourself indispensable. Teams need people who prevent over-engineering and keep efforts aligned with value. AI can suggest ten ways to do something; your job is to pick the one that makes the most sense for the business and justify why.
  • Context mastery: This is about depth and breadth. Be the engineer who delves into understanding the domain you’re working in (whether it’s finance, healthcare, automotive, etc.), because that domain knowledge becomes context that informs better software. Also, pay attention to the ecosystem around your code: the devops, the legal implications, the user’s perspective, the data flows. The more context you carry, the more you can anticipate issues before they arise. AI is a quick study of documentation, but it has zero intuition – it won’t get a hunch that “something about this requirement feels off” or that a certain use case might break the design. Your awareness of context gives you that intuition.

I’ll share a personal anecdote that ties these elements together. Not long ago, I interviewed a candidate for a mid-level engineering role. On paper, she was relatively inexperienced, and I suspected she likely used AI assistance in the take-home exercise we had given (which was fine by us). What stood out was how she described her solution: she identified a subtle performance bottleneck in an open-source library our problem used, and she talked about how she tweaked the usage to lazy-load certain components, improving throughput by a significant percentage. This wasn’t in the prompt; it was a tangential discovery she made while testing. To me, it demonstrated creative problem-solving (she went beyond the obvious requirements), cost-awareness (she cared about performance and efficiency), and context understanding (she dove into the library’s behavior to see why the bottleneck happened). That was an immediate “hire” recommendation from my side, even though she had no prior industry job. Why? Because those are exactly the traits that AI won’t give you out-of-the-box but are incredibly valuable in an engineer.

Conclusion

AI is here to stay, and it’s changing our field much like electricity once changed manufacturing. As software engineers, we have a choice: wield the new tools wisely or be outpaced by those who do. Competing with AI doesn’t mean trying to race it on brute-force output; that’s a losing battle. Instead, it means augmenting our human strengths – creativity, judgment, contextual understanding – with AI’s speed and scale. It means embracing the irony that the cheat-code of yesterday is the standard equipment of today, and adjusting our game accordingly.

Being a valuable human software engineer in the era of fast, accurate, and scalable AI comes down to being a creative, cost-conscious, context-aware engineer. It’s about seeing the whole chessboard, not just the next move. AI will undoubtedly get better, and the ground will keep shifting. But if we focus on the timeless aspects of engineering – understanding problems deeply, crafting solutions that make sense in the real world, and continuously learning – we won’t just survive alongside AI; we’ll thrive, with AI as a powerful ally. In the end, the question isn’t human or AI, but rather how each amplifies the other. The engineers who figure that out will lead the way in this new age of software development.