A short trip back in time
Quite a few years back, we wrote code on terminals. If you’ve never seen one, imagine a green-on-black (or white-on-black) editor where you typed every line—no visual interface. You had to picture the UI in your head while you built it—this is where it started for many veterans, including Flexyware. Our first computer was the ZX Spectrum 48K, with a cassette recorder for loading and saving programs.
Then came early Windows. Building even a basic interface meant a lot of code. Positioning a single button required you to track window vs. screen coordinates and redraw on resize. If you were a hard-core developer (guilty), there was no way you’d use a standard button—of course, you’d craft your own component. You’d learn exactly how a button paints, handles focus, debounces clicks… and yes, you’d feel smarter for it.
Did it add that much value to your life or your users? We’ll revisit that question in the follow-ups.
Then arrived the drag-and-drop era. Visual designers let you assemble a screen in minutes. Suddenly, the folks who could iterate UI fast outperformed the purists. They got the gigs because they could adapt interfaces on the fly in front of stakeholders. Speed and adaptability started to win.
The appeal (and limits) of doing everything yourself
Appeal:
- You understand the internals (like how a button truly works).
- You can optimise for that last 5% of performance or polish.
- You control every dependency—nothing “magic” hidden in a black box.
Limits:
- You spend time on plumbing instead of outcomes.
- UI changes are expensive; every tweak is code.
- Teams move more slowly, and the user sees value later.
When visual tooling matured, teams that adopted it could show progress every week. That changed how decisions were made and how projects were won.
Resisting the change
I remember how veterans—myself included—sometimes dismissed the drag-and-drop teams when a control they placed failed to repaint correctly. We’d respond with hand-crafted components and lectures about the “right” way. In hindsight, that posture missed the point. The priority should have been adopting tools that accelerated delivery and focusing on outcomes, not winning technical arguments. We’ll explore that balance—principled engineering versus pragmatic adoption—in later posts.
The “old way” didn’t stop at how we wrote software—it also shaped how we managed data and infrastructure. We debated on-site servers versus the emerging cloud, printed reports versus online views, and cycled through storage media as each new standard arrived. Backups moved from ad-hoc copies of our carefully crafted control code to disciplined version control in modern repositories. Ultimately, veterans like me had to adapt to these methods. Those who didn’t embrace the change largely moved out of software engineering.
Perhaps a real-life anecdote can spark memories for veterans—or give younger engineers a glimpse (and a chuckle) into past challenges. My first database application managed squash-court lighting. Users swiped a Code 39 barcode, and the data (light activation time and scheduled deactivation time) was written to a text file. An 8031 microcontroller handled the switching, while an 8086 machine with “advanced” reporting software sent instructions over RS-232. This post focuses on that text-file “database,” which demanded a surprising amount of effort, especially when month-end reports were due.
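To make the pain concrete, here is a minimal modern sketch of what that flat-file “database” amounted to: append one delimited line per swipe, then re-parse the whole file at month-end to total the lighting time per court. The file name, field layout, and pipe delimiter are all hypothetical—the original ran on far more constrained hardware—but the shape of the work is the same.

```python
from collections import defaultdict
from datetime import datetime

LOG = "lights.txt"  # hypothetical flat-file "database": one record per line

def record_swipe(court, badge, on_at, off_at, path=LOG):
    """Append one lighting record: court|badge|activation|scheduled deactivation."""
    with open(path, "a") as f:
        f.write(f"{court}|{badge}|{on_at:%Y-%m-%d %H:%M}|{off_at:%Y-%m-%d %H:%M}\n")

def monthly_minutes(year, month, path=LOG):
    """Month-end report: total booked lighting minutes per court.

    Every report means re-reading and re-parsing the entire file --
    exactly the effort a real database engine later made unnecessary.
    """
    totals = defaultdict(int)
    with open(path) as f:
        for line in f:
            court, _badge, on_s, off_s = line.rstrip("\n").split("|")
            on = datetime.strptime(on_s, "%Y-%m-%d %H:%M")
            off = datetime.strptime(off_s, "%Y-%m-%d %H:%M")
            if on.year == year and on.month == month:
                totals[court] += int((off - on).total_seconds() // 60)
    return dict(totals)
```

No indexes, no queries, no schema: every new question about the data meant another hand-written parsing loop like `monthly_minutes`.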
Then came dBASE III and Clipper Summer ‘87—and the way we stored data changed completely. Out with flat text; in with DBF tables, indexes, and real queries. Not long after, Clarion arrived. It felt like our first brush with AI: you described the entities and screens you wanted, and within a minute it scaffolded a working database with capture and query forms.
Our lives changed forever. What took days of custom parsing and reporting in text files now took minutes to generate and refine. And the best part? It worked. Users didn’t care how the data was stored—they cared that their monthly reports arrived on time. The real win was the speed of delivery.
Embracing the change
Over three decades, we’ve had to adapt—keeping pace with new developers who ship faster using the latest tools. The question is how we respond: do we cling to “we can build a better button” and defend old methods, or do we adopt the tools that let us deliver working solutions in days, not months? It’s a real tension because many developers see themselves as artisans—we love to create from first principles, not just assemble. The mature stance is to treat craftsmanship as a standard for quality, not a reason for delay: use modern tools to move quickly, and apply engineering principles where they matter most.
By now, every developer has heard about AI and “vibe coding”—tools that can accelerate our work or, depending on your vantage point, feel like a threat. Should we embrace them, and how? That’s exactly what we’ll unpack in the next post. Stay tuned.