The Evolution of Technology: From Computers to AI and the Unchanging Quest for Perfection

In the days before computers, creating a resume was a laborious task that required precision and patience. On a typewriter, each keystroke was permanent, and any mistake meant starting over or reaching for correction fluid. The advent of computers promised a revolution: saved time and effortless editing. However, what transpired was not a reduction in effort but a shift in focus. Instead of merely producing a document, we invested time in perfecting it, raising the standard of what a resume could and should be. Today, we stand at the cusp of another technological revolution with artificial intelligence (AI). Like the rise of computers, AI promises efficiency and ease. But will it truly save us time, or will we find ourselves once again raising the bar?

The rise of AI, much like the rise of computers, will not necessarily lead to less time spent on tasks but to a perpetual enhancement of quality and expectations.

Disclaimer: I used ChatGPT to assist with the setup of this article. Prompt: Back in the days before computers when we created our resume, we did it on typwriters. Then computers came along. We were told how much time we would save because the computer could save the information, and we would not need to spend time correcting the typewriter version. What ended up happening was that the standard was raised. We still spent the same amount of time one the resume, but the time went into making it pretty. In my article, I would like to compare the rise of AI with the Rise of the computer. We will not be saving time because instead of getting time we will be investing into raising the standard of our submissions. I am not sure how to create the comparison. Can you please come up with something to help me get going

Photo by Ikowh Babayev: https://www.pexels.com/photo/man-wearing-red-crew-neck-sweater-16622/

The Era of Typewriters

Creating a resume on a typewriter was an exercise in meticulousness. Each letter had to be struck with care, knowing that any mistake could mean starting from scratch. Corrections were cumbersome, often involving white-out tape or correction fluid, which left unsightly marks. Formatting was a challenge, requiring manual alignment and spacing. The time investment was significant, but the result was a testament to one’s diligence and attention to detail.

The Advent of Computers

With the introduction of computers and word processors, the landscape of resume creation changed dramatically. The ability to save documents, make instant corrections, and experiment with formatting without fear of permanent error was a game-changer. Initially, it seemed that we would save countless hours. However, this newfound flexibility led to higher expectations. Resumes were no longer just about content; they became visual representations of professionalism. Time saved on corrections was now spent on choosing fonts, layouts, and even incorporating graphics. The standard had been raised, and the time invested remained substantial.

The Rise of AI

Artificial intelligence is now permeating various aspects of our lives, from smart assistants to advanced data analysis. In the realm of content creation and design, AI tools promise unprecedented efficiency and creativity. Platforms powered by AI can draft text, suggest improvements, design layouts, and optimize content for specific audiences. The allure is clear: faster production with enhanced quality. But, just as with computers, these advancements come with heightened expectations. An AI-generated resume isn’t just about listing qualifications; it’s about presenting them in a way that is both visually appealing and algorithmically optimized.

Comparison of AI and Computers

The promises made by AI today echo those made by computers decades ago. Both technologies are hailed for their potential to save time and increase productivity. However, in both cases, the reality is that the time saved on basic tasks is redirected towards meeting higher standards. For instance, while AI can draft a resume quickly, the focus shifts to refining and personalizing the output to stand out in an increasingly competitive job market. The cycle of technological advancement continues, with each innovation setting a new benchmark for excellence.

The Reality of Time Investment

Despite the promises of time-saving technology, the reality often involves reallocating that time towards enhancing quality. Consider the example of a graphic designer who now uses AI tools. While the initial design process may be faster, the designer now spends additional time fine-tuning the AI-generated elements to align with their vision. Similarly, job seekers use AI to draft resumes but invest time in customizing and perfecting the content to meet the higher standards set by employers.

Conclusion

In summary, the rise of AI mirrors the rise of computers in many ways. Both technologies promised to save time, yet both resulted in raised standards and continued time investment. As we embrace AI and its capabilities, it’s essential to recognize that the pursuit of excellence often means investing time in new ways. While technology evolves, the drive to meet and exceed expectations remains constant. In the end, it’s not just about saving time; it’s about using it wisely to create something better.

Navigating Software Review: Enforcing Standards and Overcoming Resistance

In the domain of software development, the review process acts as a crucial checkpoint for ensuring code quality, readability, and maintainability. Recently, I found myself tasked with scrutinizing the Python codebase of a fledgling software development team, a journey fraught with the complexities of enforcing coding standards and overcoming resistance while striving for quality assurance under mounting project pressure.

Disclaimer: I used ChatGPT to assist with the setup of this article, starting with some ideas I had about how to do better code reviews.

Photo by Vlada Karpovich: https://www.pexels.com/photo/stressed-woman-between-her-colleagues-7433871/

My background in software development traces back to the era of Emacs terminals, where coding standards were meticulously ingrained in our workflow, essential for maintaining code quality and readability. With the emergence of the Agile movement, luminaries like Uncle Bob Martin (Clean Code) and Martin Fowler (Refactoring) emphasized clean code and standardization, driven by the painful lessons learned from poorly maintained codebases – a reality that resonates deeply with my own experiences.

Upon immersing myself in this project, it became apparent that chaos reigned supreme. Several sprints in, and concerns about sub-par code quality were surfacing. The absence of established coding standards was glaring, prompting me to initiate a framework to instill order amidst the chaos.

Implementing PEP8, static analysis via prospector, code duplication checks with jscpd, and security screening using bandit formed the pillars of our coding standard framework. Despite garnering initial buy-in from the team through presentations outlining the benefits of these tools, practical implementation proved challenging. Even after significant time and effort went into documentation, shell scripts, and pre-commit hooks, the tools remained largely unused.
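
To make the framework concrete, the checks can be wrapped in a single script that developers run locally, from a pre-commit hook, or in the pipeline. Below is a minimal sketch in Python, assuming pycodestyle (for PEP8), prospector, bandit, and jscpd are installed and the source tree lives in ./src; adjust the commands to your own layout.

<code>#!/usr/bin/env python3
"""Run the project's standard checks and fail the commit/build if any fail.

Minimal sketch: assumes pycodestyle, prospector, bandit, and jscpd are
installed and that the source code lives under ./src.
"""
import subprocess
import sys

CHECKS = [
    ["pycodestyle", "src"],   # PEP8 style checks
    ["prospector", "src"],    # static analysis
    ["bandit", "-r", "src"],  # security screening
    ["jscpd", "src"],         # copy/paste (duplication) detection
]

def main() -> int:
    failed = []
    for cmd in CHECKS:
        print(f"==> {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failed.append(cmd[0])
    if failed:
        print(f"Checks failed: {', '.join(failed)}")
        return 1
    print("All checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())</code>

Calling the same script from the pre-commit hook and from the CI pipeline means the checks cannot quietly be skipped on either side.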

The resistance encountered can be attributed to what I term the “hoarder’s mentality” of code accumulation. With no foundational standards in place from project inception, the code base became a repository of disorganized code. Coupled with inadequate test coverage, the prospect of refactoring became fraught with risk, compounded by the relentless pressure to meet deadlines. The developers prioritized expedient solutions over long-term code quality. Effecting change was “too hard for now” and pushed aside till later.

In confronting this challenge, it’s crucial to recognize the broader implications of neglecting coding standards and quality assurance protocols from the beginning. Beyond the immediate hurdles of managing unruly codebases, the long-term consequences include increased technical debt, diminished maintainability, and compromised scalability.

When commencing a project, it is really important to answer the following questions to create the “Constitution” of the project. Once it is created, all development must adhere to it.

  • What repo are we using? 
  • What is the Git process? (Feature-based branching? GitFlow?)
  • What is the CI/CD process? (Bitbucket has pipelines, but they can be expensive. Terraform?)
  • What are the standards, and how do we enforce them? (Prefer automation and static analysis over subjective reviews)
  • How to do testing.
  • How to write unit tests. (Unit testing can be challenging in cloud-based projects)
  • What IDEs are we using?
  • Implementing pre-commit hooks.
  • Standards Libraries for REST, DB Access, etc.
  • How to write reusable code and shared libraries.
  • Logging
  • Notification when a “human is needed”
  • REST and API Standards – What does a response look like? (Best Practices for API Design; see the sketch after this list)
  • Third-party tools? AI, Postman, Cloud, etc. Be careful because some of the third-party tools get expensive quickly.
  • What level of documentation is needed for the project?
  • What happens when you don’t follow the above?
  • Process for amending the project constitution. Can developers submit changes to the static analysis config to accept a coding form that would otherwise be rejected? Can they add to the standard libraries? How can this be done quickly so that the project does not get bogged down in change control?
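
For the REST and API standards question above, one of the highest-leverage decisions is a single response envelope that every endpoint returns. The sketch below is purely illustrative (the field names are hypothetical, not a prescribed standard), written in Python to match the codebase discussed here.

<code>from typing import Any, Optional

def make_response(data: Any = None, errors: Optional[list] = None,
                  status: str = "ok") -> dict:
    """Build the project-wide response envelope (illustrative field names)."""
    return {
        "status": status,        # "ok" or "error"
        "data": data,            # payload on success, None on failure
        "errors": errors or [],  # human-readable error messages
    }

# Example usage:
#   success: make_response(data={"user_id": 42})
#   failure: make_response(errors=["user not found"], status="error")</code>

Whatever shape is chosen, writing it down in the constitution (and encoding it in a shared helper like this) is what prevents every endpoint from inventing its own format.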

If these processes are not in the project from the beginning, it takes a huge effort for the team to implement them later. Especially when the team is trying to meet delivery goals, quality reviews will never become part of the day-to-day process.

How to implement this after the project has started?

There is no easy solution to this. You can implement small changes. Maybe you can try to “bring in an expert” to help with reviews. More than likely, that person will be ignored, as the pressure to deliver outweighs the word of the “expert”. (The words passive-aggressive and lip-service come to mind.) Since the team has momentum, it will be hard to divert them, no matter how pure your intentions.

  • It must be a directive from above, as the momentum for shipping (subpar) solutions must be halted.
  • The development of the project will be halted for at least one sprint.
  • It is expected that the implemented solution will break.

Most of the time, the above is not acceptable. So the only thing we can do is apply the Boy Scout rule: try to leave the code a little better than we found it, and iteratively the code will improve.

Streamlining Testing in Google Cloud Platform with Custom Cloud Functions, API Endpoints, and Postman

Testing is a crucial aspect of software development, ensuring that your applications perform reliably and meet the required standards. In this article, we’ll explore an efficient testing approach in Google Cloud Platform (GCP) using custom cloud functions, API endpoints, and Postman. This method not only enhances the codebase but also facilitates API testing and encourages better documentation practices.

Disclaimer: This article was written by ChatGPT with some minor edits, based on some ideas that I had to try to figure out how to do efficient unit testing in GCP while working on projects with AddAxis.ai.

Photo by ThisIsEngineering: https://www.pexels.com/photo/female-engineer-controlling-flight-simulator-3862132/

The Problem: Isolated Testing and Documentation Challenges

Developers often encounter challenges when it comes to testing and documentation in a cloud-based environment. Isolated testing of cloud functions, especially when they depend on external services like GCP APIs and production data sources, can be complex and time-consuming. Additionally, maintaining consistent documentation can become a tedious task.

The Solution: Custom Cloud Functions

To address these challenges, we propose the use of custom cloud functions in GCP. These functions are designed specifically for testing purposes and contain setup and tear-down code, making isolated testing more efficient; a minimal sketch follows the benefits list below. Testing can then move to API testing with Postman. Additionally, Postman is a good repository for documenting the API.

Benefits of Custom Cloud Functions:

  1. Isolated Testing: Custom cloud functions allow developers to test specific parts of their codebase in isolation. This reduces the risk of unintended side effects during testing.
  2. Enhanced Code Reusability: Developers can write functions within these custom cloud functions that are reusable across different parts of the application. This promotes cleaner and more maintainable code.
  3. Error Diagnosis: Custom cloud functions living alongside the code provide a valuable resource for diagnosing errors. Developers can tap into these test APIs to identify issues quickly, even when someone else modifies functions.
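
As a sketch of what such a function can look like in Python, the example below uses the functions-framework library and assumes a hypothetical store_tweet helper backed by Firestore; the real setup and tear-down code will depend on the services your function touches.

<code>import functions_framework
from google.cloud import firestore

# Hypothetical function under test; in a real project this would be imported
# from the application package rather than defined here.
def store_tweet(db, collection, tweet):
    db.collection(collection).document(tweet["id"]).set(tweet)
    return tweet["id"]

@functions_framework.http
def test_store_tweet(request):
    """HTTP-triggered test harness: set up, execute, verify, tear down."""
    db = firestore.Client()
    collection = "test_tweets"                       # setup: isolated test collection
    tweet = {"id": "t-123", "text": "hello world"}
    try:
        doc_id = store_tweet(db, collection, tweet)  # execute
        stored = db.collection(collection).document(doc_id).get()
        passed = stored.exists and stored.to_dict()["text"] == tweet["text"]
        return ({"passed": passed}, 200 if passed else 500)
    finally:
        db.collection(collection).document(tweet["id"]).delete()  # tear down</code>

Deployed as an ordinary HTTP cloud function, this harness becomes an endpoint that Postman (or a CI job) can hit to verify the behaviour of store_tweet in its real cloud environment.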

The Process

Here’s a step-by-step guide on how to implement this testing approach:

1. Create Custom Cloud Functions

  • Identify areas of your codebase that require testing isolation.
  • Write custom cloud functions for these areas, focusing on setup, execution, and tear-down code.
  • Ensure that these functions are well-documented, providing clear instructions for other developers.

2. Integrate API Endpoints

  • To test your cloud functions, expose them as API endpoints using GCP services like Cloud Functions or App Engine.
  • Ensure that the API endpoints are secure and follow best practices for authentication and authorization.

3. Testing in Postman

  • Use Postman to create test collections that interact with your API endpoints.
  • Write test scripts in Postman to automate testing scenarios.
  • Postman’s testing platform allows you to run a wide range of tests, from simple unit tests to complex integration tests.

4. Documentation in Postman

  • Leverage Postman’s documentation feature to create detailed documentation for your API endpoints.
  • Postman generates documentation automatically, making it easy for developers to understand how to interact with your API.

5. Continuous Integration

  • Integrate your custom cloud functions, API endpoints, and Postman tests into your CI/CD pipeline (Newman, Postman’s command-line collection runner, helps here).
  • Automate the testing process to ensure that tests are run consistently with each code change.

Benefits

Implementing this testing approach offers several advantages:

  • Efficiency: Custom cloud functions make testing more efficient, saving time and effort for developers.
  • Code Quality: Improved code reusability leads to cleaner and more maintainable code.
  • Documentation: Postman’s documentation feature ensures that your API is well-documented and accessible to all team members.
  • Error Diagnosis: Easy access to test APIs simplifies error diagnosis and troubleshooting.

Caveats

  • Postman gets expensive as you collaborate with more people. Other tools allow you to import Postman collections and are cheaper.

Conclusion

Incorporating custom cloud functions, API endpoints, and Postman into your GCP development workflow can significantly enhance your testing process and documentation efforts. By isolating testing, promoting code reusability, and leveraging Postman’s powerful testing and documentation capabilities, your Python development team can streamline their development workflow and produce more reliable and well-documented applications in the cloud.

Finding Happiness in Your Job: Key Factors to Consider

Finding happiness in your job is not an elusive dream; it’s a reality that can be achieved with the right approach and mindset. While everyone’s definition of happiness in the workplace may vary, there are several common factors that contribute to job satisfaction. In this article, we will explore some key elements that can help you understand what it takes to be happy in your job.

Disclaimer: This article was written by ChatGPT based on some ideas that I had to try to figure out what makes me happy at work. By identifying what has pissed me off in the past, I came up with this list.

Photo by energepic.com: https://www.pexels.com/photo/woman-sitting-in-front-of-macbook-313690/
  1. Trust and Mutual Respect: One of the fundamental pillars of job satisfaction is a strong foundation of trust and mutual respect between you and your boss. When you have a manager you can trust, and who trusts you in return, it creates a positive and supportive work environment. Open communication, transparency, and a sense of collaboration are essential aspects of building this trust. Knowing that your boss has confidence in your abilities can boost your confidence and overall job satisfaction.
  2. Access to Resources: To excel in your role and be content with your job, having access to necessary resources is crucial. This includes having a reasonable budget to negotiate for the tools, training, and resources you need to perform your job effectively. When you have the right resources at your disposal, you can accomplish tasks more efficiently and feel a greater sense of empowerment in your role.
  3. Belief in the Company’s Direction: Feeling aligned with the company’s mission and vision is vital for job satisfaction. When you believe in the direction of the product or service your company offers, you’re more likely to be motivated and passionate about your work. This sense of purpose can significantly impact your overall happiness at work and make you feel like you are contributing to something meaningful.
  4. Fair Compensation: While job satisfaction is about more than just money, fair compensation is undeniably important. Feeling undervalued or underpaid can lead to frustration and job dissatisfaction. It’s essential to be paid a competitive salary that reflects your skills, experience, and industry standards. A fair compensation package acknowledges your worth and contributions to the organization, contributing to your overall job happiness.
  5. Clarity on Success Criteria: Understanding what success looks like in your role is crucial for job satisfaction. It’s important to have clear, measurable goals and performance expectations. When you know what is expected of you and how your contributions are evaluated, you can focus your efforts more effectively. This clarity not only helps you excel but also gives you a sense of achievement and satisfaction when you meet or exceed those expectations.

Conclusion

Happiness in your job is attainable when certain key factors are in place. Building trust and mutual respect with your boss, having access to necessary resources, believing in your company’s direction, receiving fair compensation, and understanding your success criteria are all essential elements that contribute to job satisfaction.

Remember that finding happiness in your job is a continuous journey. Regularly assessing and reassessing these factors and making adjustments as needed can help you maintain a fulfilling and enjoyable career. By prioritizing these elements, you can create a work environment that not only meets your professional needs but also brings happiness and fulfillment into your daily work life.

The Essential Shopping List for Software Startups: Tools for Efficiency and Compliance

Starting a new software venture is an exhilarating journey filled with innovation and potential. However, as your startup grows, you’ll need to ensure that your software development processes adhere to industry standards and compliance regulations like ISO 27001 and PCI-DSS. To help you navigate this transition seamlessly, we’ve curated a shopping list of essential tools that will not only enhance your software development but also assist in maintaining compliance and security. Let’s dive into the must-have tools for your software startup.

Disclaimer: This article was partially written by ChatGPT based on some ideas that I had to recommend a list of tools that are useful for software development startups. This is not sponsored. I have used all of these tools very successfully.

Photo by Anastasia Shuraeva: https://www.pexels.com/photo/person-crouching-near-a-set-of-hand-tools-and-wrenches-on-ground-9607267/
  1. Atlassian Confluence: Atlassian Confluence is a versatile documentation and collaboration tool that every startup should have in its arsenal. It provides a centralized platform for documenting processes, policies, and project details. Confluence also offers robust version control, ensuring that your documents are always up to date. Whether you’re preparing for an audit or simply need a repository for critical information, Confluence has got you covered.
  2. Google Mail: Email management is a fundamental aspect of any business, and Google Mail (Gmail) is an excellent choice for startups. It offers a user-friendly interface, powerful spam filters, and ample storage space. Gmail also integrates seamlessly with other Google Workspace applications, making it easy to manage communications and store essential documents securely.
  3. Google Docs: While Confluence is great for internal documentation, some documents are better suited for Google Docs. For instance, sensitive legal documents or customer contracts can be securely stored and collaboratively edited in Google Docs. This separation of documentation ensures a clear distinction between internal and external materials, essential for compliance.
  4. Atlassian Jira: Jira is a comprehensive project management tool designed for software development teams. It enables you to manage tasks, track progress, and prioritize work efficiently. Whether you’re overseeing a simple task list or a complex software project, Jira’s flexibility allows you to adapt your workflow as your startup evolves.
  5. LastPass or Bitwarden (password manager): Security is paramount in the digital age, and LastPass or Bitwarden is your go-to password management solution. It simplifies password management by securely storing and auto-filling passwords, making it easier for your team to use strong, unique passwords. While free for individuals, LastPass and Bitwarden offer reasonably priced plans for families and enterprises, ensuring scalability as your startup grows.
  6. Atlassian Bitbucket: Bitbucket is an ideal platform for software development, offering version control, automated software deployment through pipelines, and change management capabilities. Bitbucket’s integration with Jira ensures seamless communication between development and project management teams, enhancing collaboration and productivity. Plus, Bitbucket is free for small teams, making it a cost-effective choice for startups.
  7. UptimeRobot: The world’s leading uptime monitoring service, providing external monitoring of your websites. The basic product is free, and it is easy to create a status page.
  8. Nagios: This tool is really good for internal monitoring. You can run it on your own servers and monitor jobs, disk space, etc. Use this for all your in-depth system monitoring, and UptimeRobot for external monitoring; a combination of the two will have your system covered completely. Building Nagios hooks into your software is easy (a minimal check sketch follows this list). See here
  9. Postmark: If you are sending email campaigns, use this tool to send the email. It is very affordable and provides webhooks so that you can get instant notifications of bad emails.
  10. ZeroBounce: Use PostMark in conjunction with this tool to ensure that your email lists are clean. Also, validate that your customers have given you the correct email.
  11. Software development: This is a little beyond the scope of this article, as there are many different sets of development tools based on needs and operating systems, but these tools get an honorable mention: IDEs, database clients, and version control. This is what I am using in my day-to-day programming.
    • PHP Coding IDE: Apache NetBeans – I have been using NetBeans since version 5, and although it is considered a Java IDE, it works very well for PHP development.
    • GoLang Coding IDE: VS Code does a great job for Go.
    • Python Coding IDE: PyCharm Community Edition
    • API Development and Testing: Postman. Postman goes up in price quite steeply when you start to collaborate, so please look into cheaper options (testfully.io, apidog.com) if you need collaboration.
    • MySQL/MariaDB Client: Sequel Pro. Even though this project has not been updated for a while, the quality and usefulness of this tool cannot be overstated. HeidiSQL on Windows.
    • Other DB Clients: DBeaver Community Edition. Handles every flavor of database including cloud-based connections.
    • Git for version control.
    • FileZilla for file transfer (Always use in secure mode SFTP).
    • iTerm2 for terminal.
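
On the Nagios point above: a Nagios check is just a program that prints one status line and exits with 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN), so hooking your own software in can be a few lines of code. The sketch below is a hypothetical Python check plugin that assumes your application exposes an HTTP health endpoint; adapt the URL and the success criteria to your own service.

<code>#!/usr/bin/env python3
"""Hypothetical Nagios check plugin for an application health endpoint."""
import sys
import urllib.request

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

HEALTH_URL = "http://localhost:8080/health"  # assumption: your app exposes this

def main() -> int:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            print(f"OK - {HEALTH_URL} responded with {resp.status}")
            return OK
    except Exception as exc:  # HTTP errors, connection refused, timeouts, etc.
        print(f"CRITICAL - {HEALTH_URL} unreachable or unhealthy: {exc}")
        return CRITICAL

if __name__ == "__main__":
    sys.exit(main())</code>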

Price and Ease of Upgrade: One of the most appealing aspects of these tools is their scalability. As your startup expands, you can easily upgrade to accommodate growing teams and needs. Atlassian, for example, offers tiered pricing options, ensuring affordability during your startup’s initial stages and allowing you to scale effortlessly when required.

Conclusion

Building a successful software startup is an exciting endeavor, but it comes with the responsibility of compliance and security. The tools in this shopping list – Confluence, Google Mail, Google Docs, Jira, LastPass, and Bitbucket – will empower your startup to maintain documentation, streamline development processes, and ensure security and compliance. With these trusted tools by your side, your startup is well-equipped to thrive and grow while meeting industry standards and regulations.

PS: This is not sponsored. I have personally found these tools to be useful for real projects.

Applying the Laws of Physics to Software Development: A Fascinating Connection

Software development, at first glance, may seem like an entirely different realm from the laws of physics. After all, it’s about lines of code and digital interfaces, while physics deals with the natural world. However, upon closer inspection, one can find intriguing parallels between the two seemingly disparate fields. In this article, we will explore how certain laws of physics can be applied to software development and shed light on how they can help us better understand the dynamics of this ever-evolving industry.

Disclaimer: This article was written by ChatGPT based on some ideas that I had while thinking about the parallels between software development, the laws of physics, and chaos theory. I am amazed at how well ChatGPT was able to capture and reflect my ideas as a coherent article.

Photo by Merlin Lightpainting: https://www.pexels.com/photo/man-with-red-and-blue-light-11308989/
  1. The Law of Stall Speed and Software Development:

Just as an airplane has a stall speed, which is the minimum speed at which it can maintain level flight, software development faces a similar challenge. The “stall speed” in software development can be seen as the point at which a project risks failure due to insufficient resources, unrealistic expectations, or misaligned strategies. When developers push too hard against market trends or operate with limited resources, they risk “stalling” their software development efforts. To avoid this, it is essential to maintain a balance and align development efforts with market demands and available resources.

  2. Reduced Engine Size and Code Efficiency:

In aviation, reducing engine size can improve fuel efficiency and performance. Similarly, in software development, reducing code size and complexity can lead to more efficient and maintainable software. Bloated codebases can slow down development, introduce bugs, and make it challenging to adapt to changing requirements. Therefore, software engineers must strive for code efficiency and simplicity to ensure smoother development and long-term viability.

  3. Angle of Attack and Adapting to Change:

The concept of the “angle of attack” in aviation refers to the angle between the wing’s chord line and the oncoming airflow. It is critical for controlling the aircraft’s lift and stability. In software development, the angle of attack can be equated to the adaptability of a project. Software projects that can adjust their “angle of attack” by quickly responding to changing market conditions or user feedback are more likely to succeed. Staying agile and open to adjustments is essential to navigate the ever-evolving landscape of software development successfully.

  4. Entropy and Software Evolution:

Entropy is a fundamental concept in physics that describes the tendency of systems to move towards greater disorder or chaos unless energy is applied to maintain order. In software development, we observe a similar phenomenon. As software evolves and grows, without continuous maintenance and refactoring, it tends to become more chaotic and less efficient. This accumulation of “software entropy” can slow down development and increase the risk of software failure. To combat this, developers must invest in ongoing maintenance, updates, and optimization to keep their software in a state of order and efficiency.

  5. Metabolism and Software Development Velocity:

Metabolism in the natural universe can be likened to the “velocity” of software development. Just as organisms with higher metabolism tend to have shorter lifespans, software projects with rapid development cycles may have shorter lifespans too. While a high development velocity can be advantageous for quickly delivering features and updates, it must be balanced with quality assurance and sustainability to ensure the longevity of the software.

Conclusion

The application of laws of physics to software development may seem like an unconventional comparison, but it provides valuable insights into the dynamics of this industry. By recognizing the parallels between stall speed, reduced engine size, angle of attack, entropy, and metabolism in both fields, software developers can make more informed decisions, improve code quality, and increase the chances of long-term software success. Embracing these principles can help the software development community navigate the ever-changing landscape of technology with greater efficiency and effectiveness.

The Pitfalls of Digital Transformation: Why Projects Tend to Fail

In today’s rapidly evolving business landscape, digital transformation has become more than just a buzzword—it’s a necessity. Companies recognize the need to embrace technology to stay competitive, streamline processes, and enhance efficiency. However, despite the promises and investments associated with digital transformation (DT) projects, many of them fall short of delivering the expected outcomes. In this article, we will explore some common reasons why digital transformation projects tend to fail.

Disclaimer: This article was written by ChatGPT based on some ideas that I had to try to show the importance of planning for any digital transformation project.

Photo by Pixabay: https://www.pexels.com/photo/low-angle-view-of-lighting-equipment-on-shelf-257904/
  1. Legacy Systems and Data Loss:

One of the most significant hurdles in the path of successful digital transformation is the presence of legacy systems. These systems often hold vital business rules and data that have accumulated over the years. When transitioning to new digital solutions, integrating and migrating data from legacy systems can be a complex and error-prone process. Data may be lost or compromised during migration, leading to inefficiencies and disruptions in operations.

  2. Lack of User Involvement:

Digital transformation projects are not solely about implementing new technology; they also involve a cultural shift within the organization. One critical mistake is failing to engage with the end-users of the system adequately. When decisions are made in isolation from those who will use the new systems daily, it can lead to resistance, confusion, and ultimately, project failure. Ensuring that employees’ perspectives are considered and addressing their concerns is vital for the successful adoption of digital tools.

  3. Executive Overreach:

While executive buy-in is crucial for digital transformation, it can sometimes be a double-edged sword. Executives who are removed from the day-to-day operations may make suggestions or decisions that seem ideal from their perspective but are impractical or unnecessary for the actual users. These well-intentioned but misaligned directives can lead to additional costs, delays, and frustration among employees.

  4. Vendor-Centric Solutions:

Digital transformation projects often involve working with technology vendors who promise solutions that can solve all problems. However, these vendors may lack a deep understanding of the specific business and industry drivers that influence the existing systems. As a result, the software provided may not align with the unique needs and intricacies of the organization. This misalignment can lead to the implementation of ineffective or inefficient solutions.

  5. Insufficient Change Management:

Successful digital transformation is not just about implementing new technology—it’s about managing change effectively. Failing to invest in robust change management processes can hinder user adoption and undermine the project’s success. Employees need guidance, training, and ongoing support to adapt to new systems and workflows.

  6. Overly Ambitious Timelines:

Digital transformation is a complex endeavor that requires careful planning and execution. Rushing through the process with overly ambitious timelines can lead to shortcuts, overlooked details, and inadequate testing. This can result in technical glitches, data errors, and operational disruptions, ultimately undermining the project’s success.

Conclusion

Digital transformation projects are essential for businesses to remain competitive and efficient in the modern world. However, they are not without their challenges. Legacy systems, lack of user involvement, executive overreach, vendor-centric solutions, insufficient change management, and overly ambitious timelines can all contribute to the failure of such projects. To increase the chances of success, organizations must approach digital transformation with a strategic, user-centric, and well-planned mindset, taking into account the unique needs and nuances of their operations. By addressing these common pitfalls, companies can navigate the path to digital transformation more effectively and achieve their desired outcomes.

The Critical Imperative: Why Company Executives Must Know Where Data Resides

In today’s digital age, data is the lifeblood of any business. From customer information to financial records, data plays a pivotal role in driving decision-making, enhancing customer experiences, and ensuring operational efficiency. However, with the increasing volume and complexity of data, it has become imperative for company executives to answer a set of fundamental questions: Do you know where your data is stored? Do you know how it is stored? Do you know who has access to your data? Do you know what data is being captured in the name of your business? In this article, we delve into the critical importance of executives understanding the where, how, and who of company data.

Disclaimer: This article was written by ChatGPT based on some ideas that I had to try to get managers looking at their company data and to ensure that they understand its importance.

Photo by Manuel Geissinger: https://www.pexels.com/photo/black-server-racks-on-a-room-325229/
  1. Protecting Sensitive Information

One of the primary reasons executives need to know where data is stored is to protect sensitive information. Whether it’s customer data, proprietary business strategies, or employee records, mishandling or unauthorized access to data can lead to data breaches, lawsuits, and significant financial losses. Knowing where data resides helps executives implement robust security measures, ensuring that the data is adequately protected from potential threats.

  2. Compliance and Legal Obligations

Data privacy and compliance regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), have become increasingly stringent. Non-compliance with these regulations can result in hefty fines and damage to a company’s reputation. Executives who are aware of where data is stored can ensure that their organization complies with these regulations, avoiding costly legal consequences.

  3. Efficient Data Management

Efficient data management is essential for making informed decisions, optimizing operations, and improving overall business performance. When executives know where data is stored, they can implement effective data management strategies, including data consolidation, data cleansing, and data analytics, leading to more efficient processes and better-informed decision-making.

  4. Identifying Data Redundancy

Companies often collect and store the same data in multiple locations, leading to data redundancy and increased storage costs. Executives who understand where data resides can identify redundancy issues and implement data consolidation efforts, reducing storage costs and streamlining data access.

  5. Managing Data Access

Data access control is crucial to preventing data breaches and unauthorized use of sensitive information. When executives know who has access to company data, they can implement strict access controls, ensuring that only authorized personnel can view, modify, or delete data. This helps protect the organization from both internal and external threats.

  6. Enhancing Data Governance

Data governance refers to the overall management of data quality, integrity, and security. Executives who know what data is being captured in the name of their business can establish comprehensive data governance policies and practices. This ensures that data is accurate, reliable, and aligned with business objectives.

  7. Enabling Informed Decision-Making

Data-driven decision-making is a key driver of success in today’s competitive landscape. Executives who are well-informed about the location and nature of their data can harness it to make informed decisions, identify trends, and seize opportunities for growth.

Conclusion

In an era where data is king, company executives cannot afford to remain in the dark about where their data is stored, how it is managed, who has access to it, and what data is being collected. Failing to address these questions can lead to security breaches, legal troubles, inefficiencies, and missed opportunities. By understanding and taking control of their company’s data landscape, executives can not only safeguard their business but also unlock the full potential of data as a strategic asset. In today’s data-centric world, the mantra is clear: Know your data, and you’ll know the path to success.

Yet Another WordPress Plugin Template

Yet Another WordPress Plugin Template (ya-wordpress-plugin-template) or (YAWPT)

I used to hate WordPress but I have found a way that I can deliver value through WordPress Plugin ShortCodes.

I have scoured the internet for all of the tidbits of information and put them together in such a way that I
can stamp out plugins quite quickly. I use this to deliver value to my clients without having to be responsible for the whole site, content, and look and feel.

The usual conversation goes like this:

  • Client: I would like a small website and it only has to do this (Insert Feature List)…
  • Me: Let me stop you there. When you say small, does that mean that the budget is small?
  • Client: Well yeah.
  • Me: Are you guys comfortable with WordPress?
  • Client: Yeah, that is what we have now.
  • Me: Well I can give you a WordPress Plugin that exposes a shortcode. Then I can connect
    through API to your primary system to deliver (Insert Feature List). Then your web developer
    can style the page however you like and just put the shortcode in the page where you want it.
  • Client: Sounds great. What about the look and feel of what you develop?
  • Me: I will create the initial templates, and there is a template editor built into the plugin
    so your web developer can style the templates as well. It will be fast to develop, and I will provide
    a test environment for you to play before I install it on your site.
  • Client: Wow, Great. What a great programmer you are! (ok, I just added the last part for my ego)

As you know, the time suck with any project is the UI/UX and the styling. If you can deliver a shortcode
then you can drop as much as 70% of this from the budget. (This is a guess, but UI takes time.)

Delivery on WordPress is easier on the IT side as well. Sometimes I would have to provide the IT (servers, VMs, or
whatever the customer needs), but there are many vendors providing WordPress hosting at
a reasonable price, with backup options as well.

By the way, I do not claim that this is the best; it might even be the worst, but it works well for me.
I am happy to get your input, suggestions, feedback, etc.

The code is on GitHub here

Contact me if you need help. My email is somewhere in the code.

Demo

Click here to see the demo in action

GCP, Terraform, CD & Bitbucket

I have recently been working with a company, let’s call them AA, that regularly delivers applications providing an intelligent interpretation of data using the Google Cloud Platform. AA uses APIs to read data that gets fed into pipelines; the data is then interpreted by machine learning and language analysis before it is delivered to data storage, and then displayed using dashboards that read the data source. AA utilizes the skills of many developers. The solutions are developed using “the language that makes sense”, so it is not uncommon for solutions to have some Python and some Golang. Databases will be a combination of NoSQL, MySQL, and BigQuery. There are queues that are used to flatten out resource usage. AA regularly deploys to many different environments.

How do we manage deployments?

There are so many disparate technologies that have to come together to form a solution. There are naming conventions that must be adhered to. There are potentially many programming languages in a single solution. There is a list of APIs that need to be enabled as long as my arm. The solution needs to have a database instance spun up. There will be schemas that need to be corrected. Any small error in the connecting technologies will result in the failure of a part of the system.

Terraform to the rescue?

There has to be a better way! Terraform is a programming language/system that delivers the whole system configuration as code. No need to enter commands through the command line. No need to click through options in the web console. Everything can be done within the Terraform programming language. Terraform keeps the current state of the machine that it is interacting with in a state file and only applies the configuration if the system needs it. It determines the need by checking whether there is a difference between the configuration code and the machine state.

Now the problems start. Terraform is not pure. Only some of the Google Cloud Functions operations are available in the Terraform libraries, and the libraries sometimes have some “shortcomings”. So now we start to get into problems because we need some scripts to run in conjunction with Terraform. We have to be careful about changing state in a place where Terraform will not be able to resolve it automatically. I’ll get back to this, because the next problem needs to be introduced.

Multiple Instances

Terraform works great if we program it for just one instance. But we are deploying this to multiple Google Cloud Platform environments. We have a Dev instance (that gets polluted by developer testing), Staging instances, and Production. The production instances are of such high security that even the developers do not have access to them. So how do we do DevOps when developers cannot see the environment that they are deploying to?

The development team needs to write multiple Terraform scripts for all of the different environments. And have such high confidence in the deployment that it works sight unseen and with no opportunity to correct errors. How can we achieve this?

Hello Bitbucket Pipelines

With a good branching strategy and the use of Bitbucket Pipelines, we get the level of automation that we need for deployment. And because of the ability of Bitbucket to have protected environment variables, the pipelines can be set up to deploy securely.

Now we can have developer branches and deployment branches. Developers can check into the development branch, and the pipeline will run automatically to deploy and redeploy to the developers’ test environment. So far so good. But what about the duplication of code for the different environments? How do you create and maintain multiple Terraform scripts for deploying to dev, test, and production environments? And what happens when you want to deliver to a second prod environment, a third, etc.?

Selectable Configuration Variables

Terraform allows for the configuration to be passed in as JSON variables. In fact, most of Terraform can be driven by variables. In most programming languages, concepts can be abstracted: this is done with functions in structured programming and methods in object-oriented programming. Terraform has a similar sort of abstraction. The code below calls a sub-Terraform script and passes in the variables.

<code>module "pubsub_topic_tweets" {
    source = "./google_pubsub_topic"
    pubsub_topic_name = var.pubsub_topic_tweets
}</code>

In the code above, you can see that this calls a standard Terraform script, google_pubsub_topic, to set up a Pub/Sub topic. This style of Terraform programming is very important. The use of these code blocks:

  • Reuses code
  • Reduces cut-and-paste errors
  • Standardizes the way that GCP elements are created

We had a really smart developer come up with this, and my jaw dropped. Machine setup code that looks like a programming language function.

Now that we have a variable-driven approach we need to set up and select the variables.

The Environment Name decides the way

In our first trial of setting this up, we used “Dev” and “Test” for the names of the pipelines in Bitbucket. This was a complete mess because we were constantly trying to map the Google Cloud Platform name to the different pipelines. Then we decided that, since the Google Cloud Platform name was immutable, we would name our pipelines after the environment. Then when we deployed to a new environment, it would be a simple matter of following the existing pattern.

Everything then fell into place. The documentation was reduced. Adding to the environments and pipelines became intuitive. Developers that had not worked on the project were able to see the pattern quickly and then implement a new platform.

The naming convention is simply the name of the Google Cloud environment. So we were excited, we were pumped. What could we do next to reduce copy-paste? How could we improve our code further?

Terraform uses a variables.tf file to “declare” all the variables that will be used in the creation of the system. Our first attempt was to declare all the variables and then create a massive config.tfvars.json to set them up.

This would set up the name of the PubSubs, and the name of the cloud functions. So we had a massive JSON config and due to the nature of JSON (no variable substitution), we again had massive duplication of code. The answer came from a less experienced Terraform developer who could not accept duplication.

“Default” is little known but so useful

Terraform has a variables.tf file that allows you to declare all the variables that will be used. It has a little-known feature (ok, maybe it is well known, but we did not know about it) called default. This looks like the following:

<code># Pubsub Topics
variable "pubsub_topic_searches" {
    type    = string
    default = "searches20"
}
variable "pubsub_topic_tweets" {
    type = string
    default = "tweets20"
}</code>

Now we can set up all the common elements that are the same between all deployments as defaults declared in our variables.tf. Then we can remove 90% of our JSON and just keep the things that are different. What a refactor. It made me so happy that I did the programmer “happy dance” (in the privacy of my own office, of course).

Terraform “Undocumented Features”

The Pipeline feature of Bitbucket is very powerful. It basically sets up a Docker machine and runs scripts. This meant that we could code for different situations and handle some of the Terraform shortcomings, since we get a chicken-and-egg situation with the Terraform state file. We were able to use GCP utility functions to check whether the storage bucket existed. If it didn’t exist, we knew that it was the first run and could set some environment variables accordingly. In the script we could also set up the APIs (something that is not done well in Terraform), so we were able to utilise the strengths of Terraform and the strengths of bash scripting.

In the pipeline script, we use gsutil to get the storage bucket state and pass environment variables in on the Terraform command line. Within the scripts, we can also use the gcloud command line to set up all the APIs that we need, as shown below. This is all possible because we can download the Google Cloud Platform tools and install them in the Docker instance.

<code>echo "Starting…"
gcloud services enable appengine.googleapis.com
gcloud services enable bigquery.googleapis.com
gcloud services enable cloudbuild.googleapis.com</code>
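
The first-run check mentioned above was done with gsutil inside the pipeline’s bash script. Purely as an illustration of the same idea, the check could also be expressed in Python with the google-cloud-storage client; this is a hypothetical sketch, not the project’s actual script, and the bucket name shown is made up.

<code># Hypothetical illustration of the first-run check; the real pipeline used gsutil.
# Assumes the google-cloud-storage client library is installed and credentials
# are available in the environment (as they are inside the pipeline's Docker image).
import os
from google.cloud import storage

def is_first_run(bucket_name: str) -> bool:
    """Return True if the Terraform state bucket does not exist yet."""
    client = storage.Client()
    return client.lookup_bucket(bucket_name) is None  # None means "not found"

if __name__ == "__main__":
    bucket = os.environ.get("TF_STATE_BUCKET", "example-terraform-state")
    print("first run" if is_first_run(bucket) else "existing environment")</code>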

Putting it all together

Putting this all together, we have the best of all worlds: automatic and manual deployment, Terraform for automated machine deployment, and Bitbucket Pipelines for source control with DevOps. We have maintained an amazing level of security. We have utilised the best programming language for solving the computational elements of the problem. And we have achieved a level of reliability on a machine/solution that has a staggering number of moving parts.

What happens when a deployment works perfectly…