Yet Another WordPress Plugin Template

Yet Another WordPress Plugin Template (ya-wordpress-plugin-template), or YAWPT for short.

I used to hate WordPress, but I have found a way to deliver value through WordPress plugin shortcodes.

I have scoured the internet for all of the tidbits of information and put them together in such a way that I
can stamp out plugins quite quickly. I use this to deliver value to my clients without having to be responsible for the whole site, its content, or its look and feel.

The usual conversation goes like this:

  • Client: I would like a small website and it only has to do this (Insert Feature List)…
  • Me: Let me stop you there. When you say small, does that mean that the budget is small?
  • Client: Well yeah.
  • Me: Are you guys comfortable with WordPress?
  • Client: Yeah, that is what we have now.
  • Me: Well I can give you a WordPress Plugin that exposes a shortcode. Then I can connect
    through API to your primary system to deliver (Insert Feature List). Then your web developer
    can style the page however you like and just put the shortcode in the page where you want it.
  • Client: Sounds great. What about the look and feel of what you develop?
  • Me: I will create the initial templates, and there is a template editor built into the plugin
    so your web developer can style the templates as well. It will be fast to develop, and I will provide
    a test environment for you to play before I install it on your site.
  • Client: Wow, Great. What a great programmer you are! (ok, I just added the last part for my ego)

As you know, the real time sink in any project is the UI/UX and the styling. If you can deliver a shortcode,
you can drop as much as 70% of this from the budget. (This is a guess, but UI work takes time.)
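To make this concrete, here is a minimal sketch of the kind of shortcode such a plugin exposes. The shortcode name, API endpoint, and fields are hypothetical, not taken from YAWPT; it just shows the pattern of fetching from the client’s primary system and returning HTML that the web developer can style:

<code><?php
/*
Plugin Name: Client Feature Shortcode (hypothetical sketch)
*/

// Register [client_feature] so the web developer can drop it into any page.
add_shortcode( 'client_feature', function ( $atts ) {
    $atts = shortcode_atts( array( 'limit' => 10 ), $atts );

    // Fetch data from the client's primary system over its API (URL is illustrative).
    $response = wp_remote_get( 'https://api.example.com/items?limit=' . intval( $atts['limit'] ) );
    if ( is_wp_error( $response ) ) {
        return '<p>Feature temporarily unavailable.</p>';
    }

    // Render the items as plain HTML; YAWPT itself routes output through its
    // built-in editable templates instead of hard-coding markup like this.
    $items = json_decode( wp_remote_retrieve_body( $response ), true );
    $html  = '<ul class="client-feature">';
    foreach ( (array) $items as $item ) {
        $html .= '<li>' . esc_html( $item['title'] ?? '' ) . '</li>';
    }
    return $html . '</ul>';
} );</code>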

Delivery on WordPress also simplifies the IT side. Sometimes I would have to provide the IT myself (servers, VMs, or
whatever the customer needs), but there are many vendors providing WordPress hosting at
a reasonable price, with backup options as well.

By the way, I do not claim that this is the best; it might even be the worst, but it works well for me.
I am happy to get your input, suggestions, feedback, etc.

The code is on GitHub here

Contact me if you need help. My email is somewhere in the code.

Demo

Click here to see the demo in action

GCP, Terraform, CD & Bitbucket

I have recently been working with a company, let’s call them AA, that regularly delivers applications providing intelligent interpretations of data using the Google Cloud Platform. AA uses APIs to read data that is fed into pipelines; the data is then interpreted by machine learning and language analysis before it is delivered to data storage, and then displayed using dashboards that read from the data source. AA utilizes the skills of many developers. The solutions are developed using “the language that makes sense”, so it is not uncommon for a solution to have some Python and some Golang. Databases will be a combination of NoSQL, MySQL, and BigQuery. There are queues that are used to flatten out resource usage. AA regularly deploys to many different environments.

How do we manage deployments?

There are so many disparate technologies that have to come together to form a solution. There are naming conventions that must be adhered to. There are potentially many programming languages in a single solution. The list of APIs that need to be enabled is as long as my arm. The solution needs to have a database instance spun up, and there are schemas that need to be created and kept correct. Any small error in the connecting technologies will result in the failure of a part of the system.

Terraform to the rescue?

There has to be a better way! Terraform is a programming language/system that delivers the whole system configuration as code. No need to enter commands through the command line. No need to click through options in the web console. Everything can be done within the Terraform programming language. Terraform keeps the current state of the system that it is managing in a state file, and only applies the programmed configuration if the system needs it. It determines the need by checking whether there is a difference between the configuration code and the recorded state.
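The day-to-day cycle is just the standard Terraform commands (a generic sketch, not specific to AA’s setup):

<code># plan diffs the configuration code against the recorded state
terraform init     # download providers and connect the state backend
terraform plan     # show what would change; nothing is applied yet
terraform apply    # apply only the differences that plan detected</code>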

Now the problems start. Terraform is not pure. Only some of the Google Cloud Functions operations are available in the Terraform libraries, and the libraries sometimes have their “shortcomings”. So we start to get into problems, because we need some scripts run in conjunction with Terraform, and we have to be careful not to change state somewhere that Terraform will not be able to resolve automatically. I’ll get back to this, because the next problem needs to be introduced first.

Multiple Instances

Terraform works great if we program it for just one instance. But we are deploying to multiple Google Cloud Platform environments. We have a Dev instance (that gets polluted by developer testing), Staging instances, and Production. The production instances are locked down so tightly that even the developers do not have access to them. So how do we do DevOps when developers cannot see the environment that they are deploying to?

The development team needs to write multiple Terraform scripts for all of the different environments, and have such high confidence in the deployment that it works sight unseen, with no opportunity to correct errors. How can we achieve this?

Hello Bitbucket Pipelines

With a good branching strategy and the use of Bitbucket Pipelines, we get the level of automation that we need for deployment. And because Bitbucket can hold protected environment variables, the pipelines can be set up to deploy securely.

Now we can have Developer Branches and Deployment Branches. Developers can check into the development branch, and the pipeline will run automatically to deploy and redeploy to the developers’ test environment. So far so good. But what about the duplication of the code for the different environments? How do you create and maintain multiple Terraform scripts for deploying to dev, test, and production environments? And what happens when you want to deliver to a second prod environment, a third, etc.?

Selectable Configuration Variables

Terraform allows for the configuration to be passed in as JSON variables. In fact, most of Terraform can be driven by variables. In most programming languages, concepts can be abstracted; this is done with functions in structured programming and methods in object-oriented programming. Terraform has a similar sort of abstraction. The code below calls a sub-Terraform script (a module) and passes in the variables.

<code>module "pubsub_topic_tweets" {
    source = "./google_pubsub_topic"
    pubsub_topic_name = var.pubsub_topic_tweets
}</code>

In the code above you can see that this calls a standard Terraform script, google_pubsub_topic, to set up a Pub/Sub queue. This style of Terraform programming really makes use of code blocks, and that is very important. The use of these code blocks:

  • Reuses code
  • Reduces cut-and-paste errors
  • Standardizes the way that GCP elements are created

We had a really smart developer come up with this, and my jaw dropped. Machine setup code that looks like a programming language function.
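For reference, the module being called can be as small as two files. The real google_pubsub_topic module is not reproduced in this article, so the following is a hypothetical reconstruction built around the standard google_pubsub_topic resource:

<code># ./google_pubsub_topic/variables.tf
variable "pubsub_topic_name" {
    type = string
}

# ./google_pubsub_topic/main.tf
resource "google_pubsub_topic" "topic" {
    name = var.pubsub_topic_name
}</code>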

Now that we have a variable-driven approach, we need to set up and select the variables.

The Environment Name decides the way

In our first trial of setting this up, we used “Dev” and “Test” as the names of the pipelines in Bitbucket. This was a complete mess, because we were constantly trying to map the Google Cloud Platform names to the different pipelines. Then we decided that, since the Google Cloud Platform environment names were immutable, we would name our pipelines after the environments. Then, when we deployed to a new environment, it would be a simple matter of following the existing pattern.

Everything then fell into place. The documentation was reduced. Adding to the environments and pipelines became intuitive. Developers that had not worked on the project were able to see the pattern quickly and then implement a new platform.
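A bitbucket-pipelines.yml following this convention might look like the sketch below. The environment names and the deploy.sh helper are illustrative, not taken from the project:

<code># bitbucket-pipelines.yml (illustrative sketch)
image: hashicorp/terraform:light
pipelines:
  branches:
    acme-dev:                     # branch name matches the GCP environment name
      - step:
          name: Deploy to acme-dev
          script:
            - ./deploy.sh acme-dev
    acme-prod:
      - step:
          name: Deploy to acme-prod
          deployment: production  # protected variables live in this deployment
          script:
            - ./deploy.sh acme-prod</code>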

You can see that the naming convention is the same as the name of the Google Cloud environment. So we were excited, we were pumped. What could we do next to reduce copy-paste? How could we improve our code further?

Terraform uses a variables.tf file to “declare” all the variables that will be used in the creation of the system. Our first attempt was to declare all the variables and then create a massive config.tfvars.json to set them up.

This would set up the names of the Pub/Subs and the names of the cloud functions. So we had a massive JSON config, and due to the nature of JSON (no variable substitution), we again had massive duplication of code. The answer came from a less experienced Terraform developer who could not accept the duplication.

“Default” is little known but so useful

Terraform’s variables.tf file allows you to declare all the variables that will be used. It has a little-known feature (ok, maybe it is well known, but we did not know about it) called Default. This looks like the following:

<code># Pubsub Topics
variable "pubsub_topic_searches" {
    type    = string
    default = "searches20"
}
variable "pubsub_topic_tweets" {
    type = string
    default = "tweets20"
}</code>

Now we can declare all the common elements that are the same between all deployments in our variables.tf as defaults. Then we can remove 90% of our JSON and keep just the stuff that is different. What a refactor! It made me so happy that I did the programmer “happy dance” (in the privacy of my own office, of course).
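After the refactor, an environment’s config.tfvars.json only needs to hold the values that differ from the defaults. A hypothetical staging override might be as small as this:

<code>{
    "pubsub_topic_searches": "searches20-staging",
    "pubsub_topic_tweets": "tweets20-staging"
}</code>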

Terraform “Undocumented Features”

The Pipeline feature of Bitbucket is very powerful. It basically sets up a Docker machine and runs scripts. This meant that we could code for different situations and handle some of the Terraform shortcomings, since we get a chicken-and-egg situation with the Terraform state file. We were able to use GCP utility functions to check whether the storage bucket existed; if it did not exist, we knew that it was the first run and could set some environment variables accordingly. In the scripts we could also set up the APIs (something that is not done well in Terraform), so we were able to utilise the strengths of Terraform and the strengths of bash scripting.

Below is an example of the kind of script that we use. Here you can see that we use gsutil to get the storage bucket state and pass environment variables in on the Terraform command line.
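The original script is not reproduced here, so the following is a minimal sketch of the approach just described; the bucket naming, project id, and first-run flag are assumptions for illustration:

<code>#!/bin/bash
# deploy.sh <environment-name> - hypothetical reconstruction
ENV_NAME=$1
BUCKET="gs://${ENV_NAME}-tfstate"    # bucket that backs the Terraform state

# Use gsutil to check the storage bucket state; no bucket means this is the first run.
if gsutil ls -b "$BUCKET" > /dev/null 2>&1; then
    FIRST_RUN=false
else
    FIRST_RUN=true
    gsutil mb -p "$ENV_NAME" "$BUCKET"
fi

# Pass environment variables in on the Terraform command line.
terraform init -backend-config="bucket=${ENV_NAME}-tfstate"
terraform apply -auto-approve \
    -var "project_id=${ENV_NAME}" \
    -var "first_run=${FIRST_RUN}" \
    -var-file="config.${ENV_NAME}.tfvars.json"</code>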

Within the scripts, we can use the gcloud command-line call to set up all the APIs that we need. This is all possible because we can download the Google Cloud Platform tools and install them in the Docker instance.

<code>echo "Starting…"
gcloud services enable appengine.googleapis.com
gcloud services enable bigquery.googleapis.com
gcloud services enable cloudbuild.googleapis.com</code>

Putting it all together

Putting this all together, we have the best of ALL worlds: the best of automatic and manual deployment, the best of Terraform for automated machine deployment, and Bitbucket Pipelines for source control and DevOps. We have maintained an amazing level of security. We have utilised the best programming language for solving each computational element of the problem. And we have achieved a high level of reliability on a machine/solution that has a staggering number of moving parts.

What happens when a deployment works perfectly…

Are You a Digital Hoarder?

Are You a Digital Hoarder? How does it affect Business?

I am the first to admit that I have far too many photos these days. My phone fills up, and then I realize that there are too many pictures to delete, so I say to myself “there are too many to deal with now, I will do it later” or “that looks like it could be nice, what if I want to look at it later?”. Then I see the price of a new phone, and I slash and burn photos to make the phone last another year.

Then, while flipping through the TV channels, I see the TV show “Hoarders – Buried Alive” and a light goes on…

So, now the light is on, let’s shine it around and see what comes.

In the physical world, most normally functioning people are limited by space or income. Faced with the choice of a cluttered environment or living in a hoard, we:

  • Do not want to live like that.
  • Do not have much space to collect junk.
  • Do not have the money to buy uncontrollably.

So in the physical world, we are kept in check by the realities of life. What if we have unlimited space, unlimited funds, and the ability to recall anything from the clutter without huge efforts of digging it out?

This is the world of the Digital Hoarder.

“Digital Hoarders can adversely affect business”

Space in the cloud is cheap. If you do a search for “free cloud storage” you are given lists of Google Drive, Amazon, and many others giving away space for free. Their business model is to convert you into a Digital Hoarder: “Why throw anything away?” As soon as you exceed your free limit you are hooked, and then you have to pay. Only a few bucks a month to get LOTS more storage. Google started this a while ago with a free email account where you never had to delete messages: 15GB of free storage in the cloud. Then you are given the option to back up your computer to the cloud. Apple is the same with iCloud backup. But all this does is allow you to stash your hoard for “later”, and we all know that later never comes. Later is too hard, because the hoard has grown beyond normal control, or because we just cannot bear the thought of throwing away the video of our little baby rolling over for the first time.

When you go online and search for “Free Cloud Storage” you are now a digital hoarder.

So how do we spot a digital hoarder in the workplace? Are functional digital hoarders a detriment to business? Well, not really, but there are a few situations that our company has come across that have adversely affected businesses and cost them money.

The Task Digital Hoarder

A mature business usually has a task or support system that keeps track of all outstanding tasks, bugs, projects, etc. At a glance, the company leaders can see what work is outstanding and what is being worked on. There are many versions of this; one of the more common systems is Jira. It tracks all of your tasks.

Our recommendation is 1 year

The task hoarder is someone who is reluctant to throw away tasks. The backlog gets so long that it takes hours and days to keep going through the tasks and following up with each task’s creator: is it still relevant? Our recommendation is 1 year. If you have not acted on the task/bug/project in one year, then you need to ask the question: “Is this really relevant to our business?” The person that created the task has not been pushing it, so it has simply been forgotten. Any task that is not acted on after a year is deleted. If it is vital, then it is recreated with a more up-to-date description and a better proposition of the business value and why it is important.

The cost to the business, if this is not done, is that you might be hiring staff to work on a large backlog of tickets that are just not that necessary. Slash and burn: any ticket older than 1 year is not relevant anymore.

Code Hoarders

These are the developers who write software and, while writing, get concerned with future business needs, and so end up with commented-out lines of code in production. You can tell a code hoarder by the amount of commented-out code. This affects business because it paralyzes the next person that comes along. They see the commented-out code and wonder: was it meant to be commented out? Did it get through the testing procedures this way? And a few dozen more questions. Then it compounds when the next person does not fix or remove the code.

The obvious answer is version control software. It remembers everything. Developers must actively ensure that there are no commented-out lines of code in production software. There are some exceptions to this, for example periodic code such as job control changes for summer time, but such code should carry comments explaining why it is commented out.

Environmental

Digital hoarding is not free. The cloud servers are computers that require power, and materials for storage. There is no problem now, but in 20 years, when we have not deleted a single thing, will we have the equivalent of a digital landfill…

Announcing TimeTrack – Time tracking & invoicing for Contractors

B2B Consultancy introduces TimeTrack.au – a tool for all freelance and contracting professionals to track their time and send invoices.

TimeTrack.au

Get To Know TimeTrack

The TapTime software was written in 2014 to address my need to deliver accurate time reports to my clients.

Over the years I have continued to adjust and tweak the software so that it is easy to track time and send invoices. Now, in 2023, there are over 6,000 timesheet entries and over 250 invoices that I have created as a freelance software developer.

  • My clients love it because they can always see what I am working on.
  • Are you a freelance developer that charges per hour?
  • Do you want to keep accurate time tracking?
  • Do you want to send invoices to your clients?
  • Do you want to track your work with graphs and calendars?
  • Do you want a free solution for tracking time?
  • Do you want to pay only when you get money from your clients?
  • TimeTrack is for you!!

Mention that you have read this article, and I will give you an additional 5 invoices to try out TimeTrack.

Project Delivered PSCB

A small win for B2B in Cambodia. We delivered on a small project to create a website for our customer. The customer needed a promotional site for his Pro Soccer Bets Club to go along with his tipping business over Telegram.

B2B looked at his needs and was able to generate a site that the customer can manage himself. The customer was very happy, as we delivered within his small budget and two weeks ahead of schedule. We registered the domain name, secured hosting based on where the highest density of his customers is located, and purchased a nice template.

B2B will now move into a maintenance role for the website, ensuring that SEO is maintained and backups are done.

Have a look! https://www.prosoccerbets.com

Easiest Nagios Extensions


Image by Gerd Altmann from Pixabay

There have been 2 scripts that have allowed me to extend Nagios more easily than almost any other monitoring configuration over the last 10 years. They have allowed me to create monitors within the applications that I have built and within existing applications. A few times they have helped me to solve complex monitoring problems where the vendors had provided ineffective documentation.

This article assumes that you have an understanding of Nagios.

Quite Simply Easy Monitoring

  • check_http_status.sh – allows me to write code behind any URL that returns one of the predefined strings (STATE_OK, STATE_WARNING, STATE_CRITICAL) plus a message that helps to determine the error.
  • check_http_content.sh – allows me to search a web page for a string. If that string does not exist, an error is returned.

Simple, right? Does something like this already exist? Maybe. Have I recreated something? Well, maybe again. But I have been using these scripts for the last 10 years and they have stood the test of time. They are simple, easy to call, and use existing infrastructure. Web programmers can make as many hooks as needed for whatever has to be monitored.

One time I was working for a telco, and the ISDN connections had a tendency to drop out at the most inconvenient times. We were always on the back foot, reacting to the problem only when our customers reported it to us. How to fix it? This was old equipment, and there was little documentation for the SNMP traps. BUT there was a web page that would show a red light icon when the ISDN lines had a problem. check_http_content.sh allowed me to search for the green icon instead (the monitor is listed below). Within half an hour I had solved all of our ISDN monitoring issues without having to sift through endless Google searches trying to find the correct SNMP trap.

The other script that has been incredibly useful (check_http_status.sh) allows me to write hooks into all of my web apps. This means that all of the complex monitoring can be part of the web application itself (DevOps?).

Pros and Cons

The downside of this is that the monitoring server adds additional load to your web server. This can be controlled by the interval configuration in Nagios, and it is a small price to pay for such easy monitoring of your systems. Anything can be monitored: processes, database sizes, event frequency, cash flow, service tickets. Anything that you can write a program for can now be monitored in Nagios.

You have to consider security when you write these scripts. It was not a problem for me, as I was on a private network. You can control access by whitelisting your monitoring server’s IP, or you can add some authentication to your scripts when you call curl.
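For example, if the hook is protected with HTTP basic authentication, the curl call inside check_http_status.sh only needs one extra flag (the credentials here are placeholders):

RESP=`curl -s --connect-timeout 300 --retry 3 -f -u nagios:s3cret "$URL"`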

If you need some assistance implementing this with your DevOps team, please contact us.

nagiosCheckDatabase.php – Example web hook for checking that a database is up. In this case, an Oracle database.

<?php
//nagiosCheckDatabase.php
require_once ( dirname ( __FILE__ ) . '/config.php' );

$dbName = Request::get ( 'HOST' ); // the HOST query parameter selects which database to check

// Request and DBTable come from the site's own framework (loaded via config.php)
$tab = new DBTable ( $dbName, 'SELECT SYSDATE FROM DUAL', null, DB::FETCH_NUM );

if ( ! $tab->ok() ) {
    // check_http_status.sh greps the response for these STATE_* strings
    echo "Unable to query Database STATE_CRITICAL";
}
else {
    echo "SYSDATE=" . $tab->getValue() . " - STATE_OK";
}

myservers.cfg – Example Service Configuration for Nagios for check_http_status and check_http_content

define service {
  use                   generic-service
  host_name             sydney-mpcsyd
  service_description   Job Results
  check_command         check_http_status!http://192.168.3.200:8080/LiveStats/nagiosCheckJobResults.php?HOST=mpcsyd
  normal_check_interval 60
  retry_check_interval  15
  max_check_attempts    3
}
define service{
  use                   generic-service
  host_name             sydney-rev-au-pocmp3
  service_description   ISDN OCMP3
  check_command         check_http_content!http://192.168.3.130:4242/this.BMPFFaultMgr?GetMapAction=HTML&LEVEL=TOP_LEVEL&TYPE=1&NAME=Root&DATE=0&LEV_NUM=0&LEV_NAME0=N0&LEV_NAME1=N1&LEV_NAME2=N2&LEV_NAME3=N3&LEV_TYPE0=T0&LEV_TYPE1=T1&LEV_TYPE2=T2&LEV_TYPE3=T3!greenISDNIcon.gif
}

commands.cfg – This is the Nagios configuration that connects the services to the scripts

define command {
  command_name check_http_status
  command_line /etc/nagios/scripts/check_http_status.sh '$ARG1$'
}
define command {
  command_name check_http_content
  command_line /etc/nagios/scripts/check_http_content.sh '$ARG1$' '$ARG2$'
}

check_http_status.sh

#! /bin/bash

STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4

if test -x /usr/bin/printf; then
	ECHO=/usr/bin/printf
else
	ECHO=echo
fi

URL=$1

RESP=`curl -s --connect-timeout 300 --retry 3 -f "$URL"`
RES=$?

if [ "$RES" != "0" ]
then
    echo "Unable to connect to $URL ($RES)"
    exit $STATE_WARNING
else
    echo "$URL: $RESP"
    if echo $RESP | grep -q STATE_OK
    then
        exit $STATE_OK
    elif echo $RESP | grep -q STATE_WARNING
    then
        exit $STATE_WARNING
    elif echo $RESP | grep -q STATE_CRITICAL
    then
        exit $STATE_CRITICAL
    else
        exit $STATE_WARNING
    fi
fi

check_http_content.sh

#! /bin/bash

STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4

if test -x /usr/bin/printf; then
	ECHO=/usr/bin/printf
else
	ECHO=echo
fi

URL=$1
PROCESS=$2

RESP=`curl --silent -f "$URL"`
RES=$?

if [ "$RES" != "0" ]
then
    echo "Unable to connect to $URL ($RES)"
    exit $STATE_WARNING
else
    if echo $RESP | grep -q "$PROCESS"
    then
        echo "String ($PROCESS) exists in URL: $URL"
        exit $STATE_OK
    else
        echo "Could not find: String ($PROCESS) in URL: $URL"
        exit $STATE_CRITICAL
    fi
fi

What are the Basics?

Image by Gino Crescoli from pixabay.com

What are the Basics for a Software Company? Most simply: 4 x M’s, 3 x $’s, 5 x W’s, and 1 x A. Confused yet?

When the B2B Consultancy company is invited to help an organization, we always start with the basics. We ask questions surrounding the 4 x M’s. At the highest level, we look at the 4 areas of management before we dive deeper and try to address any problems. We look at 1. People Management, 2. Project Management, 3. Change Management, and 4. Version Management. Now you could say that we have 2 x M’s and 2 x C’s, because you may think of change control and version control, but the word “control” is a bit too aggressive when you talk about software development in a company. So I take a little poetic license and say that the basics of any software company start with the 4 x M’s. There is also the added element of security, which permeates every level of a company, but I will deal with that in another article.

  1. People Management – I believe that people are the greatest asset of a software company, as they work to transform ideas into software, and software generates money. We must ensure that our people are well looked after and know what they are doing. People need to know their place in the company, that they are making a difference, and that they are being rewarded for their efforts. We also look at typical HR practices to ensure that people are selected correctly. Then we look at the pragmatic elements of people management: meeting structure, one-on-ones, seating layouts, tools, company perks. All the elements that make people want to be at the company and keep employees productive.
  2. Project Management – A project is a chunk of work that the company decides needs to be put in place to make the company better. This can be anything from a new building to a new website, new software, or an advertising campaign. A project is, at a high level, something that people in the company will do to make it better. As most companies are resource-limited, not all projects can be done at once. When deciding which projects must be completed, there are some very basic considerations. I call these the 3 x D’s or 3 x $’s: 1. Does it make money? 2. Does it save money? 3. Does it improve the goodwill of the company (hard to put a dollar amount on)? At the most basic level, if you rate every project against these 3 categories you have an objective way to determine the priorities of the company. Obviously, it is not this easy, and you have to be careful of investing too much time in determining the value of the 3 x $’s for a project that is a low priority. There is also a consideration of the ROI on the 3 x $’s, so that the time of implementation is considered. But at the highest level, an objective priority helps to focus a company’s people on what is important. This style of project management is compatible with agile methodologies for software development and can be pushed down to further break down a project.
  3. Change Management – Everything changes: infrastructure, code, specifications, business priorities, security, EVERYTHING. How do you manage that change? Many of the compliance standards (ISO9000, PCI-DSS) audit this. So when we look at this, we look at all the elements of the projects and what the company does; this gives guidance as to how change is being handled. There are many ways of doing change management, but when you look at what the company does to make money, you get an understanding of the changes. And that helps to answer the next question, of Version Management. Most change management follows the 5 x W’s: 1. Who requested the change? 2. Why did they request the change? 3. What will change? 4. When will it go into effect? 5. Who will implement the change? If the change management systems can track this information then, at a high level, you have change management under control.
  4. Version Management – is driven largely by the change management systems and tools that you have in place. Because I have a software background, I like to have all version control as code. I think that IT infrastructure should be code (Ansible), database objects should be stored as code (I will talk more about this in another article), and computer software should be stored as code. Documents do not fit so well into version control tools such as Git. There are many version control systems supporting different sets of tools. I personally like Git, as does the software industry, and I like document collaboration tools such as GSuite. But there are different toolsets based on company requirements. The choice of the Version Management system depends on 1 x A: what are the Artefacts of the company? Usually, it comes down to 2 version management systems and ensuring that we have a 90% fit of the artefacts with the version management tools.

Is your company addressing the Basics?

Back 2 Basics Programming

Git Branching

Firstly, credit where credit is due: this insight comes from Vincent Driessen’s Git branching model, which first appeared here, and his image was republished with permission. This branching strategy has been applied to many organisations and software development projects. Thanks, Vincent! At B2B we encourage best-practice programming techniques that suit your development style, your products, and your company.

Continuous Integration, Continuous Delivery, and unit testing are very important. The techniques and tools that we work with are many and varied. You will not, for example, find any religious zealots here advocating full TDD for your company. We will talk about code coverage for unit testing and apply some guidelines and pragmatic steps to achieve great results. Very soon your team will be developing faster and more reliably, and loving it!