
Humans are smarter than any type of AI – for now…

Despite all the technological advancements, machines today can only achieve the first two of the three AI objectives. AI capabilities are at least equalling and in most cases exceeding humans in capturing information and determining what is happening. When it comes to real understanding, machines still fall short – but for how long?

In the blog post, “Artificial Intelligence Capabilities”, we explored the three objectives of AI and its capabilities – to recap:


  • Capturing Information
    • 1. Image Recognition
    • 2. Speech Recognition
    • 3. Data Search
    • 4. Data Patterns
  • Determine what is happening
    • 5. Language Understanding
    • 6. Thought/Decision Process
    • 7. Prediction
  • Understand why it is happening
    • 8. Understanding

To execute these capabilities, AI leans heavily on three technology areas (enablers):

  • Data-collecting devices, e.g. mobile phones and IoT
  • Processing Power
  • Storage

AI relies on large amounts of data, which requires storage and powerful processors to analyse the data and calculate results through complex algorithms – resources that were very expensive until recent years. Technology enhancements in machine computing power following Moore’s law, the now mainstream availability of cloud computing and storage, and the fact that there are more mobile phones on the planet than humans have really enabled AI to come to the forefront of innovation.


AI at the forefront of innovation – here are some interesting facts to demonstrate this point:

  • Amazon uses machine learning systems to recommend products to customers on its e-commerce platform. AI helps it determine which deals to offer and when, and influences many aspects of the business.
  • A PwC report estimates that AI will contribute $15.7 trillion to the global economy by 2030. AI will make products and services better, and it’s expected to boost GDP globally.
  • The self-driving car market is expected to be worth $127 billion worldwide by 2027. AI is at the heart of the technology to make this happen. NVIDIA created its own computer — the Drive PX Pegasus — specifically for driverless cars, powered by the company’s AI and GPUs. It starts shipping this year, and 25 automakers and tech companies have already placed orders.
  • Scientists believed that we were still years away from AI being able to win at the ancient game of Go, regarded as the most complex human game. Yet Google’s AI recently beat the world’s best Go player.

To date, computer hardware has followed a growth curve called Moore’s law, in which power and efficiency double every two years. Combine this with recent improvements in software algorithms and the growth is becoming explosive. Some researchers expect artificial intelligence systems to be only one-tenth as smart as a human by 2035. Things may start to get a little awkward around 2060, when AI could start performing nearly all the tasks humans do — and doing them much better.

Using AI in your business

Artificial intelligence has so much potential across so many different industries that it can be hard for businesses looking to profit from it to know where to start.

By understanding the AI capabilities, this technology becomes more accessible to businesses that want to benefit from it. With this knowledge you can now take the next steps:

  1. Knowing your business, identify the right AI capabilities to enhance and/or transform your business operations, products and/or services.
  2. Look at AI vendors with a critical eye, understanding which AI capabilities are actually offered within their products.
  3. Understand the limitations of AI and be realistic about whether alternative solutions would be a better fit.

In a future post we’ll explore some real-life examples of the AI capabilities in action.

 

Also read:

GANTT Charts

A Gantt chart is a horizontal bar chart developed as a production control tool in 1917 by Henry L. Gantt, an American engineer and social scientist. Frequently used in project management, a Gantt chart provides a graphical illustration of a schedule that helps to plan, coordinate, and track specific tasks in a project.

Gantt charts may be simple versions created on graph paper or more complex automated versions created using project management applications such as Microsoft Project or Excel.

A Gantt chart is constructed with a horizontal axis representing the total time span of the project, broken down into increments (for example, days, weeks, or months) and a vertical axis representing the tasks that make up the project (for example, if the project is outfitting your computer with new software, the major tasks involved might be: conduct research, choose software, install software). Horizontal bars of varying lengths represent the sequences, timing, and time span for each task. Using the same example, you would put “conduct research” at the top of the vertical axis and draw a bar on the graph that represents the amount of time you expect to spend on the research, and then enter the other tasks below the first one and representative bars at the points in time when you expect to undertake them. The bar spans may overlap, as, for example, you may conduct research and choose software during the same time span. As the project progresses, secondary bars, arrowheads, or darkened bars may be added to indicate completed tasks, or the portions of tasks that have been completed. A vertical line is used to represent the report date.
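
For illustration, here is a minimal sketch of how such a chart could be drawn programmatically, using Python’s matplotlib; the task names, durations, and report date are hypothetical:

```python
import matplotlib.pyplot as plt

# Hypothetical tasks: (name, start day, duration in days)
tasks = [("Conduct research", 0, 5),
         ("Choose software", 3, 4),   # overlaps with the research task
         ("Install software", 7, 2)]

fig, ax = plt.subplots()
for row, (name, start, length) in enumerate(tasks):
    ax.broken_barh([(start, length)], (row - 0.4, 0.8))  # one bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _, _ in tasks])
ax.invert_yaxis()              # first task at the top
ax.set_xlabel("Project day")
ax.axvline(6, linestyle="--")  # vertical line marking the report date
plt.show()
```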

Gantt charts give a clear illustration of project status, but one problem with them is that they don’t indicate task dependencies – you cannot tell how one task falling behind schedule affects other tasks. The PERT Chart, another popular project management charting method, is designed to do this. Automated Gantt charts store more information about tasks, such as the individuals assigned to specific tasks, and notes about the procedures. They also offer the benefit of being easy to change, which is helpful. Charts may be adjusted frequently to reflect the actual status of project tasks as, almost inevitably, they diverge from the original plan.

Also Read…

Management Communication Plan

PERT Charts

A PERT chart is a project management tool used to schedule, organize, and coordinate tasks within a project. PERT stands for Program Evaluation Review Technique, a methodology developed by the U.S. Navy in the 1950s to manage the Polaris submarine missile program. A similar methodology, the Critical Path Method (CPM) was developed for project management in the private sector at about the same time.

PERT Chart 1

A PERT chart presents a graphic illustration of a project as a network diagram consisting of numbered nodes (either circles or rectangles) representing events, or milestones in the project linked by labelled vectors (directional lines) representing tasks in the project. The direction of the arrows on the lines indicates the sequence of tasks. In the diagram, for example, the tasks between nodes 1, 2, 4, 8, 9 and 10 must be completed in sequence. These are called dependent or serial tasks. The tasks between nodes 2 and 3, and nodes 2 and 4 are not dependent on the completion of one to start the other and can be undertaken simultaneously. These tasks are called parallel or concurrent tasks. Tasks that must be completed in sequence but that don’t require resources or completion time are considered to have event dependency. These are represented by dotted lines with arrows and are called dummy activities. For example, the dashed arrow linking nodes 6 and 9 indicates that the system files must be converted before the user test can take place, but that the resources and time required to prepare for the user test (writing the user manual and user training) are on another path. Numbers on the opposite sides of the vectors indicate the time allotted for the task.
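
The scheduling logic behind such a network can be sketched in a few lines: a task’s earliest finish is its own duration plus the latest earliest finish among its prerequisites, and the project duration is the longest such path through the network (the critical path). Here is a minimal Python sketch over a hypothetical four-task network:

```python
from functools import lru_cache

# Hypothetical PERT-style network: task -> (duration, prerequisite tasks)
tasks = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),  # D depends on both B and C
}

@lru_cache(maxsize=None)
def earliest_finish(name: str) -> int:
    # A task finishes no earlier than its duration plus the
    # latest earliest finish among its prerequisites.
    duration, preds = tasks[name]
    return duration + max((earliest_finish(p) for p in preds), default=0)

# The project duration equals the longest path through the network.
print(max(earliest_finish(t) for t in tasks))  # -> 12 (critical path A, B, D)
```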

The PERT chart is sometimes preferred over the Gantt Chart, another popular project management charting method, because it clearly illustrates task dependencies. On the other hand, the PERT chart can be much more difficult to interpret, especially on complex projects. Frequently, project managers use both techniques.

Also Read…

Management Communication Plan

Project Failure? How to Recover and/or Prevent…

Statistics indicate that 68% of all IT projects are bound to fail!

The PMI defines a high-performing organisation as a company that completes 80% or more of its projects on time, on budget, and meeting original goals. In a low-performing organisation, only 60% or fewer projects hit the same marks.

Projects fail for all kinds of reasons:

  • Stakeholders can change their objectives
  • Key team members can leave for other companies
  • Budgets can disappear
  • Materials/Vendors can be delayed
  • Priorities can go unmanaged
  • Running out of time
  • …and others

In this post:

> How to prevent project failure (with some statistics)

> How to recover a failing project

How to prevent project failure

Prevention is the best cure, so what can you do to prevent projects from failing? Here are some statistics…

  • Organisations that invest in proven project management practices waste 28 times less money because more of their strategic initiatives are completed successfully.
    Source: PMI’s Pulse of the Profession Survey, 2017.
  • 77% of high-performing organizations have actively-engaged project sponsors, while only 44% of low-performing organizations do.
    Source: PMI’s Pulse of the Profession Survey, 2017.
  • 46% of CIOs say that one of the main reasons IT projects fail is weak ownership.
    Source: The Harvey Nash/KPMG CIO Survey, 2017.
  • 33% of IT projects fail because senior management doesn’t get involved and requirements/scope change mid-way through the project.
    Source: A Replicated Survey of IT Software Project Failures by Khaled El Emam and A. Güneş Koru, 2008.
  • 78% of respondents feel that business is out of sync with project requirements and business stakeholders need to be more involved in the requirements process.
    Source: Doomed from the Start Industry Survey by Geneca, 2011.
  • 45% of the managers surveyed say business objectives are unclear to them.
    Source: Doomed from the Start Industry Survey by Geneca, 2011.
  • Companies that align their enterprise-wide PMO (project management office) to strategy had 38% more projects meet original goals than those that did not. They also had 33% fewer projects deemed failures.
    Source: PMI’s Pulse of the Profession Survey, 2017.
  • 40% of CIOs say that some of the main reasons IT projects fail is an overly optimistic approach and unclear objectives.
    Source: The Harvey Nash/KPMG CIO Survey, 2017.
  • Poor estimation during the planning phase continues to be the largest (32%) contributor to IT project failures.
    Source: PwC 15th Annual Global CEO Survey, 2012.
  • Projects with effective communication are almost twice as likely to successfully deliver project scope and meet quality standards than projects without effective communication (68% vs 32% and 66% vs 33%, respectively.)
    Source: PwC 15th Annual Global CEO Survey, 2012.

How to recover a failing project

These statistics show that the odds are not in your favour. It is inevitable that you will have to deal with a failing project or two at some time in your career… You can turn the odds in your favour by taking action to recover failing projects.

Here are four steps you can use that could save a failing project — backed up by original research from Gartner, iSixSigma, PMI Project Zone Congress, The Institution of Engineering and Technology, and Government CIO Magazine. Follow these four steps and salvage your failing project!

Step 1: Stop and Evaluate

Step 1 – Big action items:

  • Issue a “stop work” order
  • Talk with everyone

Metrics/Indicators: The right project Management Information (MI) should give you the early warning signs you need when things are not going according to plan and are heading towards failure. These signs should drive you to action, as rescuing a failing project is not a task to be sneezed at. It takes planning, and the process can consume weeks of key resources’ time and effort.

People: To help ease the pain of stopping a project, work with the team members’ managers (resource owners) to identify and assign interim work. As people are your most important asset, it is important to keep them productively engaged while you are evaluating and re-planning your project recovery.

Project artefacts/deliverables: Make sure all the project artefacts and deliverables are safely stored where they cannot be tampered with for the interim period.

Communicate (clear, concise, and concrete): Communicate to your team why their project is on hold. Spend the needed time to learn as much as you can about each team member’s opinions of the project and of each other. Learning that their project will be put on hold will inevitably create distrust; transparency and tailored messaging are the best ways to mitigate bad feelings. See the blog posts “Management Communication Plan” and “Effective Leadership Communication”.

Project/Delivery Manager (You): Check your ego. Go to the major stakeholders and ask for anonymous feedback on their view of the overall project. When evaluating their responses, don’t forget to consider company culture and politics and how those factors may have played a role in forming the stakeholders’ opinions.

Step 2: Why your project is failing – Root causes

Step 2 – Big action items:

  • Establish allowable solutions for project rescue (including project termination)
  • Identify root causes of the problem
  • Identify risks to project continuation

Determine the root causes: Most times the cause of project problems is not immediately obvious. Even the best project managers — those with excellent project plans, appropriate budgets, and fantastic scope control — also struggle, on occasion, with project failure.

You’ll only get to the bottom of it by doing a Root Cause Analysis (RCA), and the “5 Whys” technique can help with that. See “The 5 Whys for root cause analysis”.

Surface-level answers are often the temptation when project managers reach this step. They might focus on the complexity of their project, their outdated project management software or methodology, their unclear objectives, or their stakeholders’ lack of involvement. All of these problems are so generic that they don’t provide enough insight to create real solutions.

Apply the “5 Whys” and be specific when answering these questions, e.g.:

  • Why are objectives unclear?
  • Why aren’t users getting involved?
  • Why are the estimates wrong?

Of course, some of these answers may be hard to hear, and solutions can range from the challenging to the impossible. Remember: if these issues could be easily remedied, they would have been addressed and resolved. Even simple problems — like a team member leaving — can take months to fix. Ask yourself: are you using the right technology for the job? Are your dependencies so external that project control is simply out of your hands?

If you’re still struggling to figure out where the root of your project failure is, consider these seven issues – the most common causes of project failure.

  • Complexity
  • External
  • Financial
  • Operational
  • Organizational
  • Schedule
  • Technology

Risk Assessment: What are the risks when trying to salvage the project? Are those risks worth it? Is the project salvageable? Answer these questions before moving on.

Step 3: War Room

Step 3 – Big action items:

  • Set up the war room
  • Re-engage stakeholders
  • Create a tentative plan to move forward

Okay, General!

Assemble the team, seat them all together, and work through a rescue workshop. You’re in “kill or fix” mode: you’re done with fact-finding, asking questions for further research, or finding other excuses to delay the process. That should all have been done in step two. You’re focussed on figuring out what to do with your project.

The “war room” will be intense – all members need to be prepared and in the right problem-solving mindset!

The decision-making process could take two hours or several days. All key decision makers must be present. As this is not always possible, some executives may prefer to be called in as the meeting is nearing its end, when team members can present prepared options.

To get the most out of the workshop, conduct the meeting face to face (take the meeting offline). Try to limit the meeting to ten people, including the most important stakeholders (like the sponsors), the project manager, and senior team members, including a technical representative to give insight into plan feasibility.

The war room is serious business – prepare for it. Create an agenda to go over findings, from quantitative reporting to team member interviews. Encourage pre-war-room collaboration (covering the outcomes of steps 1 and 2) toward the ideal shared result.

When you start the war room meeting, all project material should be readily available. That’s your fact base, driving data-driven assumptions and decisions.

Using the facts, the purpose of the war room, in essence, is to answer three deceptively complex questions:

  • Is the business case still valid?
  • If the business case is no longer valid, is there potential for a new, reimagined, justified business case?
  • (If so): Are the added costs for project rescue worth it?

Encourage your task force to focus on identifying the project’s primary drivers (i.e. business need/value, budget, schedule, scope, or quality). Ideally, there should only be one driver that controls the outcome of the project – this is usually the business need for the project’s deliverables.

Sometimes the primary driver is beyond repair. For example, if the core due date has passed and it was aligned with a market cycle (ex: Black Friday to Christmas), then the project is irremediable.

Least-case scenario: Clearly articulate the primary goal. Then identify what the team can do with the least amount of effort. Develop a scenario that costs the company the least and gets closest to achieving the primary goal.

Project termination considerations: If the primary goal cannot be achieved, prepare a recommendation to terminate the project… but not without scrutiny. Several variables must be considered and thoroughly addressed in the war room.

  • Consider trade-offs that could make the worst-case scenario more possible than originally thought.
  • Think about the potential backlash from killing a project.
    • How does that decision affect business strategy?
    • Other projects?
    • Public perceptions?
    • Potential future clients?

Alternatives: Should the least-case scenario make sense, explore further alternatives. Consider whether alternative options can deliver more of the project’s objectives, and how adding those solutions to your plan can create additional potential scenarios — positive or negative.

New project charter: Write down the main points of your plan in a revised project charter.

Replacement project option: It’s not uncommon for stakeholders to propose a replacement project instead of a rescue. That’s a totally viable option — kill the project, salvaging only essential, functional portions of the original attempt, and work to create a new plan. If the decision is to completely start over, abandon project rescue altogether. Justify the replacement project on its own merit (a new scope, budget, resource plan, etc.)

Step 4: Set your project in motion

Step 4 – Big action items:

  • Finalise how your project will move forward
  • Confirm responsibilities
  • Reset organizational expectations.

Following your war room meeting, your next steps are all about follow-up. The real rescue starts here, and it is the most challenging part of the process.

Re-engage stakeholders around the contents of the new project plan and complete the detail with precise commitments for each team member. Plans should be finalised within two days.

Be careful as hesitation and procrastination can limit team commitment and lower morale. You’re the general; get your troops ready to re-engage and to stay committed and focussed!

Reconfirm all project metrics: Validate all project aspects, especially resources, as people have been allocated to other productive work while you were reworking your rescue plan.

As the project rolls forward, be sure to detail the new project’s profile, scope, and size to the core team and beyond. Emphasize expected outcomes and explain how this project aligns with the company’s goals. Don’t shy away from communicating what these changes can mean on a big-picture scale. While you may receive some feedback, be direct: the project is proceeding.

Make sure all communication is clear. Confirm that stakeholders accept their new responsibilities to the project.

Cyber-Security 101 for Business Owners

Running a business requires skill, with multiple things happening simultaneously that demand your attention. One of those critical things is cyber-security – something that demands your focus today.

In the digital world today, all businesses depend on the Internet in one way or another… For SMEs (Small and Medium Enterprises) that use the Internet exclusively as their sales channel, the Internet is not only a source of opportunity but the lifeblood of the organisation. Through the Internet, an enterprise has the ability to operate 24×7 with a digitally enabled workforce, bringing unprecedented business value.

Like any opportunity though, this also comes with a level of risk that must be mitigated and continuously governed, not just by the board but by every member of the team. Some of these risks can have a seriously detrimental impact on the business, ranging from financial and data loss to downtime and reputational damage. It is therefore your duty to ensure your IT network is fully protected and secure.

Statistics show that cybercrime is rising exponentially. This is mainly due to enhancements in technology giving access to inexpensive but sophisticated tools. Used by experienced and inexperienced cyber criminals alike, these tools are causing havoc across networks, resulting in business downtime that costs the economy millions every year.

If your business is not trading for 100 hours, what is the financial and reputational impact? That could be the downtime caused by, for example, a ransomware attack – yes, that’s almost 5 days of no business, costly for any business!

Understanding the threat

Cyber threats take many forms and are an academic subject in their own right. So where do you start?

First you need to understand the threat before you can take preventative action.

Definition: Cyber security, or information technology security, comprises the techniques for protecting computers, networks, programs, and data from unauthorized access or attacks that are aimed at exploitation.

A good start is to understand the following cyber threats:

  • Malware
  • Worms
  • Trojans
  • IoT (Internet of Things)
  • Crypto-jacking

Malware

Definition: Malware (a portmanteau of malicious software) is any software intentionally designed to cause damage to a computer, server, client, or computer network.

During Q2 2018, the VPNFilter malware reportedly infected more than half a million small business routers and NAS devices, and malware remains one of the top risks for SMEs. With the ability to exfiltrate data back to the attackers, businesses are at risk of losing sensitive information such as usernames and passwords.

Potentially these attacks can remain hidden and undetected. Businesses can overcome these styles of attack by employing an advanced threat prevention solution for their endpoints (i.e. user PCs). A layered approach with multiple detection techniques will give businesses full attack-chain protection while reducing the complexity and costs associated with deploying multiple individual solutions.

Worms

Definition: A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. Often, it uses a computer network to spread itself, relying on security failures on the target computer to access it.

Recent attacks, including WannaCry and Trickbot, used worm functionality to spread malware. The worm approach tends to make more noise and can be detected faster, but it has the ability to affect a large number of victims very quickly. For businesses, this may mean the entire team is impacted (the worm spreading to every endpoint in the network) before the attack can be stopped.

Approximately 20% of UK businesses that had been infected with malware had to cease business operations immediately, resulting in lost revenue.

Internet of Things (IoT)

Definition: The Internet of Things (IoT) is the network of devices, such as vehicles and home appliances, that contain electronics, software, actuators, and connectivity.

More devices are able to connect directly to the web, which has a number of benefits, including greater connectivity and therefore better data and analytics. However, various threats and business risks lurk in the use of these devices, including data loss, data manipulation, and unauthorised access to devices leading to access to the network.

To mitigate this threat, devices should have strict authentication, limited access, and heavily monitored device-to-device communications. Crucially, these devices will need to be encrypted – a responsibility that is likely to be driven by third-party security providers but should be enforced by businesses as part of their cyber-security policies and standard operating procedures.

Cryptojacking

Definition: Cryptojacking is the secret use of your computing device to mine cryptocurrency. Cryptojacking used to be confined to the victim unknowingly installing a program that secretly mines cryptocurrency.

With the introduction and rise in popularity and value of cryptocurrencies, cryptojacking emerged as a cyber-security threat. On the surface, cryptomining may not seem particularly malicious or damaging; however, the costs it can incur are. If a cryptomining script gets into your servers, it can send energy bills through the roof or, if it reaches your cloud servers, hike up usage bills (the biggest commercial concern for IT operations utilising cloud computing). It can also pose a threat to your computer hardware by overloading CPUs.

According to a recent survey, 1 in 3 UK businesses were hit by cryptojacking, and the statistics are still rising.

Mitigating the risk 

With these few simple and easy steps you can make a good start in protecting your business:

  • Education: At the core of any cyber-security protection plan, there needs to be an education campaign for everyone in the business. They must understand the gravity of the threat posed – regular training sessions can help here. This shouldn’t be viewed as a one-off box-ticking exercise to be forgotten about; rolling, regularly updated training sessions will ensure that staff members are aware of the changing threats and how they can best be avoided.
  • Endpoint protection: Adopt a layered approach to cyber security and deploy endpoint protection that monitors processes in real time and seeks out suspicious patterns, enhancing threat-hunting capabilities that eliminate threats (quarantine or delete) and reducing the downtime and impact of attacks.
  • Lead by example: Cyber-security awareness should come from the top down. The time is long gone when cyber-security was the domain of IT teams alone. If you are a business stakeholder, you need to lead by example by promoting and practicing a security-first mindset.

Different Software Testing – Explained

Testing of software and applications is an integral part of the software development and deployment lifecycle. But with so many different types of tests to choose from when compiling your test approach, which are best suited to your requirements?

In this post, 45 different types of test are explained.

Software application testing is conducted within two domains: Functional and Non-Functional Testing.

Functional testing is a software testing process used within software development in which software is tested to ensure that it conforms with all requirements. Functional testing is a way of checking software to ensure that it has all the required functionality that’s specified within its functional requirements.

Functional testing types include:

  • Unit testing
  • Integration testing
  • System testing
  • Sanity testing
  • Smoke testing
  • Interface testing
  • Regression testing
  • Beta/Acceptance testing

Non-functional testing is defined as a type of software testing that checks the non-functional aspects (performance, usability, reliability, etc.) of a software application. It is designed to test the readiness of a system against non-functional parameters that are never addressed by functional testing.

Non-functional testing types include:

  • Performance Testing
  • Load testing
  • Stress testing
  • Volume testing
  • Security testing
  • Compatibility testing
  • Install testing
  • Recovery testing
  • Reliability testing
  • Usability testing
  • Compliance testing
  • Localization testing

45 Different types of testing – explained

  1. Alpha Testing

It is the most common type of testing used in the software industry. The objective of this testing is to identify all possible issues or defects before releasing the product into the market or to the user. Alpha testing is carried out at the end of the software development phase but before Beta Testing; still, minor design changes may be made as a result of such testing. Alpha testing is conducted at the developer’s site, where an in-house virtual user environment can be created for this type of testing.

  2. Acceptance Testing

An acceptance test is performed by the client and verifies whether the end-to-end flow of the system is as per the business requirements and the needs of the end user. The client accepts the software only when all the features and functionalities work as expected. It is the last phase of testing, after which the software goes into production. This is also called User Acceptance Testing (UAT).

  3. Ad-hoc Testing

The name itself suggests that this testing is performed on an ad-hoc basis, i.e. with no reference to test cases and without any plan or documentation in place. The objective of this testing is to find defects and break the application by executing any flow of the application or any random functionality.

Ad-hoc testing is an informal way of finding defects and can be performed by anyone in the project. It is difficult to identify defects without a test case, but sometimes defects found during ad-hoc testing might not have been identified using existing test cases.

  4. Accessibility Testing

The aim of accessibility testing is to determine whether the software or application is accessible to disabled people. Here, disability covers deafness, color blindness, mental disability, blindness, old age, and other disabled groups. Various checks are performed, such as font size for the visually impaired, and color and contrast for color blindness.

  5. Beta Testing

Beta Testing is a formal type of software testing carried out by the customer. It is performed in a real environment before releasing the product to the market for the actual end users. Beta testing is carried out to ensure that there are no major failures in the software or product and that it satisfies the business requirements from an end-user perspective. Beta testing is successful when the customer accepts the software.

This testing is typically done by end-users or others. It is the final testing done before releasing an application for commercial purposes. Usually, the beta version of the software or product released is limited to a certain number of users in a specific area. The end users actually use the software and share their feedback with the company, which then takes the necessary action before releasing the software worldwide.

  6. Back-end Testing

Whenever input or data is entered in a front-end application, it is stored in the database, and the testing of such a database is known as Database Testing or Back-end Testing. There are different databases, like SQL Server, MySQL, and Oracle. Database testing involves testing of the table structure, schema, stored procedures, data structure, and so on.

In back-end testing the GUI is not involved; testers are connected directly to the database with proper access and can easily verify data by running a few queries on the database. Issues like data loss, deadlock, and data corruption can be identified during back-end testing, and these issues are critical to fix before the system goes live in the production environment.

  7. Browser Compatibility Testing

It is a subtype of Compatibility Testing (which is explained below) and is performed by the testing team.

Browser Compatibility Testing is performed for web applications and ensures that the software can run with combinations of different browsers and operating systems. This type of testing also validates whether a web application runs on all versions of all browsers or not.

  8. Backward Compatibility Testing

It is a type of testing which validates whether newly developed or updated software works well with older versions of the environment or not.

Backward Compatibility Testing checks whether the new version of the software works properly with file formats created by older versions of the software, and whether it works well with data tables, data files, and data structures created by older versions of that software. If any software is updated, it should work well on top of the previous version of that software.

  9. Black Box Testing

Internal system design is not considered in this type of testing. Tests are based on the requirements and functionality.

Detailed information about the advantages, disadvantages, and types of Black box testing can be seen here.

  10. Boundary Value Testing

This type of testing checks the behavior of the application at the boundary level.

Boundary Value Testing is performed to check whether defects exist at boundary values. Boundary value testing is used for testing ranges of numbers. There is an upper and lower boundary for each range, and testing is performed on these boundary values.

If testing requires a test range of numbers from 1 to 500, then Boundary Value Testing is performed on the values 0, 1, 2, 499, 500, and 501.
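
As a small illustration, a parametrised test over that 1-to-500 range might look like the following Python sketch with pytest; the validator itself is hypothetical:

```python
import pytest

# Hypothetical validator under test: accepts integers from 1 to 500
def is_valid(n: int) -> bool:
    return 1 <= n <= 500

# Test values sit on and immediately around both boundaries
@pytest.mark.parametrize("value, expected", [
    (0, False), (1, True), (2, True),        # lower boundary
    (499, True), (500, True), (501, False),  # upper boundary
])
def test_boundary_values(value, expected):
    assert is_valid(value) == expected
```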

  11. Branch Testing

It is a type of white box testing and is carried out during unit testing. In Branch Testing, as the name itself suggests, the code is tested thoroughly by traversing every branch.

  12. Comparison Testing

Comparison of a product’s strengths and weaknesses with its previous versions or other similar products is termed Comparison Testing.

  13. Compatibility Testing

It is a testing type which validates how the software behaves and runs in different environments, web servers, hardware, and network environments. Compatibility testing ensures that software can run on different configurations, different databases, and different browsers and their versions. Compatibility testing is performed by the testing team.

  14. Component Testing

It is mostly performed by developers after the completion of unit testing. Component Testing involves testing multiple functionalities as a single unit of code, and its objective is to identify whether any defect exists after connecting those multiple functionalities with each other.

  15. End-to-End Testing

Similar to system testing, End-to-end testing involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

  16. Equivalence Partitioning

It is a testing technique and a type of Black Box Testing. During equivalence partitioning, a set of groups is selected and a few values or numbers are picked for testing. It is understood that all values from a given group generate the same output. The aim of this testing is to remove redundant test cases within a specific group that generate the same output and reveal no defect.

Suppose an application accepts values between -10 and +10; using equivalence partitioning, the values picked for testing are zero, one positive value, and one negative value. The equivalence partitions for this testing are: -10 to -1, 0, and 1 to 10.
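
A minimal sketch of that partitioning, with a hypothetical validator as the unit under test:

```python
# Hypothetical validator under test: accepts integers from -10 to +10
def accepts(n: int) -> bool:
    return -10 <= n <= 10

# One representative value per partition is enough:
# valid negatives (-10..-1), zero, valid positives (1..10),
# plus the invalid partitions on either side of the range.
cases = [(-5, True), (0, True), (5, True), (-11, False), (11, False)]
for value, expected in cases:
    assert accepts(value) == expected, f"partition check failed for {value}"
```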

  17. Example Testing

It means real-time testing. Example testing includes real-time scenarios; it also involves scenarios based on the experience of the testers.

  18. Exploratory Testing

Exploratory Testing is informal testing performed by the testing team. The objective of this testing is to explore the application and look for defects that exist in it. Sometimes a major defect discovered during this testing can even cause system failure.

During exploratory testing, it is advisable to keep track of what flow you have tested and what activity you did before the start of the specific flow.

Exploratory testing is performed without documentation and test cases.

  19. Functional Testing

This type of testing ignores the internal parts and focuses only on the output to check if it is as per the requirement or not. It is a Black-box type testing geared to the functional requirements of an application. For detailed information about Functional Testing click here.

  20. Graphical User Interface (GUI) Testing

The objective of this GUI testing is to validate the GUI as per the business requirement. The expected GUI of the application is mentioned in the Detailed Design Document and GUI mockup screens.

GUI testing includes checking the size of the buttons and input fields present on the screen, the alignment of all text, and the tables and the content in the tables.

It also validates the menus of the application; after selecting different menus and menu items, it validates that the page does not fluctuate and that the alignment remains the same after hovering the mouse over the menu or sub-menu.

  21. Gorilla Testing

Gorilla Testing is a testing type performed by a tester, and sometimes by the developer as well. In Gorilla Testing, one module or one functionality in a module is tested thoroughly and heavily. The objective of this testing is to check the robustness of the application.

  22. Happy Path Testing

The objective of Happy Path Testing is to test an application successfully on a positive flow. It does not look for negative or error conditions. The focus is only on the valid and positive inputs through which the application generates the expected output.

  23. Incremental Integration Testing

Incremental Integration Testing is a bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to be tested separately. This is done by programmers or by testers.

  24. Install/Uninstall Testing

Installation and uninstallation testing is done on full, partial, or upgrade install/uninstall processes on different operating systems under different hardware or software environments.

  25. Integration Testing

Testing of all integrated modules to verify the combined functionality after integration is termed Integration Testing. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

  26. Load Testing

It is a type of non-functional testing, and the objective of Load Testing is to check how much load, or what maximum workload, a system can handle without any performance degradation.

Load testing helps to find the maximum capacity of the system under a specific load, and any issues that cause software performance degradation. Load testing is performed using tools like JMeter, LoadRunner, WebLOAD, Silk Performer, etc.
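
Dedicated tools are the norm, but the principle can be sketched in plain Python: fire a fixed number of requests from concurrent workers and look at the latency distribution. The worker count and the simulated operation below are hypothetical stand-ins:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical operation under load (replace with a real request call)
def handle_request(i: int) -> float:
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real server-side work
    return time.perf_counter() - start

# Simulate 200 requests issued by 50 concurrent "users"
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(handle_request, range(200)))

print(f"avg latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")
print(f"max latency: {max(latencies) * 1000:.1f} ms")
```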

  27. Monkey Testing

Monkey testing is carried out by a tester on the assumption that a monkey is using the application: random inputs and values are entered without any knowledge or understanding of the application. The objective of Monkey Testing is to check whether an application or system crashes when provided with random input values/data. Monkey Testing is performed randomly, no test cases are scripted, and it is not necessary to be aware of the full functionality of the system.
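
A toy sketch of the idea, with a hypothetical parser as the target:

```python
import random
import string

# Hypothetical function under test
def parse_quantity(text: str) -> int:
    return int(text.strip())

# Monkey test: hammer the function with random input and watch for crashes
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_quantity(junk)
    except ValueError:
        pass  # a clean rejection of invalid input is acceptable
    # any other exception escaping here would signal a robustness defect
```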

  28. Mutation Testing

Mutation Testing is a type of white box testing in which the source code of a program is changed to verify whether the existing test cases can identify the introduced defects. The change to the program source code is kept very minimal so that it does not impact the entire application; only a specific area is affected, and the related test cases should be able to identify those errors in the system.
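
The following toy example (both the function and its mutant are hypothetical) shows why a suite that only checks typical values lets a mutant survive, while a boundary test kills it:

```python
# Hypothetical function under test
def is_adult(age: int) -> bool:
    return age >= 18

# A "mutant": one operator changed (>= becomes >)
def is_adult_mutant(age: int) -> bool:
    return age > 18

# A test using only a typical value passes for both versions,
# so this test alone would let the mutant survive undetected:
assert is_adult(30) and is_adult_mutant(30)

# A test at the boundary "kills" the mutant:
assert is_adult(18) is True
assert is_adult_mutant(18) is False  # the mutation is detected
```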

  29. Negative Testing

Testers adopt an “attitude to break” mindset and use negative testing to validate how the system or application behaves when it should reject input. Negative testing is performed using incorrect data, invalid data, or invalid input. It validates that the system throws an error for invalid input and behaves as expected.

  30. Non-Functional Testing

It is a type of testing for which many organizations have a separate team, usually called the Non-Functional Test (NFT) team or the Performance team.

Non-functional testing involves testing of non-functional requirements, such as Load Testing, Stress Testing, Security, Volume, and Recovery Testing. One objective of NFT is to ensure that the response time of the software or application is quick enough as per the business requirement.

It should not take long to load any page or system, and the system should hold up during peak load.

  31. Performance Testing

This term is often used interchangeably with ‘stress’ and ‘load’ testing. Performance Testing is done to check whether the system meets the performance requirements. Different performance and load tools are used to do this testing.

  32. Recovery Testing

It is a type of testing which validates how well the application or system recovers from crashes or disasters.

Recovery testing determines whether the system is able to continue operating after a disaster. Assume the application is receiving data through a network cable, and suddenly that network cable is unplugged. Some time later the network cable is plugged back in; the system should then start receiving data from where it lost the connection when the cable was unplugged.

  33. Regression Testing

Testing an application as a whole after the modification of any module or functionality is termed Regression Testing. It is difficult to cover the whole system in regression testing, so automation testing tools are typically used for this type of testing.

  34. Risk-Based Testing (RBT)

In Risk Based Testing, the functionalities or requirements are tested based on their priority. Risk-based testing includes testing of highly critical functionality, which has the highest impact on business and in which the probability of failure is very high. The priority decision is based on the business need, so once priority is set for all functionalities then high priority functionality or test cases are executed first followed by medium and then low priority functionalities.

The low-priority functionality may or may not be tested, based on the available time. Risk-based testing is carried out when there is insufficient time available to test the entire software and the software needs to be implemented on time without any delay. This approach is followed only with the discussion and approval of the client and the senior management of the organization.

  35. Sanity Testing

Sanity Testing is done to determine whether a new software version is performing well enough to accept it for a major testing effort. If an application is crashing on initial use, then the system is not stable enough for further testing, and the build is handed back to be fixed.

  36. Security Testing

It is a type of testing performed by a special team of testers. The aim is to check whether the system can be penetrated by any means of hacking.

Security Testing is done to check how secure the software, application, or website is from internal and external threats. This testing includes checking how well the software is protected from malicious programs and viruses, and how secure and strong the authorization and authentication processes are.

It also checks how the software behaves under a hacker attack or a malicious program, and how the software and its data are kept secure after such an attack.

  37. Smoke Testing

Whenever a new build is provided by the development team, the software testing team validates the build and ensures that no major issue exists. The testing team confirms that the build is stable before a detailed level of testing is carried out. Smoke Testing checks that no show-stopper defect exists in the build that would prevent the testing team from testing the application in detail.

If testers find that major critical functionality is broken at the initial stage itself, the testing team can reject the build and inform the development team accordingly. Smoke Testing is carried out prior to any detailed functional or regression testing.

  38. Static Testing

Static Testing is a type of testing which is executed without running any code. The checking is performed on the documentation during the testing phase. It involves reviews, walkthroughs, and inspection of the deliverables of the project. Static testing does not execute the code; instead, the code syntax and naming conventions are checked.

Static testing is also applicable to test cases, the test plan, and the design document. It is worthwhile for the testing team to perform static testing, as defects identified during this type of testing are cost-effective to fix from the project perspective.

  39. Stress Testing

This testing is done when a system is stressed beyond its specifications in order to check how and when it fails. It is performed under heavy load, like putting in data beyond storage capacity, running complex database queries, or giving continuous input to the system or database.

  40. System Testing

Under the System Testing technique, the entire system is tested as per the requirements. It is a Black-box type of testing that is based on overall requirement specifications and covers all the combined parts of a system.

  41. Unit Testing

Testing an individual software component or module is termed Unit Testing. It is typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may also require developing test driver modules or test harnesses.
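
A minimal sketch with Python’s built-in unittest module, using a hypothetical discount function as the unit under test:

```python
import unittest

# Hypothetical unit under test
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```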

  42. Usability Testing

Under Usability Testing, a user-friendliness check is done. The application flow is tested to see whether a new user can understand the application easily, and whether proper help is documented in case a user gets stuck at any point. Basically, system navigation is checked in this testing.

  43. Vulnerability Testing

Testing which involves identifying weaknesses in software, hardware, and the network is known as Vulnerability Testing. If a system is vulnerable to attacks from malicious programs, viruses, and worms, a hacker can take control of it.

So it is necessary to subject those systems to Vulnerability Testing before production. It may identify critical defects and flaws in security.

  44. Volume Testing

Volume testing is a type of non-functional testing performed by the performance testing team.

The software or application is subjected to a huge amount of data, and Volume Testing checks the system behavior and the response time of the application when the system comes across such a high volume of data. This high volume of data may impact the system’s performance and the speed of processing.

  45. White Box Testing

White Box testing is based on the knowledge about the internal logic of an application’s code.

It is also known as Glass Box Testing. The internal workings of the software and code should be known in order to perform this type of testing. Under white box testing, tests are based on the coverage of code statements, branches, paths, conditions, etc.

Artificial Intelligence Capabilities

AI is one of the most talked-about technologies today. For business, this technology introduces capabilities that innovative business and technology leadership can utilise to introduce new dimensions and abilities in service and product design and delivery.

Unfortunately, a lot of the real business value is locked up behind the terminology hype, inflated expectations, and ominous warnings of machines taking control.

It is impossible to get value from something that is not understood. So let’s cut through the hype and focus on understanding AI’s objectives and the key capabilities that this exciting technology enables.

There are many definitions of AI as discussed in the blog post “What is Artificial Intelligence: Definitions“.

Keeping it simple: “AI is using computers to do things that normally would have required human intelligence.” With this definition in mind, there are basically three things that AI is aiming to achieve.

3 AI Objectives

  • Capturing Information
  • Determine what is happening
  • Understand why it is happening

Let’s use an example to demonstrate this…

As humans we are constantly gathering data through our senses, which is converted by our brain into information that is interpreted for understanding and potential action. You can, for example, identify an object through sight, turn it into information, and recognise the object instantly as, say, a lion. In conjunction, additional data associated with the object at the present time – for example, the lion is running after a person yelling for help – enables us to identify danger and to take immediate action…

For a machine, this process is very complex and requires large amounts of data, programming/training, and processing power. Today, technology is so advanced that small computers like smartphones can capture a photo, identify a face, and link it to a name. This is achieved not just through the power of the smartphone but through the capabilities of AI, made available through services like Facebook and supported by an IT platform including a fast internet connection, cloud computing power, and storage.

To determine what is happening the machine might use Natural Language Understanding (NLU) to extract the words from a sound file and try to determine meaning or intent, hence working out that the person is running away from a lion and shouting for you to run away as well.

Why the lion is chasing and why the person is running away, is not known by the machine. Although the machine can capture information and determine what is happening, it does not understand why it is happening within full context – it is merely processing data. This reasoning ability, to bring understanding to a situation, is something that the human brain does very well.

Despite all the technological advancements, machines today can only achieve the first two of the three AI objectives. With this in mind, let’s explore the eight AI capabilities relevant and ready for use today.

8 AI Capabilities


  • Capturing Information
    • 1. Image Recognition
    • 2. Speech Recognition
    • 3. Data Search
    • 4. Data Patterns
  • Determine what is happening
    • 5. Language Understanding
    • 6. Thought/Decision Process
    • 7. Prediction
  • Understand why it is happening
    • 8. Understanding

1. Image Recognition

This is the capability of a machine to identify/recognise an image. It is based on Machine Learning and requires millions of images to train the machine, which in turn requires lots of storage and fast processing power.

2. Speech Recognition

The machine takes a sound file and encodes it into text.

3. Search

The machine identifies words or sentences which are matched with relevant content within a large amount of data. Once these word matches are found, they can trigger further AI capabilities.

4. Patterns

Machines can process and spot patterns in large amounts of data, which can be combinations of sound, image, or text. This surpasses the capability of humans – literally seeing the wood for the trees.

5. Language Understanding

The AI capability to understand human language is called Natural Language Understanding or NLU.

6. Thought/Decision Processing

Knowledge Maps connect concepts (e.g. person, vehicle) with instances (e.g. John, BMW) and relationships (e.g. favourite vehicle). Varying the different relationships by weight and/or probability of likelihood can fine-tune the system to make recommendations when interacted with. Knowledge Maps are not decision trees, as the entry point of interaction can be at any point within the knowledge map as long as a clear goal has been defined (e.g. What is John’s favourite vehicle?)
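
A toy sketch of the idea (the entities, relationship, and weights below are hypothetical):

```python
# A toy knowledge map: (instance, relationship) -> weighted candidates
knowledge = {
    ("John", "owns"): [("BMW", 0.9), ("bicycle", 0.4)],
    ("BMW", "is_a"): [("vehicle", 1.0)],
}

def favourite(entity: str, relation: str) -> str:
    # Answer a goal such as "What is John's favourite vehicle?" by
    # picking the highest-weighted relationship.
    candidates = knowledge.get((entity, relation), [])
    return max(candidates, key=lambda c: c[1])[0] if candidates else "unknown"

print(favourite("John", "owns"))  # -> BMW
```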

7. Prediction

Predictive analytics is not a new concept, and the AI prediction capability basically takes a view of historic data patterns and matches them with a new piece of data to predict a similar outcome based on the past.
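
A minimal sketch of that idea, fitting a straight line to hypothetical monthly sales history and extrapolating one month ahead:

```python
import numpy as np

# Hypothetical historic data: twelve months of sales with an upward trend
months = np.arange(1, 13)
sales = 100 + 8 * months + np.random.randn(12) * 5

# Learn the historic pattern with a least-squares straight-line fit
slope, intercept = np.polyfit(months, sales, 1)

# Predict the outcome for a new data point (month 13) from the past
print(f"predicted sales for month 13: {slope * 13 + intercept:.0f}")
```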

8. Understanding

Falling under the third objective of AI – understanding why it is happening – this capability is not currently commercially available.

To Conclude

By understanding the capabilities of AI you can now look beyond the hype, be realistic, and identify which AI capabilities are right to enhance your business.

In a future blog post, we’ll examine some real-life examples of how these AI capabilities can be used to bring business value.

Release Management as a Competitive Advantage

“Delivery focussed”, “Getting the job done”, “Results driven”, “The proof is in the pudding” – we are all familiar with these phrases and in Information Technology it means getting the solutions into operations through effective Release Management, quickly.

In an increasingly competitive market, where digital is enabling rapid change, time to market is king. Translated into IT terms – you must get your solution into production before the competition does, through an effective ability to do frequent releases. Frequent releases benefit teams, as features can be validated earlier and bugs detected more easily. The smaller iteration cycles provide flexibility, making adjustments to unforeseen scope changes easier and reducing the overall risk of change.

IT teams with well-governed, agile, and robust release management practices have a significant competitive advantage. This advantage materialises through self-managed teams of highly skilled technologists who work collaboratively according to a team-defined release management process that continuously improves through constructive feedback loops and corrective actions.

The process of implementing such agile practices can be challenging, as building software becomes increasingly complex due to factors such as technical debt, growing legacy code, resource movements, globally distributed development teams, and the increasing number of platforms to be supported.

To realise this advantage, an organisation must first optimise its release management process and identify the most appropriate platform and release management tools.

Here are three well known trends that every technology team can use to optimise delivery:

1. Agile delivery practices – with automation at the core

So you have adopted an agile delivery methodology and you’re having daily scrum meetings – but you know that is not enough. Sprint planning as well as review and retrospection are all essential elements of a successful release, but in order to produce substantial and meaningful deliverables within the time constraints of agile iterations, you need to invest in automation.

Automation brings measurable benefits to the delivery team: it reduces the pressure on people by minimising human error, and it increases overall productivity and delivery quality in ways that show up in key metrics like team velocity. Another benefit automation introduces is a consistent and repeatable process, enabling easily scalable teams while reducing errors and release times. Agile delivery practices (see “Executive Summary of 4 commonly used Agile Methodologies“) all embrace and promote the use of automation across the delivery lifecycle, especially in build, test, and deployment automation. Proper automation supports delivery teams by reducing the overhead of time-consuming repetitive tasks in configuration and testing, so they can focus on customer-centric product/service development with quality built in. Also read “How to Innovate to stay Relevant“ and “Agile Software Development – What Business Executives need to know” for further insight into Agile methodologies…

Example:

Code Repository (Version Control) –> Automated Integration –> Automated Deployment of Changes to Test Environments –> Platform & Environment Changes Automatically Built into Testbed –> Automated Build Acceptance Tests –> Automated Release

When a software developer commits changes to version control, these changes automatically get integrated with the rest of the modules. Integrated assemblies are then automatically deployed to a test environment. If there are changes to the platform or the environment, the environment gets automatically built and deployed on the test bed. Next, build acceptance tests are automatically kicked off, which include capacity, performance, and reliability tests. Developers and/or leads are notified only when something fails, so the focus remains on core development and not on overhead activities. Of course, there will be some manual checkpoints that the release management team has to pass in order to trigger the next phase, but each activity within this deployment pipeline can be more or less automated. As your software passes all quality checkpoints, product version releases are automatically pushed to the release repository, from which new versions can be pulled automatically by systems or downloaded by customers.
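To make this concrete, here is a minimal sketch of such a pipeline orchestrated in Python. Everything in it is illustrative: the scripts under ./scripts/ are hypothetical placeholders for whatever build, test and deployment tooling your team actually uses.

    import subprocess
    import sys

    # Ordered stages of the deployment pipeline described above.
    # Each stage maps to a (hypothetical) script wrapping the real tooling.
    PIPELINE = [
        ("integrate",        ["./scripts/integrate.sh"]),         # integrate committed changes with the other modules
        ("deploy-to-test",   ["./scripts/deploy_test.sh"]),       # deploy integrated assemblies to a test environment
        ("build-testbed",    ["./scripts/build_testbed.sh"]),     # rebuild platform/environment changes on the test bed
        ("acceptance-tests", ["./scripts/acceptance_tests.sh"]),  # capacity, performance and reliability tests
        ("release",          ["./scripts/release.sh"]),           # push the passing build to the release repository
    ]

    def run_pipeline() -> bool:
        """Run each stage in order; stop and notify on the first failure."""
        for name, command in PIPELINE:
            print(f"Running stage: {name}")
            if subprocess.run(command).returncode != 0:
                # Developers and/or leads are notified only when something fails.
                print(f"Stage '{name}' failed - notifying the team", file=sys.stderr)
                return False
        print("All stages passed - release pushed to the release repository")
        return True

    if __name__ == "__main__":
        run_pipeline()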

Technologies:

  • Build Automation: Ant, Maven, Make
  • Continuous Integration: Jenkins, CruiseControl, Bamboo
  • Test Automation: SilkTest, EggPlant, TestComplete, Coded UI
  • Continuous Deployment: Jenkins, Bamboo, Prism

2. Cloud platforms and Virtualisation as development and test environments

Today, most software products are built to support multiple platforms, be it operating systems, application servers, databases, or Internet browsers. Software development teams need to test their products in all of these environments in-house prior to releasing them to the market.

This presents the challenge of creating all of these environments as well as maintaining them. These challenges increase in complexity as development and test teams become more geographically distributed. In these circumstances, the use of cloud platforms and virtualisation helps, especially as these platforms have recently been widely adopted in all industries.

Automation on cloud and virtualised platforms enables delivery teams to rapidly spin environments up and down, optimising infrastructure utilisation in line with demand, while also maintaining the version history of all supported platforms, just as we maintain the code and configuration version history of our products. This flexibility optimises the delivery footprint as demand changes – bringing savings across the overall delivery life-cycle.

Example:

When a build and release engineer changes configurations for the target platform – the operating system, database, or application server settings – the whole platform can be built and a snapshot of it created and deployed to the relevant target platforms.

Virtualisation: The virtual machine (VM) is automatically provisioned from a snapshot of the base operating system VM, the appropriate configurations are deployed, and the rest of the platform and application components are automatically deployed.

Cloud: Using a solution provider like Rackspace to deliver Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS), a new instance can be produced, instantiated, and configured as a development and test environment whenever new configurations are introduced. This is crucial for flexibility and productivity, as it takes minutes instead of weeks to adapt to configuration changes. With automation, the process becomes repeatable and quick, and it streamlines communication across the different teams within the Tech-hub.
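As a rough illustration of this provisioning flow, the sketch below automates spinning a test environment up from a base snapshot, configuring it and tearing it down again when demand drops. Note that provider-cli is a hypothetical stand-in for your IaaS/virtualisation provider’s real CLI or API client, not an actual Rackspace tool.

    import subprocess

    def provision_test_environment(snapshot_id: str, config_bundle: str, app_build: str) -> None:
        """Spin up a test environment from a base snapshot, configure it and deploy the app."""
        # 1. Provision a new VM/instance from the base operating system snapshot.
        subprocess.run(["provider-cli", "create-instance", "--from-snapshot", snapshot_id], check=True)
        # 2. Apply the platform configuration (operating system, database, application server settings).
        subprocess.run(["provider-cli", "apply-config", config_bundle], check=True)
        # 3. Deploy the application components onto the freshly built platform.
        subprocess.run(["provider-cli", "deploy", app_build], check=True)

    def teardown_environment(instance_id: str) -> None:
        """Spin the environment back down once testing completes, so utilisation tracks demand."""
        subprocess.run(["provider-cli", "delete-instance", instance_id], check=True)

Because the whole cycle is scripted, the same environment can be recreated identically in minutes, and its definition can be version-controlled alongside the product code.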

3. Distributed version control systems

Distributed version control systems (DVCS), for example Git or Mercurial, introduce flexibility for teams to collaborate at the code level. The fundamental design principle behind a DVCS is that each user keeps a self-contained repository, with complete version history, on their local computer. There is no need for a privileged master repository, although most teams designate one as a best practice. A DVCS allows developers to work offline and commit changes locally.

As developers complete their changes for an assigned story or feature set, they push their changes to the central repository as a release candidate. A DVCS offers a fundamentally new way to collaborate, as developers can commit their changes frequently without disrupting the main codebase or trunk. This is useful when teams are exploring new ideas or experimenting, and it enables rapid team scaling with reduced disruption.

A DVCS is a powerful enabler for teams that utilise an agile, feature-based branching strategy. This encourages development teams to continue working on their features (branches) until they are ready – fully tested locally – to be loaded into the next release cycle. In this scenario, developers work on and merge their feature branches in a local copy of the repository. Only after standard reviews and quality checks are the changes merged into the main repository, as sketched below.
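As a minimal sketch of this feature-branch flow, the snippet below wraps standard Git commands in Python; the feature/ branch naming and the origin/main remote are illustrative assumptions rather than a prescribed convention.

    import subprocess

    def git(*args: str) -> None:
        """Run a Git command, raising an error if it fails."""
        subprocess.run(["git", *args], check=True)

    def feature_branch_workflow(feature: str) -> None:
        # Create a local feature branch; all commits stay local until pushed.
        git("checkout", "-b", f"feature/{feature}")

        # ...develop and test locally, committing as often as needed...
        git("commit", "-am", f"Implement {feature}")

        # Pull in the latest mainline locally and re-test before sharing.
        git("fetch", "origin")
        git("merge", "origin/main")

        # Push the fully tested branch as a release candidate; only after
        # reviews and quality checks is it merged into the main repository.
        git("push", "origin", f"feature/{feature}")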

To Conclude

Adopting these three major trends in the delivery life-cycle enables an organisation to embed proper release management as a strategic competitive advantage. Implementing these best practices will require strategic planning and an investment of time in the early phases of your project or team maturity journey – but it will reduce the organisational and change management effort needed to get to market quicker.

Modular Operating Model for Strategy Agility

One of life’s real pleasures is riding a motorcycle. The sense of freedom when it is just you, the machine and the open road is something only a fellow enthusiast would truly understand. Inspired, I recently completed a hobby project, building Lego Set 42063. The building blocks of this Technic model construct the BMW R1200GS Adventure motorcycle – arguably the best all-rounder, adapted to handle all road conditions. The same building blocks can also be used to build a futuristic flying scooter, or shall I call it a speedster in true Star Wars style… While building the model I marvelled at the ingeniousness of the design and how the different components come together in a final product – fit for purpose today but easily adapted to be fit for the future.

Lego-Technic-modular

This made me think about business agility – how can this modular approach be used within business? We know that SOA (Service Oriented Architecture) takes a modular approach to building adaptable software applications, and in the talk “Structure Technology for Success – using SOA” I explained a modular approach to designing a Service Oriented Organisation (SOO) that directly contributes to business success.

Recently I’ve also written about how to construct a business Operating Model that delivers. Such an operating model aligns the business operations with the needs of its customers, while providing the agility to continuously adapt to changes in the fast-changing technological ecosystem we live in. An Operating Model that delivers is fit for purpose today but easily adaptable to be fit for the future – in other words, a Modular Operating Model.

As the environment for a company changes rapidly, static operating models lack the agility to respond. Successful companies are customer-centric and embrace continuous innovation to enhance the organisation’s ability to re-design its operations. This requires an Operating Model that incorporates the agility to respond to changes in business strategy and customer needs. A modular operating model enables agility in business operations with a design that can respond to change, by defining standard building blocks and how to dynamically combine them. Modular blocks (with their specific operational complexity contained) simplify the management of complexity. This reduces the time to produce a new operational outcome, irrespective of whether it is a new service, a new product or just an efficiency improvement within an existing value chain. An example of applying modular thinking to an operational delivery methodology is covered in the blog post “How to Innovate to stay Relevant”: by combining the core principles and benefits of three different delivery methodologies – Design Thinking, Lean Startup and Agile Scrum – as modular building blocks, a delivery methodology is constructed that ensures rapid delivery of innovation into customer-centric revenue channels while optimising the chances of success through continuous alignment with customer and market demand.

A modular operating model embeds operational agility through the ability to use, re-use, and plug and play different capabilities, processes and resources (building blocks) to easily produce new business outcomes, without having to deal with the complexities already contained within the individual building blocks – just like a Lego set using the same set of standardised, pre-defined blocks to build completely different things. The focus is on re-using the blocks, not on the design of the blocks themselves. Of course a lot of thinking has gone into the design of the different building blocks, but by re-using the same block designs, the model design time is focussed on a new or different outcome and not on a component of an outcome – an idea sketched below.
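To borrow a software illustration of the same idea, the purely hypothetical Python sketch below composes standardised blocks into different outcomes; the block names and interfaces are invented for illustration only.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class BuildingBlock:
        """A standardised block: its internal complexity is encapsulated,
        and the rest of the model only sees the name and interface."""
        name: str
        execute: Callable[[str], str]  # takes work in, returns an outcome

    def compose(blocks: List[BuildingBlock]) -> Callable[[str], str]:
        """Plug blocks together into a new operational outcome -
        designing a new outcome is recombination, not redesign."""
        def operating_chain(work_item: str) -> str:
            for block in blocks:
                work_item = block.execute(work_item)
            return work_item
        return operating_chain

    # Re-using the same blocks to assemble two different offerings:
    onboard = BuildingBlock("customer-onboarding", lambda w: f"onboarded({w})")
    billing = BuildingBlock("billing", lambda w: f"billed({w})")
    support = BuildingBlock("support", lambda w: f"supported({w})")

    subscription_service = compose([onboard, billing, support])
    one_off_product      = compose([onboard, billing])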

Designing modular capabilities, processes and resources for use in operating model design brings benefits not just in efficiencies and savings through economies of scale, but also in reduced time to market. These benefits are easiest to realise in larger multi-divisional organisations with multiple operating models, or in organisations with complex operating models bringing together multiple organisations and locations, where the re-use of modular operating model blocks brings demonstrable efficiencies.

 

WIP…

An Operating Model that Delivers

In every organisation I have worked with around the world – whether in London, Johannesburg, Sydney, Singapore, Dallas, Kuala Lumpur, Las Vegas, Nairobi or New York – there was always reference to a Target Operating Model (TOM) when business leaders spoke about business strategy and performance. Yes, the TOM – the ever-elusive state of euphoria when all business operations work together in harmony to deliver the business vision… sometime in the foreseeable future.

Most business transformation programmes are focussed on delivering a target operating model – transforming the business by introducing a new way of working that better aligns the business offering with its customers’ changing expectations. Millions in business change budgets have been invested in TOM design projects, and thousands of people have worked on them, of which only some have delivered against the promise.

With the TOM as the defined deliverable – the targeted operational state and the outcome of the business transformation programme – it is very important that the designed TOM is actually fit for purpose. The TOM also has to lend itself to being easily adjustable, in order to contribute to the agility of the organisation. The way the business operates must be able to adapt to an ever-changing, technology-driven world and the associated workforce. The quickly evolving digital world is probably the main catalyst for transformation in organisations today – read “The Digital Transformation Necessity” for further insights…

Operating Model (OM)

The Operating Model uses key inputs from the Business Model and Strategy.

The Business Model focuses on the business’ customers and the associated product and service offerings – how the organisation creates value for its clientele – and the commercial proposition. Within the business model, the business’s revenue streams, and how those contribute to the business value chain to generate profits, are described. In other words, the Business Model envisages the What within the organisation.

Within the Business Strategy, the plan to achieve specific goals is defined, as well as the metrics required to measure how successfully these are achieved. The business goals are achieved through the daily actions defined within the Operating Model.

Typically an Operating Model takes the What from the Business Model, in conjunction with the business strategy, and defines the Why, What, How, Who and With. It is the way in which the business model and strategy are executed through the day-to-day business operations. Execution is key: no business can be successful by just having a business strategy; the execution of the operating model that delivers the business strategy is the operative ingredient of success.

In order to document and describe how an organisation functions, the Operating Model usually includes: the business capabilities and associated processes; the products and/or services being delivered; the roles and responsibilities of people within the business and how these are organised and governed; the metrics defined to manage, monitor and control the performance of the organisation; and the underpinning Technology, Information Systems and Tools the business uses in delivering its services and/or products.

Analogy: A good analogy for the Operating Model is the engine of an F1 car. In 2016 the Mercedes Silver Arrow – the fastest car, driven by Lewis Hamilton (arguably the fastest driver) – did not win because of engine and reliability problems. Instead the World Championship was won by Nico Rosberg, who had a better-performing engine over the whole season. Nico benefited from a better operating model – he had the processes, data, systems and the people (including himself) to win. The mechanical failures that Lewis suffered, mostly through no fault of his own, were the result of failures somewhere within his operating model.

Target Operating Model (TOM)

The Target Operating Model (TOM) is a future-state version of the Operating Model. To derive the TOM, the existing Operating Model is compared with the desired future state, keeping the key aspects of an operating model in mind: Why, What, How, Who and With. The TOM also covers two additional key aspects – the When & Where – defined within the transformation programme to evolve from the current to the future state.

The difference between the “as is” Operating Model and the “to be” Target Operating Model indicates the gap that the business must bridge in the execution of its Transformation Model/Strategy – the When and Where. Achieving the Target Operating Model usually requires a large transformation effort, executed as change & transformation programmes and projects.

To-Be (TOM) – As-Is (OM) = Transformation Model (TM)

Why >> Business Vision & Mission

What >> Business Model (Revenue channels through Products and Services – the Value Chain)

How >> Business Values & Processes & Metrics

Who >> Roles & Responsibilities (RACI)

With >> Tools, Technology and Information

Where & When >> Transformation Model/Strategy

Defining the TOM

A methodology for compiling the Target Operating Model (TOM) is summarised by the three steps shown in the diagram below:

TOM Methodology
Inputs to the methodology:

  • Business Model
  • Business Strategy
  • Current Operating Model
  • Formally documented information, processes, resource models, strategies, statistics, metrics…
  • Information gathered through interviews, meetings, workshops…

The methodology produces the TOM outputs:

  • Business capabilities and associated processes
  • Clearly defined and monetised catalogue of the products and/or services being delivered
  • Organisation structure indicating roles and responsibilities of people within the business and how these are organised and governed
  • Metrics specifically defined to manage, monitor and control the performance of the organisation
  • Underpinning Technology, Information Systems and Tools the business uses in delivering its services and/or products

The outputs from this methodology cover each key aspect needed for a TOM that will deliver the desired business outcomes. Understanding these desired outcomes, and the associated goals and milestones to achieve them, is hence a fundamental prerequisite for compiling a TOM.

To Conclude

An achievable Target Operating Model that delivers is dependent on the execution of an overall business transformation strategy that aligns the business’ vision, mission and strategy with a future desired state in which the business should function.

Part of the TOM is the Business Transformation Model, which outlines the transformation programme plan that functionally syncs the current and future operating states. It also outlines the execution phases required to deliver the desired outcomes, in the right place at the right time, while retaining the agility to continuously adapt to changes.

Only when an organisation has a strategically aligned and agile Target Operating Model in place that can achieve this is the business in a position to successfully navigate its journey to the benefits and value growth it desires.

renierbotha Ltd has a demonstrable track record of compiling and delivering visionary Target Operating Models.

If you know that your business has to transform to stay relevant – Get in touch!

 

Originally written by Renier Botha in 2016 when, as Managing Director, he was pivotal in delivering the TOM for Systems Powering Healthcare Ltd.