Essential AI Skills for Professionals in Every Sector

The demand for AI skills is no longer confined to the tech industry. From finance to healthcare, retail to manufacturing, artificial intelligence is reshaping how businesses operate and compete. As AI becomes increasingly integrated into various aspects of business processes, having AI skills is becoming a core requirement for professionals across all sectors.

Why AI Skills Are Essential

  • Automation and Efficiency: AI technologies are driving automation in routine and complex tasks, improving efficiency and accuracy. Employees who understand how to leverage AI tools can significantly enhance productivity, streamline operations, and reduce errors.
  • Data-Driven Decision Making: Businesses today collect massive amounts of data. AI helps in analysing this data to derive actionable insights. Professionals equipped with AI skills can interpret these insights to make informed decisions that drive business growth and innovation.
  • Competitive Edge: Incorporating AI into business strategies provides a competitive advantage. Companies that can develop and implement AI solutions can differentiate themselves in the market. Employees with AI expertise are therefore crucial for maintaining and advancing this edge.

Key Technical AI Skills in Demand

  1. Machine Learning (ML): Understanding machine learning algorithms and their applications is vital. Professionals should be able to develop, train, and deploy ML models to solve business problems (a minimal example follows this list).
  2. Data Science: Skills in data collection, cleaning, and analysis are fundamental. Knowledge of programming languages like Python and R, along with experience in data visualization tools, is highly sought after.
  3. Natural Language Processing (NLP): NLP skills are essential for working with text data and developing applications like chatbots, sentiment analysis, and language translation.
  4. AI Ethics and Governance: As AI usage grows, so does the importance of ethical considerations. Professionals need to be aware of the ethical implications of AI, including issues of bias, transparency, and accountability.
  5. AI Integration: Understanding how to integrate AI solutions into existing systems and workflows is crucial. This includes skills in APIs, cloud computing, and software development.
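
To make the first two skills concrete, here is a minimal, illustrative sketch of the develop–train–evaluate loop in Python using scikit-learn. The synthetic dataset and the choice of logistic regression are placeholders for illustration, not a prescription for any particular business problem.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real business dataset (e.g. churn or credit-risk records).
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a simple baseline model and evaluate it on held-out data.
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)
print(f"Hold-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

In practice the same split, train, evaluate pattern applies whatever algorithm or library is used; only the data preparation and model choice change.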

How to Acquire AI Skills

  • Online Courses and Certifications: There are numerous online platforms offering courses in AI and ML, such as Coursera, edX, Udemy and Udacity. Earning certifications from these platforms can bolster your resume and provide foundational knowledge.
  • Hands-On Projects: Practical experience is invaluable. Working on real-world projects, participating in hackathons, or contributing to open-source AI projects can provide practical insights and experience.
  • Advanced Degrees: Pursuing a degree in data science, computer science, or related fields can provide a deeper understanding of AI technologies and methodologies.
  • Company Training Programs: Many organisations offer in-house training programs to upskill their employees in AI. Taking advantage of these opportunities can help you stay current with industry trends and technologies.

AI Skills for Business Employees: Enhancing Efficiency and Boosting Productivity

As AI permeates every aspect of business operations, employees who are not directly involved in technical roles also need to acquire certain AI skills. These skills empower them to utilise AI tools effectively in their daily tasks, thereby enhancing efficiency and boosting productivity. Here are some key AI skills that are particularly beneficial for business employees:

Essential AI Skills for Business Employees

  1. Understanding AI Tools and Platforms: Business employees should become familiar with various AI tools and platforms that can automate routine tasks, such as customer relationship management (CRM) systems with AI capabilities, project management tools, and virtual assistants. Knowledge of how to use these tools effectively can streamline workflows and reduce the time spent on repetitive tasks.
  2. Data Literacy: Data literacy involves understanding how to interpret and use data effectively. Employees should be able to work with data, understand its sources, assess its quality, and derive insights using AI-powered analytics tools. This skill is crucial for making data-driven decisions and identifying trends and patterns that can inform business strategies.
  3. Basic Programming Knowledge: While not every business employee needs to be a coding expert, having a basic understanding of programming languages like Python or R can be beneficial. This knowledge enables employees to perform simple data manipulations, automate tasks, and customize AI tools to better fit their specific needs (see the short sketch after this list).
  4. Data Visualization: Being able to visualize data effectively helps in presenting complex information in an easily understandable format. Familiarity with AI-powered data visualization tools, such as Tableau or Power BI, can help employees create impactful reports and presentations that drive better decision-making.
  5. Process Automation: Robotic Process Automation (RPA) tools allow employees to automate repetitive and mundane tasks, freeing up time for more strategic activities. Understanding how to implement and manage RPA solutions can lead to significant productivity gains.
  6. Natural Language Processing (NLP) for Communication: NLP tools can enhance communication and customer service through applications like chatbots and automated response systems. Employees should understand how to use these tools to improve customer interactions and support services efficiently.
  7. AI-Enhanced Marketing Tools: In marketing, AI tools can optimize campaigns, analyze consumer behavior, and personalize customer experiences. Employees in marketing roles should be adept at using these tools to increase the effectiveness of their campaigns and achieve better ROI.
  8. Ethical AI Usage: Understanding the ethical implications of AI is important for ensuring that AI applications are used responsibly. Business employees should be aware of issues like data privacy, algorithmic bias, and transparency to ensure their use of AI aligns with ethical standards and regulations.
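
As a concrete illustration of the data literacy and basic programming points above, the following Python sketch uses pandas to clean a hypothetical sales export and produce a small summary report. The file name and column names are assumptions made for illustration only.

```python
import pandas as pd

# Hypothetical monthly sales export; the file and column names are illustrative assumptions.
df = pd.read_csv("monthly_sales.csv", parse_dates=["order_date"])

# Basic quality checks: count missing values and drop duplicate orders.
print(df.isna().sum())
df = df.drop_duplicates(subset="order_id")

# Summarise revenue by region and month for a quick report.
summary = (
    df.assign(month=df["order_date"].dt.to_period("M"))
      .groupby(["region", "month"])["revenue"]
      .sum()
      .reset_index()
)
summary.to_csv("regional_revenue_report.csv", index=False)
```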

Practical Applications in Daily Work

  • Customer Service: AI chatbots and virtual assistants can handle routine customer inquiries, providing quick and efficient service while freeing up human agents to tackle more complex issues.
  • Sales Forecasting: AI-powered analytics tools can predict sales trends and customer behaviors, helping sales teams to make more accurate forecasts and better allocate resources (a simple illustration follows this list).
  • Marketing Automation: AI can automate email campaigns, social media posts, and content recommendations, ensuring timely and personalized communication with customers.
  • Financial Analysis: AI tools can analyze financial data to detect anomalies, forecast trends, and assist in budgeting and financial planning, enabling more informed financial decisions.
  • Human Resources: AI can streamline recruitment processes by screening resumes, scheduling interviews, and even conducting preliminary interviews through AI-powered chatbots.
  • Supply Chain Management: AI can optimize supply chain operations by predicting demand, managing inventory, and identifying potential disruptions before they impact the business.
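
To illustrate the sales forecasting example above, here is a deliberately simple sketch that fits a linear trend to made-up monthly sales figures and projects the next quarter. Real AI-powered forecasting tools use far richer models, but the underlying idea of learning from historical data is the same.

```python
import numpy as np

# Twelve months of illustrative sales figures (thousands of pounds); purely made-up numbers.
sales = np.array([120, 125, 130, 128, 140, 150, 155, 160, 158, 170, 175, 180])
months = np.arange(len(sales))

# Fit a simple linear trend and project the next three months.
slope, intercept = np.polyfit(months, sales, deg=1)
next_quarter = slope * np.arange(len(sales), len(sales) + 3) + intercept
print("Forecast for the next three months:", np.round(next_quarter, 1))
```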

Conclusion

As AI continues to transform industries, having AI skills is becoming essential for professionals across all sectors. The ability to understand, develop, and implement AI solutions is no longer a niche skill set but a core requirement. Investing in AI education and gaining hands-on experience will not only enhance your career prospects but also contribute to the growth and innovation of your organization. In a world where AI is increasingly prevalent, those who embrace and master these skills will lead the charge in the future of work.

Incorporating AI skills into the daily work of business employees not only enhances efficiency but also boosts overall productivity. By understanding and leveraging AI tools and platforms, business employees can automate mundane tasks, make data-driven decisions, and contribute more strategically to their organizations. As AI continues to evolve, staying abreast of these skills will be crucial for maintaining competitiveness and driving business success.

Putting Out All Buckets When It Rains: Preparing for Future Droughts

In life, opportunities and challenges come in waves. Sometimes, we find ourselves amidst a downpour of chances, each one brimming with potential. Other times, we face droughts – periods where opportunities seem scarce and progress is hard to come by. I have always lived by the metaphor of “putting out all buckets when it rains – you never know when the next drought arrives”, which perfectly encapsulates the need to seize opportunities and prepare for future uncertainties. This concept is crucial not only for personal growth but also for professional success and financial stability.

The Rain: Recognising Opportunities

Rain symbolises abundance and opportunities. It’s that promotion at work, the new client for your business, or the chance to learn a new skill. Recognising these moments is the first step. Often, we become complacent or assume that such opportunities will always be there. But like rain, they can be unpredictable and sporadic.

Key Actions:

  • Stay Alert: Always be on the lookout for opportunities, even if they seem small or insignificant.
  • Be Prepared: Equip yourself with the necessary skills and knowledge to take advantage of these opportunities when they arise.
  • Act Swiftly: Don’t procrastinate. When an opportunity presents itself, act quickly and decisively.

The Buckets: Maximising Potential

Putting out all buckets means making the most of every opportunity. Each bucket represents a different aspect of your life or work—financial savings, career advancement, personal development, or relationships. The more buckets you put out, the more rain you can collect.

Key Actions:

  • Diversify: Just as you wouldn’t rely on one bucket, don’t rely on a single source of opportunity. Diversify your efforts across various areas.
  • Invest Wisely: Put your time, energy, and resources into actions that yield the highest returns.
  • Build Resilience: Ensure that your buckets are sturdy. This means building strong foundations in your skills, relationships, and financial health.

The Drought: Preparing for Scarcity

Droughts are inevitable. These are the tough times when opportunities are few and far between. However, the rain you collected earlier can sustain you through these dry spells. Preparing for droughts means being proactive and planning for the future, even when everything seems to be going well.

Key Actions:

  • Save for a Rainy Day: Financially, this means building an emergency fund. Professionally, it could mean keeping your skills sharp and your network active.
  • Stay Adaptable: Be ready to pivot and adapt to new circumstances. Flexibility can be a crucial asset during tough times.
  • Reflect and Learn: Use the downtime to reflect on past actions and learn from them. This can help you make better decisions when opportunities arise again.

Balancing Rain and Drought: A Holistic Approach

Balancing the metaphorical rain and drought requires a holistic approach. It’s about understanding that life is cyclical and being prepared for both the highs and lows. Here’s how to maintain this balance:

Key Actions:

  • Mindset: Cultivate a mindset of abundance and preparedness. Understand that both rain and drought are temporary and cyclical.
  • Continuous Improvement: Never stop improving yourself. Whether it’s learning new skills, improving your health, or building better relationships, continuous improvement ensures that you’re always ready to seize opportunities.
  • Community: Surround yourself with a supportive community. Whether it’s friends, family, or professional networks, having a support system can help you weather any storm.

Business Context: Leveraging Opportunities and Mitigating Risks

In the business world, the metaphor of “putting out all buckets when it rains as you never know when the next drought arrives” is particularly relevant. Companies often experience cycles of growth and stagnation, influenced by market trends, economic conditions, and industry disruptions. Understanding how to maximise opportunities during prosperous times and preparing for inevitable challenges can mean the difference between long-term success and failure.

Recognising Business Opportunities

In a business context, rain symbolises favourable market conditions, emerging trends, and new opportunities for growth. Whether it’s a surge in demand for your products, a successful marketing campaign, or a favourable economic environment, recognising these moments and capitalising on them is crucial.

Key Actions:

  • Market Analysis: Regularly analyse market trends and consumer behaviour to identify new opportunities early.
  • Innovation: Invest in research and development to stay ahead of the competition and meet emerging market needs.
  • Agility: Foster an agile business model that can quickly adapt to new opportunities and changing market conditions.

Maximising Business Potential

Putting out all buckets in a business context means deploying resources strategically to maximise returns. This involves diversifying revenue streams, optimising operations, and investing in growth areas.

Key Actions:

  • Diversify Revenue Streams: Don’t rely on a single product or service. Explore new markets and expand your product line to mitigate risk.
  • Optimise Operations: Streamline processes to improve efficiency and reduce costs. This can free up resources to invest in new opportunities.
  • Build Strong Partnerships: Form strategic alliances and partnerships that can open new avenues for growth and innovation.

Preparing for Business Droughts

Economic downturns, market disruptions, and other challenges are inevitable in business. Preparing for these droughts ensures your company can survive and even thrive during tough times.

Key Actions:

  • Financial Reserves: Maintain a healthy cash reserve to navigate through economic downturns without compromising your operations.
  • Risk Management: Implement comprehensive risk management strategies to identify, assess, and mitigate potential risks.
  • Continuous Improvement: Invest in employee training and development to keep your workforce adaptable and resilient.

Balancing Growth and Stability

Balancing periods of growth and stability requires a strategic approach. It involves taking calculated risks while safeguarding the business against potential downturns.

Key Actions:

  • Strategic Planning: Develop long-term strategic plans that account for both growth and potential risks.
  • Scenario Planning: Use scenario planning to prepare for various market conditions and ensure the business can adapt to different situations.
  • Sustainable Practices: Incorporate sustainability into your business model to ensure long-term viability and resilience against market fluctuations.

Case Study: Successful Implementation

Consider a tech company that recognised the rising trend of remote work early on. During the “rain,” they invested heavily in developing robust telecommuting software, diversified their product offerings, and formed strategic partnerships with major corporations. They also maintained substantial financial reserves and implemented strong risk management practices. When the COVID-19 pandemic hit, and remote work became the norm, they were well-prepared. Their prior investments paid off, and they not only weathered the storm but also emerged as a market leader.

Conclusion

In business, as in life, opportunities and challenges are cyclical. By recognising opportunities, maximising potential, and preparing for downturns, companies can navigate both the prosperous and challenging times effectively. The metaphor of “putting out all buckets when it rains” underscores the importance of being proactive, strategic, and resilient. By doing so, businesses can ensure sustained growth and long-term success, regardless of market conditions.

The Eternal Dilemma: Expert or Eternal Student in a Rapidly Evolving Tech Landscape

In the ever-accelerating world of technology, the line between being an expert and remaining a perpetual student is increasingly blurred. As we navigate continuous waves of innovation, the role of a technology professional is perpetually redefined. This leads to a fundamental question: in a field that evolves daily, can one ever truly be an expert, or is tech destined to make eternal students of us all?

The Pace of Technological Change

The first point of consideration is the unprecedented rate of technological change. Innovations such as artificial intelligence, blockchain, and quantum computing are not just new tools in the toolbox – they are reshaping the toolbox itself. Every breakthrough brings layers of complexity and new knowledge that must be mastered, which can be a daunting task for anyone striving to be an expert.

Defining Expertise in Technology

Traditionally, an expert is someone who possesses comprehensive and authoritative knowledge in a particular area. However, in technology, such expertise is often transient. What you know today might be obsolete tomorrow, or at least need significant updating. This fluidity prompts a reassessment of what it means to be an expert. Is it about having a deep understanding of current technologies, or is it the ability to learn and adapt swiftly to new developments?

The Specialist vs. Generalist Conundrum

In tech, specialists dive deep into specific areas like cybersecurity or cloud computing. They possess a depth of knowledge that can be critical for addressing intricate challenges in those fields. On the other hand, generalists have a broader understanding of multiple technologies. They can integrate diverse systems and solutions, which is increasingly valuable in a world where technologies often converge.

The dilemma arises in maintaining a balance. Specialists risk their expertise becoming less relevant as new technologies emerge, while generalists may lack the deep knowledge required to solve specialised problems.

Technology Leadership: Steering Through Constant Change

Technology leadership itself is a form of expertise. To lead in the tech world means more than managing people and projects; it involves steering the ship through the turbulent waters of technological innovation. Tech leaders must not only anticipate and adapt to technological change but also inspire their teams to embrace these changes enthusiastically.

A technology leader needs a robust set of skills:

  • Visionary Thinking: The ability to foresee future tech trends and how they can be harnessed for the organisation’s benefit.
  • Agility: Being able to pivot strategies quickly in response to new information or technologies.
  • Technical Proficiency: While not needing to be the deepest expert in every new tech, a leader should have a solid understanding of the technologies that are driving their business and industry.
  • Empathy and Communication: Leading through change requires convincing entire teams to come on board with new ways of thinking, which can only be done effectively with strong interpersonal skills and clear communication.
  • Resilience: Tech landscapes can change with daunting speed, and leaders need the resilience to endure setbacks and keep their teams motivated.

Perception of Expertise

Expertise in technology is also a matter of perception. Among peers, being seen as an expert often requires not just knowledge, but the ability to foresee industry trends, adapt quickly, and innovate. From an organisational perspective, an expert is often someone who can solve problems effectively, regardless of whether their solutions are grounded in deep speciality knowledge or a broader understanding of technology.

The Role of Lifelong Learning

The most consistent answer to navigating the expert-generalist spectrum is lifelong learning. In technology, learning is not a finite journey but a continuous process. The most successful professionals embrace the mindset of being both an expert and a student. They accumulate specialised knowledge and experience while staying open to new ideas and approaches.

Conclusion: Embracing a Dual Identity

Being a technology expert today means embracing the dual identity of expert and eternal student. It involves both deep specialisation and a readiness to broaden one’s horizons. In this ever-evolving landscape, perhaps the true experts are those who can adeptly learn, unlearn, and relearn. Whether one is perceived as an expert might depend on their ability to adapt and continue learning, more than the static knowledge they currently hold.

As we continue to witness rapid technological advancements, the value lies not just in expertise or general knowledge, but in the agility to navigate between them, ensuring relevance and leadership in the tech world.

In the words of Satya Nadella, CEO of Microsoft: “Don’t be a know-it-all, be a learn-it-all.”

Navigating the Complex Terrain of Data Governance and Global Privacy Regulations

In every business today, data has become one of the most valuable assets for organisations across all industries. However, managing this data responsibly and effectively presents a myriad of challenges, especially given the complex landscape of global data privacy laws. Here, we delve into the crucial aspects of data governance and how various international data protection regulations influence organisational strategies.

Essentials of Data Governance

Data governance encompasses the overall management of the availability, usability, integrity, and security of the data employed in an enterprise. A robust data governance programme focuses on several key areas:

  • Data Quality: Ensuring the accuracy, completeness, consistency, and reliability of data throughout its lifecycle. This involves setting standards and procedures for data entry, maintenance, and removal (a brief example follows this list).
  • Data Security: Protecting data from unauthorised access and breaches. This includes implementing robust security measures such as encryption, access controls, and regular audits.
  • Compliance: Adhering to relevant laws and regulations that govern data protection and privacy, such as GDPR, HIPAA, or CCPA. This involves keeping up to date with legal requirements and implementing policies and procedures to ensure compliance.
  • Data Accessibility: Making data available to stakeholders in an organised manner that respects security and privacy constraints. This includes defining who can access data, under what conditions, and ensuring that the data can be easily and efficiently retrieved.
  • Data Lifecycle Management: Managing the flow of an organisation’s data from creation and initial storage to the time when it becomes obsolete and is deleted. This includes policies on data retention, archiving, and disposal.
  • Data Architecture and Integration: Structuring data architecture so that it supports an organisation’s information needs. This often involves integrating data from multiple sources and ensuring that it is stored in formats that are suitable for analysis and decision-making.
  • Master Data Management: The process of managing, centralising, organising, categorising, localising, synchronising, and enriching master data according to the business rules of a company or enterprise.
  • Metadata Management: Keeping a catalogue of metadata to help manage data assets by making it easier to locate and understand data stored in various systems throughout the organisation.
  • Change Management: Managing changes to the data environment in a controlled manner to prevent disruptions to the business and to maintain data integrity and accuracy.
  • Data Literacy: Promoting data literacy among employees to enhance their understanding of data principles and practices, which can lead to better decision-making throughout the organisation.
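
As a small illustration of the data quality checks described above, the sketch below flags missing, duplicate, and malformed values in a hypothetical customer extract. The file, column names, and the simplified postcode rule are illustrative assumptions rather than a complete data quality framework.

```python
import pandas as pd

# Hypothetical customer extract; the file and column names are illustrative only.
customers = pd.read_csv("customers.csv")

rules = {
    "missing_email": customers["email"].isna(),
    "duplicate_customer_id": customers["customer_id"].duplicated(keep=False),
    # Deliberately simplistic UK-style postcode pattern, for illustration only.
    "invalid_postcode": ~customers["postcode"].astype(str).str.match(r"^[A-Z0-9 ]{5,8}$"),
}

report = {rule: int(mask.sum()) for rule, mask in rules.items()}
print("Data quality issues found:", report)
```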

By focusing on these areas, organisations can maximise the value of their data, reduce risks, and ensure that data management practices support their business objectives and regulatory requirements.

Understanding Global Data Privacy Laws

As data flows seamlessly across borders, understanding and complying with various data privacy laws become paramount. Here’s a snapshot of some of the significant data privacy regulations around the globe:

  • General Data Protection Regulation (GDPR): The cornerstone of data protection in the European Union, GDPR sets stringent guidelines for data handling and grants significant rights to individuals over their personal data.
  • California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA): These laws provide broad privacy rights and are among the most stringent in the United States.
  • Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada and Lei Geral de Proteção de Dados (LGPD) in Brazil reflect the growing trend of adopting GDPR-like standards.
  • UK General Data Protection Regulation (UK GDPR), post-Brexit, which continues to protect data in alignment with the EU’s standards.
  • Personal Information Protection Law (PIPL) in China, which indicates a significant step towards stringent data protection norms akin to GDPR.

These regulations underscore the need for robust data governance frameworks that not only comply with legal standards but also protect organisations from financial and reputational harm.

The USA and other countries have various regulations that address data privacy, though they often differ in scope and approach from the EU and UK GDPR. Here’s an overview of some of these regulations:

United States

The USA does not have a single, comprehensive federal law governing data privacy akin to the GDPR. Instead, it has a patchwork of federal and state laws that address different aspects of privacy:

  • Health Insurance Portability and Accountability Act (HIPAA): Protects medical information.
  • Children’s Online Privacy Protection Act (COPPA): Governs the collection of personal information from children under the age of 13.
  • California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA): These state laws resemble the GDPR more closely than other US laws, providing broad privacy rights concerning personal information.
  • Virginia Consumer Data Protection Act (VCDPA) and Colorado Privacy Act (CPA): Similar to the CCPA, these state laws offer consumers certain rights over their personal data.

European Union

  • General Data Protection Regulation (GDPR): This is the primary law regulating how companies protect EU citizens’ personal data. GDPR has set a benchmark globally for data protection and privacy laws.

United Kingdom

  • UK General Data Protection Regulation (UK GDPR): Post-Brexit, the UK has retained the EU GDPR in domestic law but has made some technical changes. It operates alongside the Data Protection Act 2018.

Canada

  • Personal Information Protection and Electronic Documents Act (PIPEDA): Governs how private sector organisations collect, use, and disclose personal information in the course of commercial business.

Australia

  • Privacy Act 1988 (including the Australian Privacy Principles): Governs the handling of personal information by most federal government agencies and some private sector organisations.

Brazil

  • Lei Geral de Proteção de Dados (LGPD): Brazil’s LGPD shares many similarities with the GDPR and is designed to unify around 40 different statutes that previously regulated personal data in Brazil.

Japan

  • Act on the Protection of Personal Information (APPI): Japan’s APPI was amended to strengthen data protection standards and align more closely with international standards, including the GDPR.

China

  • Personal Information Protection Law (PIPL): Implemented in 2021, this law is part of China’s framework of laws aimed at regulating cyberspace and protecting personal data similarly to the GDPR.

India

  • Digital Personal Data Protection Act (DPDP Act): Enacted in August 2023, the DPDP Act replaced the earlier Personal Data Protection Bill (PDPB) and provides India’s first comprehensive data protection framework.

Sri Lanka

  • Sri Lanka enacted the Personal Data Protection Act No. 09 of 2022 (the “Act”) in March 2022.
  • The PDPA regulates the processing of personal data and protects the rights of data subjects. It establishes principles for data collection, processing, and storage, and defines the roles of data controllers and processors.
  • During drafting, the committee considered international best practices, including the OECD Privacy Guidelines, APEC Privacy Framework, EU GDPR, and other data protection laws.

Each of these laws has its own unique set of requirements and protections, and businesses operating in these jurisdictions need to ensure they comply with the relevant legislation.

How data privacy legislation impacts data governance

Compliance with these regulations requires a comprehensive data governance framework that includes policies, procedures, roles, and responsibilities designed to ensure that data is managed in a way that respects individual privacy rights and complies with legal obligations. GDPR and other data privacy legislation therefore play a critical role in shaping data governance strategies, particularly for organisations that handle the personal data of individuals within the jurisdictions covered by these laws. Here’s how:

  • Data Protection by Design and by Default: GDPR and similar laws require organisations to integrate data protection into their processing activities and business practices, from the earliest design stages all the way through the lifecycle of the data. This means considering privacy in the initial design of systems and processes and ensuring that personal data is processed with the highest privacy settings by default.
  • Lawful Basis for Processing: Organisations must identify a lawful basis for processing personal data, such as consent, contractual necessity, legal obligations, vital interests, public interest, or legitimate interests. This requires careful analysis and documentation to ensure that the basis is appropriate and that privacy rights are respected.
  • Data Subject Rights: Data privacy laws typically grant individuals rights over their data, including the right to access, rectify, delete, or transfer their data (right to portability), and the right to object to certain types of processing. Data governance frameworks must include processes to address these rights promptly and effectively.
  • Data Minimization and Limitation: Privacy regulations often emphasize that organisations should collect only the data that is necessary for a specified purpose and retain it only as long as it is needed for that purpose. This requires clear data retention policies and procedures to ensure compliance and reduce risk (a short sketch follows this list).
  • Cross-border Data Transfers: GDPR and other regulations have specific requirements regarding the transfer of personal data across borders. Organisations must ensure that they have legal mechanisms in place, such as Standard Contractual Clauses (SCCs) or adequacy arrangements like the EU-U.S. Data Privacy Framework (which replaced the invalidated Privacy Shield).
  • Breach Notification: Most privacy laws require organisations to notify regulatory authorities and, in some cases, affected individuals of data breaches within a specific timeframe. Data governance policies must include breach detection, reporting, and investigation procedures to comply with these requirements.
  • Data Protection Officer (DPO): GDPR and certain other laws require organisations to appoint a Data Protection Officer if they engage in significant processing of personal data. The DPO is responsible for overseeing data protection strategies, compliance, and education.
  • Record-Keeping: Organisations are often required to maintain detailed records of data processing activities, including the purpose of processing, data categories processed, data recipient categories, and the envisaged retention times for different data categories.
  • Impact Assessments: GDPR mandates Data Protection Impact Assessments (DPIAs) for processing that is likely to result in high risks to individuals’ rights and freedoms. These assessments help organisations identify, minimize, and mitigate data protection risks.
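
To make the data minimisation and retention point concrete, here is a minimal sketch that identifies records older than an assumed retention period. The file, column names, and six-year period are illustrative only; in a real programme the deletion or anonymisation step would be governed by policy and logged for audit.

```python
import pandas as pd

RETENTION_YEARS = 6  # Illustrative figure; the real value comes from your retention policy.

# Hypothetical marketing-consent records with a collection date per row.
records = pd.read_csv("consent_records.csv", parse_dates=["collected_on"])

cutoff = pd.Timestamp.today() - pd.DateOffset(years=RETENTION_YEARS)
expired = records[records["collected_on"] < cutoff]

# In practice these rows would be deleted or anonymised and the action logged for audit.
print(f"{len(expired)} records exceed the retention period and should be reviewed for deletion.")
records[records["collected_on"] >= cutoff].to_csv("consent_records_retained.csv", index=False)
```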

Strategic Implications for Organisations

Organisations must integrate data protection principles early in the design phase of their projects and ensure that personal data is processed with high privacy settings by default. A lawful basis for processing data must be clearly identified and documented. Furthermore, data protection officers (DPOs) may need to be appointed to oversee compliance, particularly in large organisations or those handling sensitive data extensively.

Conclusion

Adopting a comprehensive data governance strategy is not merely about legal compliance; it is about building trust with customers and stakeholders, enhancing the operational effectiveness of the organisation, and securing a competitive advantage in the marketplace. By staying informed and agile, organisations can navigate the complexities of data governance and global privacy regulations effectively, ensuring sustainable and ethical use of their valuable data resources.

The Importance of Adhering to Personal Norms and Values – in a Natural & Artificial world

In life’s journey, our norms and values act as a compass, guiding our behaviour, decisions, and interactions with the world. Understanding these concepts and their impact on our lives is crucial for achieving job satisfaction, personal happiness, and overall health.

Defining Norms and Values

Values are the fundamental beliefs or ideals that individuals or societies hold dear. These beliefs guide priorities and motivate behaviour, influencing how we perceive what is right and wrong. Common examples of values include honesty, freedom, loyalty, and respect for others. Values are often deeply ingrained and can shape the course of one’s life.

Norms, on the other hand, are the unwritten rules that govern social behaviour. These are the expectations within a society or group about how its members should act under given circumstances. Norms can be further categorised into folkways, mores, and laws, each varying in terms of their societal importance and the severity of repercussions when breached.

The Difference Between Norms and Values

While values represent individual or collective beliefs about what is important, norms are more about actions—how those values are routinely expressed in day-to-day interactions. For instance, if a society values education highly (a value), there might be a norm that children should begin attending school at a certain age and respect their teachers.

Variation in Norms and Values

Norms and values differ among individuals due to various factors like cultural background, upbringing, education, and personal experiences. These influences can lead to a rich diversity of perspectives within communities. For example, while one culture might prioritise community and family ties, another might value individual achievements more highly.

The Importance of Maintaining Personal Norms and Values

Adhering to one’s norms and values is essential for several reasons:

  • Consistency and Integrity: Living in accordance with one’s beliefs and expectations fosters a consistent life approach, which in turn bolsters personal integrity and self-respect.
  • Job Satisfaction: When your career aligns with your personal values, it increases job satisfaction. For instance, someone who values helping others might find great satisfaction in nursing or social work.
  • Happiness in Life: Aligning actions with personal values leads to a more fulfilling life. This congruence creates a sense of purpose and decreases the internal conflict that can arise from living against one’s principles.
  • Health: Psychological research suggests that misalignment between one’s values and behaviour can lead to stress, dissatisfaction, and even mental health issues. Conversely, maintaining harmony between actions and values can promote better mental and physical health.

When personal norms and values collide with your environment

When personal norms and values conflict with those of the wider society and/or an employer, it can lead to several significant consequences, impacting both the individual and their relationships within these contexts:

  • Job Dissatisfaction and Reduced Productivity: If an individual’s values strongly clash with those of their employer, it can result in job dissatisfaction. This often leads to lower motivation and productivity. For instance, if a person values transparency and honesty but works in an environment where secrecy and political manoeuvring are the norm, they may feel disillusioned and less engaged with their work.
  • Stress and Mental Health Issues: Persistent conflict between personal values and those of one’s surroundings can cause chronic stress. This misalignment might lead the individual to continually question their decisions and actions, potentially leading to anxiety, depression, and other mental health problems.
  • Social Isolation: If an individual’s norms and values are out of sync with societal expectations, it can result in social isolation. This might occur in a community where certain beliefs or behaviours that are integral to a person’s identity are not accepted or are actively stigmatised. The feeling of being an outsider can exacerbate feelings of loneliness and alienation.
  • Ethical Dilemmas and Integrity Challenges: Individuals may face ethical dilemmas when their personal values are in opposition to those demanded by their roles or societal pressures. This can lead to difficult choices, such as compromising on personal ethics for professional gain or, conversely, risking career opportunities to maintain personal integrity.
  • Career Limitations: A misalignment of values can limit career advancement, especially in organisational cultures where ‘cultural fit’ is considered important for leadership roles. Individuals who do not share the core values of their organisation may find themselves overlooked for promotions or important projects.
  • Legal and Compliance Risks: In some cases, clashes between personal norms and societal or organisational rules can lead to legal issues, especially if an individual acts in a way that is legally compliant but against company policies, or vice versa.
  • Personal Dissatisfaction and Regret: Living in conflict with one’s personal values can lead to a profound sense of dissatisfaction and regret. This might manifest as a feeling that one is not living a ‘true’ or ‘authentic’ life, which can have long-term effects on happiness and overall well-being.

To manage these challenges, individuals often need to make deliberate choices about where to compromise and what is non-negotiable, potentially seeking environments (both professionally and personally) that better align with their own norms and values.

Examples of how Norms and Values shape our lives

These examples illustrate how personal norms and values are not just abstract concepts but lived experiences that shape decisions, behaviors, and interactions with the world. They underscore the importance of aligning one’s actions with one’s values, which can lead to a more authentic and satisfying life.

  • Career Choices: Take the story of Maria, a software engineer who prioritized environmental sustainability. She turned down lucrative offers from companies known for their high carbon footprints and instead chose to work for a startup focused on renewable energy solutions. Maria’s decision, driven by her personal values, not only shaped her career path but also brought her a sense of fulfillment and alignment with her beliefs about environmental conservation.
  • Social Relationships: Consider the case of James, who values honesty and transparency above all. His commitment to these principles sometimes put him at odds with friends who found his directness uncomfortable. However, this same honesty fostered deeper, more trusting relationships with like-minded individuals, ultimately shaping his social circle to include friends who share and respect his values.
  • Consumer Behavior: Aisha, a consumer who holds strong ethical standards for fair trade and workers’ rights, chooses to buy products exclusively from companies that demonstrate transparency and support ethical labor practices. Her shopping habits reflect her values and have influenced her family and friends to become more conscious of where their products come from, demonstrating how personal values can ripple outward to influence a wider community.
  • Healthcare Decisions: Tom, whose religious beliefs prioritize the sanctity of life, faced a tough decision when his terminally ill spouse was offered a form of treatment that could potentially prolong life but with a low quality of life. Respecting both his and his spouse’s values, he opted for palliative care, focusing on comfort and dignity rather than invasive treatments, highlighting how deeply personal values impact critical healthcare decisions.
  • Political Engagement: Sarah is deeply committed to social justice and equality. This commitment influences her political engagement; she volunteers for political campaigns that align with her values, participates in demonstrations, and uses her social media platforms to advocate for policy changes. Her active involvement is a direct manifestation of her values in action, impacting society’s larger political landscape.

Integrating Norms and Values into AI

The integration of norms and values into artificial intelligence (AI) systems is a complex and ongoing process that involves ethical considerations, programming decisions, and the application of various AI techniques. Here are some key aspects of how norms and values are ingrained into AI:

  • Ethical Frameworks and Guidelines – AI development is guided by ethical frameworks that outline the values and norms AI systems should adhere to. These frameworks often emphasize principles such as fairness, transparency, accountability, and respect for user privacy. Organizations like the European Union, IEEE, and various national bodies have proposed ethical guidelines that shape how AI systems are developed and deployed.
  • Training Data – The norms and values of an AI system are often implicitly embedded in the training data used to develop the system. The data reflects historical, cultural, and social norms of the time and place from which it was collected. If the data includes biases or reflects societal inequalities, these can inadvertently become part of the AI’s “learned” norms and values. Therefore, ensuring that training data is diverse and representative is crucial to align AI behavior with desired ethical standards.
  • Design Choices – The algorithms and models chosen for an AI system also reflect certain values. For example, choosing to prioritize accuracy over fairness in predictive policing software might reflect a value system that overlooks the importance of equitable outcomes. Design decisions also encompass the transparency of the AI system, such as whether its decisions can be easily interpreted by humans, which relates to the value of transparency and accountability.
  • Stakeholder Engagement – Involving a diverse range of stakeholders in the AI development process helps incorporate a broader spectrum of norms and values. This can include ethicists, community representatives, potential users, and domain experts. Their input can guide the development process to consider various ethical implications and societal needs, ensuring the AI system is more aligned with public values.
  • Regulation and Compliance – Regulations and legal frameworks play a significant role in embedding norms and values in AI. Compliance with data protection laws (like GDPR in the EU), nondiscrimination laws, and other regulations ensures that AI systems adhere to certain societal norms and legal standards, shaping their behavior and operational limits.
  • Continuous Monitoring and Adaptation – AI systems are often monitored throughout their lifecycle to ensure that they continue to operate within the intended ethical boundaries. This involves ongoing assessments to identify and mitigate any emergent behaviors or biases that could violate societal norms or individual rights.
  • AI Ethics in Practice – Implementation of AI ethics involves developing tools and methods that can audit, explain, and correct AI behavior. This includes techniques for fairness testing, bias mitigation, and explainable AI (XAI), which seeks to make AI decisions understandable to humans (a brief example follows this list).
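
As a small example of fairness testing in practice, the sketch below computes a demographic parity gap between two groups from made-up model decisions. Real audits use richer metrics and established toolkits, but the principle of comparing outcomes across groups is the same.

```python
import numpy as np

# Illustrative model decisions (1 = approved) and a protected attribute; all values are made up.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
group    = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()

# Demographic parity difference: a large gap flags the model for further bias investigation.
print(f"Approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
```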

By embedding norms and values in these various aspects of AI development and operation, developers aim to create AI systems that are not only effective but also ethically responsible and socially beneficial.

Integrating norms and values into artificial intelligence (AI) systems is crucial for ensuring these technologies operate in ways that are ethical, socially responsible, and beneficial to society. As AI systems increasingly perform tasks traditionally done by humans—from driving cars to making medical diagnoses—they must do so within the framework of societal expectations and ethical standards.

The importance of embedding norms and values into AI systems lies primarily in fostering trust and acceptance among users and stakeholders – encouraging integrity. When AI systems operate transparently and adhere to established ethical guidelines, they are more likely to be embraced by the public. Trust is particularly vital in sensitive areas such as healthcare, law enforcement, and financial services, where decisions made by AI can have profound impacts on people’s lives.

Moreover, embedding norms and values in AI helps to prevent and mitigate risks associated with bias and discrimination. AI systems trained on historical data can inadvertently perpetuate existing biases if these data reflect societal inequalities. By consciously integrating values such as fairness and equality into AI systems, developers can help ensure that AI applications do not reinforce negative stereotypes or unequal treatment.

Ethically aligned AI also supports regulatory compliance and reduces legal risks. With jurisdictions around the world beginning to implement laws specifically addressing AI, integrating norms and values into AI systems becomes not only an ethical imperative but a legal requirement. This helps companies avoid penalties and reputational damage associated with non-compliance.

Conclusion

Maintaining fidelity to your norms and values is not just about personal pride or integrity; it significantly influences your emotional and physical well-being. As society continually evolves, it becomes increasingly important to reflect on and adjust our values and norms to ensure they truly represent who we are and aspire to be. In this way, we can navigate life’s challenges more successfully and lead more satisfying lives.

Integrating norms and values into AI systems is not just about avoiding harm or fulfilling legal obligations; it’s about creating technologies that enhance societal well-being, promote justice, and enrich human life – cultivating a symbiotic relationship between human and machine. As AI technologies continue to evolve and permeate every aspect of our lives, maintaining this ethical alignment will be essential for achieving the full positive potential of AI while safeguarding against its risks.

Optimising Cloud Management: A Comprehensive Comparison of Bicep and Terraform for Azure Deployment

In the ever-evolving landscape of cloud computing, the ability to deploy and manage infrastructure efficiently is paramount. Infrastructure as Code (IaC) has emerged as a pivotal practice, enabling developers and IT operations teams to automate the provisioning of infrastructure through code. This practice not only speeds up the deployment process but also enhances consistency, reduces the potential for human error, and facilitates scalability and compliance.

Among the tools at the forefront of this revolution are Bicep and Terraform, both of which are widely used for managing resources on Microsoft Azure, one of the leading cloud service platforms. Bicep, developed by Microsoft, is designed specifically for Azure, offering a streamlined approach to managing Azure resources. On the other hand, Terraform, developed by HashiCorp, provides a more flexible, multi-cloud solution, capable of handling infrastructure across various cloud environments including Azure, AWS, and Google Cloud.

The choice between Bicep and Terraform can significantly influence the efficiency and effectiveness of cloud infrastructure management. This article delves into a detailed comparison of these two tools, exploring their capabilities, ease of use, and best use cases to help you make an informed decision that aligns with your organisational needs and cloud strategies.

Bicep and Terraform are both popular Infrastructure as Code (IaC) tools used to manage and provision infrastructure, especially for cloud platforms like Microsoft Azure. Here’s a detailed comparison of the two, focusing on key aspects such as design philosophy, ease of use, community support, and integration capabilities:

  • Language and Syntax
    • Bicep:
      Bicep is a domain-specific language (DSL) developed by Microsoft specifically for Azure. Its syntax is cleaner and more concise compared to ARM (Azure Resource Manager) templates. Bicep is designed to be easy to learn for those familiar with ARM templates, offering a declarative syntax that transpiles directly into ARM templates.
    • Terraform:
      Terraform uses its own configuration language called HashiCorp Configuration Language (HCL), which is also declarative. HCL is known for its human-readable syntax and is used to manage a wide variety of services beyond just Azure. Terraform’s language is more verbose compared to Bicep but is powerful in expressing complex configurations.
  • Platform Support
    • Bicep:
      Bicep is tightly integrated with Azure and is focused solely on Azure resources. This means it has excellent support for new Azure features and services as soon as they are released.
    • Terraform:
      Terraform is platform-agnostic and supports multiple providers including Azure, AWS, Google Cloud, and many others. This makes it a versatile tool if you are managing multi-cloud environments or need to handle infrastructure across different cloud platforms.
  • State Management
    • Bicep:
      Bicep relies on ARM for state management. Since ARM itself manages the state of resources, Bicep does not require a separate mechanism to keep track of resource states. This can simplify operations but might offer less control compared to Terraform.
    • Terraform:
      Terraform maintains its own state file which tracks the state of managed resources. This allows for more complex dependency tracking and precise state management but requires careful handling, especially in team environments to avoid state conflicts.
  • Tooling and Integration
    • Bicep:
      Bicep integrates seamlessly with Azure DevOps and GitHub Actions for CI/CD pipelines, leveraging native Azure tooling and extensions. It is well-supported within the Azure ecosystem, including integration with Azure Policy and other governance tools.
    • Terraform:
      Terraform also integrates well with various CI/CD tools and has robust support for modules which can be shared across teams and used to encapsulate complex setups. Terraform’s ecosystem includes Terraform Cloud and Terraform Enterprise, which provide advanced features for teamwork and governance.
  • Community and Support
    • Bicep:
      As a newer and Azure-specific tool, Bicep’s community is smaller but growing. Microsoft actively supports and updates Bicep. The community is concentrated around Azure users.
    • Terraform:
      Terraform has a large and active community with a wide range of custom providers and modules contributed by users around the world. This vast community support makes it easier to find solutions and examples for a variety of use cases.
  • Configuration as Code (CaC)
    • Bicep and Terraform:
      Both tools support Configuration as Code (CaC) principles, allowing not only the provisioning of infrastructure but also the configuration of services and environments. They enable codifying setups in a manner that is reproducible and auditable.

The table below summarises the key differences between Bicep and Terraform discussed above, helping you determine which tool might best fit your specific needs, especially when deploying and managing resources in Microsoft Azure for Infrastructure as Code (IaC) and Configuration as Code (CaC) development.

| Feature | Bicep | Terraform |
| --- | --- | --- |
| Language & Syntax | Simple, concise DSL designed for Azure. | HashiCorp Configuration Language (HCL), versatile and expressive. |
| Platform Support | Azure-specific with excellent support for Azure features. | Multi-cloud support, including Azure, AWS, Google Cloud, etc. |
| State Management | Uses Azure Resource Manager; no separate state management needed. | Manages its own state file, allowing for complex configurations and dependency tracking. |
| Tooling & Integration | Deep integration with Azure services and CI/CD tools like Azure DevOps. | Robust support for various CI/CD tools, includes Terraform Cloud for advanced team functionalities. |
| Community & Support | Smaller, Azure-focused community. Strong support from Microsoft. | Large, active community. Extensive range of modules and providers available. |
| Use Case | Ideal for exclusive Azure environments. | Suitable for complex, multi-cloud environments. |

Conclusion

Bicep might be more suitable if your work is focused entirely on Azure due to its simplicity and deep integration with Azure services. Terraform, on the other hand, would be ideal for environments where multi-cloud support is required, or where more granular control over infrastructure management and versioning is necessary. Each tool has its strengths, and the choice often depends on specific project requirements and the broader technology ecosystem in which your infrastructure operates.

Embracing Efficiency: The FinOps Framework Revolution

In an era where cloud computing is the backbone of digital transformation, managing cloud costs effectively has become paramount for businesses aiming for growth and sustainability. This is where the FinOps Framework enters the scene, a game-changer in the financial management of cloud services. Let’s dive into what FinOps is, how to implement it, and explore its benefits through real-life examples.

What is the FinOps Framework?

The FinOps Framework is a set of practices and principles designed to bring financial accountability to the variable spend model of the cloud, enabling organisations to get the most value out of every pound spent. FinOps, commonly described as a blend of “Finance” and “DevOps”, combines the disciplines of finance, operations, and engineering to ensure that cloud investments are aligned with business outcomes.

The core of the FinOps Framework revolves around a few key principles:

  • Collaboration and Accountability: Encouraging a culture of financial accountability across different departments and teams, enabling them to work together to manage and optimise cloud costs.
  • Real-time Decision Making: Utilising real-time data to make informed decisions about cloud usage and expenditures, enabling teams to adjust their strategies quickly as business needs and cloud offerings evolve.
  • Optimisation and Efficiency: Continuously seeking ways to improve the efficiency of cloud investments, through cost optimisation strategies such as selecting the right mix of cloud services, identifying unused or underutilised resources, and leveraging commitments or discounts offered by cloud providers (a short example follows this list).
  • Financial Management and Reporting: Implementing tools and processes to track, report, and forecast cloud spending accurately, ensuring transparency and enabling better budgeting and forecasting.
  • Culture of Cloud Cost Management: Embedding cost considerations into the organisational culture and the lifecycle of cloud usage, from planning and budgeting to deployment and operations.
  • Governance and Control: Establishing policies and controls to manage cloud spend without hindering agility or innovation, ensuring that cloud investments are aligned with business objectives.

The FinOps Foundation, an independent organisation, plays a pivotal role in promoting and advancing the FinOps discipline by providing education, best practices, and industry benchmarks. The organisation supports the FinOps community by offering certifications, resources, and forums for professionals to share insights and strategies for cloud cost management.


Implementing FinOps: A Step-by-Step Guide

  1. Establish a Cross-Functional Team: Start by forming a FinOps team that includes members from finance, IT, and business units. This team is responsible for driving FinOps practices throughout the organisation.
  2. Understand Cloud Usage and Costs: Implement tools and processes to gain visibility into your cloud spending. This involves tracking usage and costs in real time, identifying trends, and pinpointing areas of inefficiency (a minimal sketch follows this list).
  3. Create a Culture of Accountability: Promote a culture where every team member is aware of cloud costs and their impact on the organisation. Encourage teams to take ownership of their cloud usage and spending.
  4. Optimise Existing Resources: Regularly review and adjust your cloud resources. Look for opportunities to resize, remove, or replace resources to ensure you are only paying for what you need.
  5. Forecast and Budget: Develop accurate forecasting and budgeting processes that align with your cloud spending trends. This helps in better financial planning and reduces surprises in cloud costs.
  6. Implement Governance and Control: Establish policies and governance mechanisms to control cloud spending without stifling innovation. This includes setting spending limits and approval processes for cloud services.
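
To make step 2 concrete, here is a minimal sketch in Python. It assumes you have exported billing data (for example, an Azure Cost Management or AWS Cost and Usage export) to a CSV file; the file name and columns (`date`, `team`, `service`, `cost`) are hypothetical placeholders, not a prescribed format.

```python
import pandas as pd

# Hypothetical CSV export from your cloud provider's billing tool.
# Assumed columns: date, team, service, cost (names are illustrative only).
costs = pd.read_csv("cloud_costs_export.csv", parse_dates=["date"])

# Aggregate spend per team per month to create a simple visibility report.
costs["month"] = costs["date"].dt.to_period("M")
monthly = costs.groupby(["team", "month"])["cost"].sum().reset_index()

# Flag teams whose month-on-month spend grew by more than 20%.
monthly = monthly.sort_values(["team", "month"])
monthly["previous"] = monthly.groupby("team")["cost"].shift(1)
monthly["growth"] = (monthly["cost"] - monthly["previous"]) / monthly["previous"]
alerts = monthly[monthly["growth"] > 0.20]

print(alerts[["team", "month", "cost", "growth"]])
```

In practice the same data would feed dashboards and budget alerts, but even a simple report like this gives teams the cost visibility that FinOps depends on.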

The Benefits of Adopting FinOps

Cost Optimisation: By gaining visibility into cloud spending, organisations can identify wasteful expenditure and optimise resource usage, leading to significant cost savings.

Enhanced Agility: FinOps practices enable businesses to adapt quickly to changing needs by making informed decisions based on real-time data, thus improving operational agility.

Better Collaboration: The framework fosters collaboration between finance, operations, and engineering teams, breaking down silos and enhancing overall efficiency.

Informed Decision-Making: With detailed insights into cloud costs and usage, businesses can make informed decisions that align with their strategic objectives.

Real-Life Examples

A Global Retail Giant: By implementing FinOps practices, this retail powerhouse was able to reduce its cloud spending by 30% within the first year. The company achieved this by identifying underutilised resources and leveraging committed use discounts from their cloud provider.

A Leading Online Streaming Service: This entertainment company used FinOps to manage its massive cloud infrastructure more efficiently. Through detailed cost analysis and resource optimisation, they were able to handle growing subscriber numbers without proportionally increasing cloud costs.

A Tech Start-up: A small but rapidly growing tech firm adopted FinOps early in its journey. This approach enabled the start-up to scale its operations seamlessly, maintaining control over cloud costs even as their usage skyrocketed.

Conclusion

The FinOps Framework is not just about cutting costs; it’s about maximising the value of cloud investments in a disciplined and strategic manner. By fostering collaboration, enhancing visibility, and promoting a culture of accountability, organisations can turn their cloud spending into a strategic advantage. As cloud computing continues to evolve, adopting FinOps practices will be key to navigating the complexities of cloud management, ensuring businesses remain competitive in the digital age.

AI in practice for the enterprise: Navigating the Path to Success

In just a few years, Artificial Intelligence (AI) has emerged as a transformative force for businesses across sectors. Its potential to drive innovation, efficiency, and competitive advantage is undeniable. Yet, many enterprises find themselves grappling with the challenge of harnessing AI’s full potential. This blog post delves into the critical aspects that can set businesses up for success with AI, exploring the common pitfalls, the risks of staying on the sidelines, and the foundational pillars necessary for AI readiness.

Why Many Enterprises Struggle to Use AI Effectively

Despite the buzz around AI, a significant number of enterprises struggle to integrate it effectively into their operations. The reasons are manifold:

  • Lack of Clear Strategy: Many organisations dive into AI without a strategic framework, leading to disjointed efforts and initiatives that fail to align with business objectives.
  • Data Challenges: AI thrives on data. However, issues with data quality, accessibility, and integration can severely limit AI’s effectiveness. Many enterprises are sitting on vast amounts of unstructured data, which remains untapped due to these challenges.
  • Skill Gap: There’s a notable skill gap in the market. The demand for AI expertise far outweighs the supply, leaving many enterprises scrambling to build or acquire the necessary talent.
  • Cultural Resistance: Implementing AI often requires significant cultural and operational shifts. Resistance to change can stifle innovation and slow down AI adoption.

The Risks of Ignoring AI

In the digital age, failing to leverage AI can leave enterprises at a significant disadvantage. Here are some of the critical opportunities missed:

  • Lost Competitive Edge: Competitors who effectively utilise AI can gain a significant advantage in terms of efficiency, customer insights, and innovation, leaving others behind.
  • Inefficiency: Without AI, businesses may continue to rely on manual, time-consuming processes, leading to higher costs and lower productivity.
  • Missed Insights: AI has the power to unlock deep insights from data. Without it, enterprises miss out on opportunities to make informed decisions and anticipate market trends.

Pillars of Data and AI Readiness

To harness the power of AI, enterprises need to build on the following foundational pillars:

  • Data Governance and Quality: Establishing strong data governance practices ensures that data is accurate, accessible, and secure. Quality data is the lifeblood of effective AI systems.
  • Strategic Alignment: AI initiatives must be closely aligned with business goals and integrated into the broader digital transformation strategy.
  • Talent and Culture: Building or acquiring AI expertise is crucial. Equally important is fostering a culture that embraces change, innovation, and continuous learning.
  • Technology Infrastructure: A robust and scalable technology infrastructure, including cloud computing and data analytics platforms, is essential to support AI initiatives.

Best Practices for AI Success

To maximise the benefits of AI, enterprises should consider the following best practices:

  • Start with a Pilot: Begin with manageable, high-impact projects. This approach allows for learning and adjustments before scaling up.
  • Focus on Data Quality: Invest in systems and processes to clean, organise, and enrich data. High-quality data is essential for training effective AI models (see the sketch after this list).
  • Embrace Collaboration: AI success often requires collaboration across departments and with external partners. This approach ensures a diversity of skills and perspectives.
  • Continuous Learning and Adaptation: The AI landscape is constantly evolving. Enterprises must commit to ongoing learning and adaptation to stay ahead.
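
As a concrete illustration of the data-quality point above, the hedged sketch below uses pandas to profile a hypothetical customer dataset (the file name and column names are assumptions) and report the issues that most often undermine AI models: missing values, duplicates, and out-of-range entries.

```python
import pandas as pd

# Hypothetical training dataset; the file and columns are illustrative only.
df = pd.read_csv("customers.csv")

report = {
    # Share of missing values per column.
    "missing_ratio": df.isna().mean().round(3).to_dict(),
    # Number of fully duplicated rows.
    "duplicate_rows": int(df.duplicated().sum()),
    # Example domain check: ages outside a plausible range.
    "implausible_ages": int(((df["age"] < 0) | (df["age"] > 120)).sum()),
}

for check, result in report.items():
    print(f"{check}: {result}")
```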

Conclusion

While integrating AI into enterprise operations presents challenges, the potential rewards are too significant to ignore. By understanding the common pitfalls, the risks of inaction, and the foundational pillars of AI readiness, businesses can set themselves up for success. Embracing best practices will not only facilitate the effective use of AI but also ensure that enterprises remain competitive in the digital era.

Embracing the “Think Product” Mindset in Software Development

In the realm of software development, shifting from a project-centric to a product-oriented mindset can be a game-changer for both developers and businesses alike. This paradigm, often encapsulated in the phrase “think product,” urges teams to design and build software solutions with the flexibility, scalability, and vision of a product intended for a broad audience. This approach not only enhances the software’s utility and longevity but also maximises the economies of scale, making the development process more efficient and cost-effective in the long run.

The Core of “Think Product”

The essence of “think product” lies in the anticipation of future needs and the creation of solutions that are not just tailored to immediate requirements but are adaptable, scalable, and capable of evolving over time. This involves embracing best practices such as reusability, modularity, service orientation, generality, client-agnosticism, and parameter-driven design.

Reusability: The Building Blocks of Efficiency

Reusability is about creating software components that can be easily repurposed across different projects or parts of the same project. This approach minimises duplication of effort, fosters consistency, and speeds up the development process. By focusing on reusability, developers can construct a library of components, functions, and services that serve as a versatile toolkit for building new solutions more swiftly and efficiently.

Modularity: Independence and Integration

Modularity involves designing software in self-contained units or modules that can operate independently but can be integrated seamlessly to form a larger system. This facilitates easier maintenance, upgrades, and scalability, as changes can be made to individual modules without impacting the entire system. Modularity also enables parallel development, where different teams work on separate modules simultaneously, thus accelerating the development cycle.

Service Orientation: Flexibility and Scalability

Service-oriented architecture (SOA) emphasises creating software solutions as a collection of services that communicate and operate together. This approach enhances flexibility, as services can be reused, replaced, or scaled independently of each other. It also promotes interoperability, making it easier to integrate with external systems and services.

Generality: Beyond Specific Use Cases

Designing software with generality in mind means creating solutions that are not overly specialised to a specific task or client. Instead, they are versatile enough to accommodate a range of requirements. This broader applicability maximises the potential user base and market relevance of the software, contributing to its longevity and success.

Client Agnosticism: Serving a Diverse Audience

A client-agnostic approach ensures that software solutions are compatible across various platforms, devices, and user environments. This universality makes the product accessible to a wider audience, enhancing its marketability and usability across different contexts.

Parameter-Driven Design: Flexibility at Its Core

Parameter-driven design allows software behaviour and features to be customised through external parameters or configuration files, rather than hardcoded values. This adaptability enables the software to cater to diverse user needs and scenarios without requiring significant code changes, making it more versatile and responsive to market demands.
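
A minimal sketch of parameter-driven design, assuming a hypothetical pricing feature: the behaviour below is controlled entirely by a configuration object (which could equally be loaded from a JSON or YAML file), so the same code serves different clients without modification.

```python
import json

# Hypothetical configuration; in practice this would live in a JSON/YAML file
# or a settings service rather than in code.
config = json.loads("""
{
    "currency": "GBP",
    "vat_rate": 0.20,
    "include_vat": true,
    "decimal_places": 2
}
""")

def format_price(net_amount: float, cfg: dict) -> str:
    """Format a price according to externally supplied parameters."""
    total = net_amount * (1 + cfg["vat_rate"]) if cfg["include_vat"] else net_amount
    return f"{total:.{cfg['decimal_places']}f} {cfg['currency']}"

print(format_price(100.0, config))  # 120.00 GBP with the configuration above
```

Changing the currency, tax treatment, or precision for a new client then becomes a configuration change rather than a code change.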

Cultivating the “Think Product” Mindset

Adopting a “think product” mindset necessitates a cultural shift within the development team and the broader organisation. It involves embracing long-term thinking, prioritising quality and scalability, and being open to feedback and adaptation. This mindset encourages continuous improvement, innovation, and a focus on delivering value to a wide range of users.

By integrating best practices like reusability, modularity, service orientation, generality, client agnosticism, and parameter-driven design, developers can create software solutions that stand the test of time. These practices not only contribute to the creation of superior products but also foster a development ecosystem that is more sustainable, efficient, and prepared to meet the challenges of an ever-evolving technological landscape.

Unlocking Developer Potential: Strategies for Building High-Performing Tech Teams

Introduction

Attracting and retaining top developer talent is crucial for technology leaders, especially in a highly competitive landscape. With software innovation driving business growth, organisations with high-performing engineering cultures gain a significant advantage. Fostering this culture goes beyond perks; it requires a thoughtful approach to talent management that prioritises the developer experience.

This blog post explores strategies to enhance talent management and create an environment where developers thrive. By fostering psychological safety, investing in top-tier tools, and offering meaningful growth opportunities, we can boost innovation, productivity, and satisfaction. Let’s dive in and unlock the full potential of our development teams.

1. Understanding the Importance of Developer Experience

Before diving into specific tactics, it’s important to understand why prioritising developer experience matters:

  • Attracting Top Talent: In a competitive job market, developers can choose their employers. Organisations that offer opportunities for experimentation, stay abreast of the latest technologies, and focus on outcomes over outputs have an edge in attracting the best talent.
  • Boosting Productivity and Innovation: Supported, empowered, and engaged developers bring their best to work daily, resulting in higher productivity, faster problem-solving, and innovative solutions.
  • Reducing Turnover: Developers who feel valued and fulfilled are less likely to leave, improving retention rates and reducing the costs associated with constant hiring and training.

2. Fostering Psychological Safety

Psychological safety—the belief that one can speak up, take risks, and make mistakes without fear of punishment—is essential for high-performing teams. Here’s how to cultivate it:

  • Encourage Open Communication: Create an environment where developers feel safe sharing ideas, asking questions, and providing feedback. Use one-on-ones, team meetings, and anonymous surveys to solicit input.
  • Embrace Failure as Learning: Frame mistakes as learning opportunities rather than assigning blame. Encourage developers to share their failures and lessons learned.
  • Model Vulnerability: Leaders set the tone. By admitting mistakes and asking for help, we create space for others to do the same.

3. Investing in World-Class Tools

Providing the best tools boosts productivity, creativity, and job satisfaction. Focus on these areas:

  • Hardware and Software: Equip your team with high-performance computers, multiple monitors, and ergonomic peripherals. Regularly update software licences.
  • Development Environments: Offer cutting-edge IDEs, version control systems, and collaboration tools. Automate tasks like code formatting and testing.
  • Infrastructure: Ensure your development, staging, and production environments are reliable, scalable, and easy to work with. Embrace cloud technologies and infrastructure-as-code for rapid iteration and deployment.

4. Providing Meaningful Growth Opportunities

Developers thrive on challenge and growth. Here’s how to keep them engaged:

  • Tailored Learning Paths: Work with each developer to create a personalised learning plan aligned with their career goals. Provide access to online courses, face-to-face training, conferences, and mentorship.
  • Encourage Side Projects: Give developers time for passion projects to stretch their skills. Host hackathons or innovation days to spark new ideas.
  • Create Leadership Opportunities: Identify high-potential developers and offer chances to lead projects, mentor juniors, or present work to stakeholders.

5. Measuring and Iterating

Measure the impact of talent management efforts and continuously improve:

  • Developer Satisfaction: Survey your team regularly to gauge happiness, engagement, and psychological safety. Look for trends and areas for improvement.
  • Productivity Metrics: Track key performance indicators such as Objectives and Key Results (OKRs), cycle time, defect rates, and feature throughput. Celebrate successes and identify opportunities to streamline processes (a small sketch of two of these calculations follows this list).
  • Retention Rates: Monitor turnover and conduct exit interviews to understand why developers leave. Use these insights to refine your approach.
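
As an illustration of the metrics point above, the sketch below computes two of the named indicators, cycle time and defect rate, from a hypothetical list of completed work items; the data structure is an assumption, not a prescribed format from any particular tracking tool.

```python
from datetime import date
from statistics import mean

# Hypothetical completed work items exported from your tracking tool.
work_items = [
    {"started": date(2024, 3, 1), "finished": date(2024, 3, 5), "defect": False},
    {"started": date(2024, 3, 2), "finished": date(2024, 3, 4), "defect": True},
    {"started": date(2024, 3, 6), "finished": date(2024, 3, 12), "defect": False},
]

# Cycle time: average days from work starting to work finishing.
cycle_time = mean((item["finished"] - item["started"]).days for item in work_items)

# Defect rate: share of delivered items that were defect fixes.
defect_rate = sum(item["defect"] for item in work_items) / len(work_items)

print(f"Average cycle time: {cycle_time:.1f} days")
print(f"Defect rate: {defect_rate:.0%}")
```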

6. Partnering with HR

Enhancing developer experience requires collaboration with HR:

  • Collaborate on Hiring: Work with recruiters to create compelling job descriptions and interview processes that highlight your commitment to the developer experience.
  • Align on Performance Management: Ensure that performance reviews, compensation, and promotions align with your talent management philosophy. Advocate for practices that reward innovation and growth.
  • Champion Diversity, Equality, and Inclusion: Partner with HR to create initiatives that foster a diverse and inclusive culture, driving innovation through multiple perspectives.

7. Building a Community of Practice

Build a sense of community among your developers:

  • Host Regular Events: Organise meetups, lunch-and-learns, or hackathons for knowledge sharing and collaboration.
  • Create Communication Channels: Use Slack, Microsoft Teams, or other tools for technical discussions and informal conversations.
  • Celebrate Successes: Regularly recognise and reward developers who exemplify your values or achieve significant milestones.

Conclusion

In conclusion, cultivating a high-performing tech team goes beyond simply hiring skilled developers; it requires a strategic and holistic approach to talent management. By prioritising psychological safety, investing in superior tools, and providing avenues for meaningful growth, organisations can not only attract top talent but also nurture a culture of innovation and satisfaction. Regular assessment of these strategies through feedback, performance metrics, and collaboration with HR can further refine and enhance the developer experience. By committing to these principles, technology leaders can build resilient, innovative teams that are well-equipped to drive business success in an ever-evolving digital landscape. Let’s take these insights forward and transform our development teams into powerful engines of growth and innovation.

Embracing Bimodal Model: A Data-Driven Journey for Modern Organisations

With data being the lifeblood of organisations, the emphasis on data management keeps them on a continuous search for innovative approaches to harness and optimise the power of their data assets. In this pursuit, the bimodal model is a well-established strategy that data-driven enterprises can employ successfully. It combines the stability of traditional data management with the agility of modern data practices, providing a delivery methodology that facilitates rapid innovation and resilient technology service provision.

Understanding the Bimodal Model

Gartner states: “Bimodal IT is the practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasising safety and accuracy. Mode 2 is exploratory and nonlinear, emphasising agility and speed.”

At its core, the bimodal model advocates for a dual approach to data management. Mode 1 focuses on the stable, predictable aspects of data, ensuring the integrity, security, and reliability of core business processes. This mode aligns with traditional data management practices, where accuracy and consistency are paramount. On the other hand, Mode 2 emphasizes agility, innovation, and responsiveness to change. It enables organizations to explore emerging technologies, experiment with new data sources, and adapt swiftly to evolving business needs.

Benefits of Bimodal Data Management

1. Optimised Performance and Stability: Mode 1 ensures that essential business functions operate smoothly, providing a stable foundation for the organization.

Mode 1 of the bimodal model is dedicated to maintaining the stability and reliability of core business processes. This is achieved through robust data governance, stringent quality controls, and established best practices in data management. By ensuring the integrity of data and the reliability of systems, organizations can optimise the performance of critical operations. This stability is especially crucial for industries where downtime or errors can have significant financial or operational consequences, such as finance, healthcare, and manufacturing.

Example: In the financial sector, a major bank implemented the bimodal model to enhance its core banking operations. Through Mode 1, the bank ensured the stability of its transaction processing systems, reducing system downtime by 20% and minimizing errors in financial transactions. This stability not only improved customer satisfaction but also resulted in a 15% increase in operational efficiency, as reported in the bank’s annual report.

2. Innovation and Agility: Mode 2 allows businesses to experiment with cutting-edge technologies like AI, machine learning, and big data analytics, fostering innovation and agility in decision-making processes.

Mode 2 is the engine of innovation within the bimodal model. It provides the space for experimentation with emerging technologies and methodologies. Businesses can leverage AI, machine learning, and big data analytics to uncover new insights, identify patterns, and make informed decisions. This mode fosters agility by encouraging a culture of continuous improvement and adaptation to technological advancements. It enables organizations to respond quickly to market trends, customer preferences, and competitive challenges, giving them a competitive edge in dynamic industries.

Example: A leading e-commerce giant adopted the bimodal model to balance stability and innovation in its operations. Through Mode 2, the company integrated machine learning algorithms into its recommendation engine. As a result, the accuracy of personalized product recommendations increased by 25%, leading to a 10% rise in customer engagement and a subsequent 12% growth in overall sales. This successful integration of Mode 2 practices directly contributed to the company’s market leadership in the highly competitive online retail space.

3. Enhanced Scalability: The bimodal approach accommodates the scalable growth of data-driven initiatives, ensuring that the organization can handle increased data volumes efficiently.

In the modern data landscape, the volume of data generated is growing exponentially. Mode 1 ensures that foundational systems are equipped to handle increasing data loads without compromising performance or stability. Meanwhile, Mode 2 facilitates the implementation of scalable technologies and architectures, such as cloud computing and distributed databases. This combination allows organizations to seamlessly scale their data infrastructure, supporting the growth of data-driven initiatives without experiencing bottlenecks or diminishing performance.

Example: A global technology firm leveraged the bimodal model to address the challenges of data scalability in its cloud-based services. In Mode 1, the company optimized its foundational cloud infrastructure, ensuring uninterrupted service during periods of increased data traffic. Simultaneously, through Mode 2 practices, the firm adopted containerization and microservices architecture, resulting in a 30% improvement in scalability. This enhanced scalability enabled the company to handle a 50% surge in user data without compromising performance, leading to increased customer satisfaction and retention.

4. Faster Time-to-Insights: By leveraging Mode 2 practices, organizations can swiftly analyze new data sources, enabling faster extraction of valuable insights for strategic decision-making.

Mode 2 excels in rapidly exploring and analyzing new and diverse data sources. This capability significantly reduces the time it takes to transform raw data into actionable insights. Whether it’s customer feedback, market trends, or operational metrics, Mode 2 practices facilitate agile and quick analysis. This speed in obtaining insights is crucial in fast-paced industries where timely decision-making is a competitive advantage.

Example: A healthcare organization implemented the bimodal model to expedite the analysis of patient data for clinical decision-making. Through Mode 2, the organization utilized advanced analytics and machine learning algorithms to process diagnostic data. The implementation led to a 40% reduction in the time required for diagnosis, enabling medical professionals to make quicker and more accurate decisions. This accelerated time-to-insights not only improved patient outcomes but also contributed to the organization’s reputation as a leader in adopting innovative healthcare technologies.

5. Adaptability in a Dynamic Environment: Bimodal data management equips organizations to adapt to market changes, regulatory requirements, and emerging technologies effectively.

In an era of constant change, adaptability is a key determinant of organizational success. Mode 2’s emphasis on experimentation and innovation ensures that organizations can swiftly adopt and integrate new technologies as they emerge. Additionally, the bimodal model allows organizations to navigate changing regulatory landscapes by ensuring that core business processes (Mode 1) comply with existing regulations while simultaneously exploring new approaches to meet evolving requirements. This adaptability is particularly valuable in industries facing rapid technological advancements or regulatory shifts, such as fintech, healthcare, and telecommunications.

Example: A telecommunications company embraced the bimodal model to navigate the dynamic landscape of regulatory changes and emerging technologies. In Mode 1, the company ensured compliance with existing telecommunications regulations. Meanwhile, through Mode 2, the organization invested in exploring and adopting 5G technologies. This strategic approach allowed the company to maintain regulatory compliance while positioning itself as an early adopter of 5G, resulting in a 25% increase in market share and a 15% growth in revenue within the first year of implementation.

Implementation Challenges and Solutions

Implementing a bimodal model in data management is not without its challenges. Legacy systems, resistance to change, and ensuring a seamless integration between modes can pose significant hurdles. However, these challenges can be overcome through a strategic approach that involves comprehensive training, fostering a culture of innovation, and investing in robust data integration tools.

1. Legacy Systems: Overcoming the Weight of Tradition

Challenge: Many organizations operate on legacy systems that are deeply ingrained in their processes. These systems, often built on older technologies, can be resistant to change, making it challenging to introduce the agility required by Mode 2.

Solution: A phased approach is crucial when dealing with legacy systems. Organizations can gradually modernize their infrastructure, introducing new technologies and methodologies incrementally. This could involve the development of APIs to bridge old and new systems, adopting microservices architectures, or even considering a hybrid cloud approach. Legacy system integration specialists can play a key role in ensuring a smooth transition and minimizing disruptions.

2. Resistance to Change: Shifting Organizational Mindsets

Challenge: Resistance to change is a common challenge when implementing a bimodal model. Employees accustomed to traditional modes of operation may be skeptical or uncomfortable with the introduction of new, innovative practices.

Solution: Fostering a culture of change is essential. This involves comprehensive training programs to upskill employees on new technologies and methodologies. Additionally, leadership plays a pivotal role in communicating the benefits of the bimodal model, emphasizing how it contributes to both stability and innovation. Creating cross-functional teams that include members from different departments and levels of expertise can also promote collaboration and facilitate a smoother transition.

3. Seamless Integration Between Modes: Ensuring Cohesion

Challenge: Integrating Mode 1 (stability-focused) and Mode 2 (innovation-focused) operations seamlessly can be complex. Ensuring that both modes work cohesively without compromising the integrity of data or system reliability is a critical challenge.

Solution: Implementing robust data governance frameworks is essential for maintaining cohesion between modes. This involves establishing clear protocols for data quality, security, and compliance. Organizations should invest in integration tools that facilitate communication and data flow between different modes. Collaboration platforms and project management tools that promote transparency and communication can bridge the gap between teams operating in different modes, fostering a shared understanding of goals and processes.

4. Lack of Skillset: Nurturing Expertise for Innovation

Challenge: Mode 2 often requires skills in emerging technologies such as artificial intelligence, machine learning, and big data analytics. Organizations may face challenges in recruiting or upskilling their workforce to meet the demands of this innovative mode.

Solution: Investing in training programs, workshops, and certifications can help bridge the skills gap. Collaboration with educational institutions or partnerships with specialized training providers can ensure that employees have access to the latest knowledge and skills. Creating a learning culture within the organization, where employees are encouraged to explore and acquire new skills, is vital for the success of Mode 2.

5. Overcoming Silos: Encouraging Cross-Functional Collaboration

Challenge: Siloed departments and teams can hinder the flow of information and collaboration between Mode 1 and Mode 2 operations. Communication breakdowns can lead to inefficiencies and conflicts.

Solution: Breaking down silos requires a cultural shift and the implementation of cross-functional teams. Encouraging open communication channels, regular meetings between teams from different modes, and fostering a shared sense of purpose can facilitate collaboration. Leadership should promote a collaborative mindset, emphasizing that both stability and innovation are integral to the organization’s success.

By addressing these challenges strategically, organizations can create a harmonious bimodal environment that combines the best of both worlds—ensuring stability in core operations while fostering innovation to stay ahead in the dynamic landscape of data-driven decision-making.

Case Studies: Bimodal Success Stories

Several forward-thinking organisations have successfully implemented the bimodal model to enhance their data management capabilities. Companies like Netflix, Amazon, and Airbnb have embraced this approach, allowing them to balance stability with innovation, leading to improved customer experiences and increased operational efficiency.

Netflix: Balancing Stability and Innovation in Entertainment

Netflix, a pioneer in the streaming industry, has successfully implemented the bimodal model to revolutionize the way people consume entertainment. In Mode 1, Netflix ensures the stability of its streaming platform, focusing on delivering content reliably and securely. This includes optimizing server performance, ensuring data integrity, and maintaining a seamless user experience. Simultaneously, in Mode 2, Netflix harnesses the power of data analytics and machine learning to personalize content recommendations, optimize streaming quality, and forecast viewer preferences. This innovative approach has not only enhanced customer experiences but also allowed Netflix to stay ahead in a highly competitive and rapidly evolving industry.

Amazon: Transforming Retail with Data-Driven Agility

Amazon, a global e-commerce giant, employs the bimodal model to maintain the stability of its core retail operations while continually innovating to meet customer expectations. In Mode 1, Amazon focuses on the stability and efficiency of its e-commerce platform, ensuring seamless transactions and reliable order fulfillment. Meanwhile, in Mode 2, Amazon leverages advanced analytics and artificial intelligence to enhance the customer shopping experience. This includes personalized product recommendations, dynamic pricing strategies, and the use of machine learning algorithms to optimize supply chain logistics. The bimodal model has allowed Amazon to adapt to changing market dynamics swiftly, shaping the future of e-commerce through a combination of stability and innovation.

Airbnb: Personalizing Experiences through Data Agility

Airbnb, a disruptor in the hospitality industry, has embraced the bimodal model to balance the stability of its booking platform with continuous innovation in user experiences. In Mode 1, Airbnb ensures the stability and security of its platform, facilitating millions of transactions globally. In Mode 2, the company leverages data analytics and machine learning to personalize user experiences, providing tailored recommendations for accommodations, activities, and travel destinations. This approach not only enhances customer satisfaction but also allows Airbnb to adapt to evolving travel trends and preferences. The bimodal model has played a pivotal role in Airbnb’s ability to remain agile in a dynamic market while maintaining the reliability essential for its users.

Key Takeaways from Case Studies:

  1. Strategic Balance: Each of these case studies highlights the strategic balance achieved by these organizations through the bimodal model. They effectively manage the stability of core operations while innovating to meet evolving customer demands.
  2. Customer-Centric Innovation: The bimodal model enables organizations to innovate in ways that directly benefit customers. Whether through personalized content recommendations (Netflix), dynamic pricing strategies (Amazon), or tailored travel experiences (Airbnb), these companies use Mode 2 to create value for their users.
  3. Agile Response to Change: The case studies demonstrate how the bimodal model allows organizations to respond rapidly to market changes. Whether it’s shifts in consumer behavior, emerging technologies, or regulatory requirements, the dual approach ensures adaptability without compromising operational stability.
  4. Competitive Edge: By leveraging the bimodal model, these organizations gain a competitive edge in their respective industries. They can navigate challenges, seize opportunities, and continually evolve their offerings to stay ahead in a fast-paced and competitive landscape.

Conclusion

In the contemporary business landscape, where data underpins organisational vitality, the bimodal model emerges as a strategic cornerstone for enterprises grappling with the intricacies of modern data management. Through the harmonious integration of stability and agility, organisations can unveil the full potential inherent in their data resources. This synergy propels innovation, enhances decision-making processes and, fundamentally, positions businesses to achieve a competitive advantage within a dynamic and data-centric business environment. Embracing the bimodal model transcends mere preference; it represents a strategic imperative for businesses aspiring not only to survive but to thrive in the digital epoch.

Also read – “How to Innovate to Stay Relevant”

Decoding the CEO’s Wishlist: What CEOs Seek in Their CTOs

The key difference between a Chief Information Officer (CIO) and a Chief Technology Officer (CTO) lies in their strategic focus and responsibilities within an organisation. A CIO primarily oversees the management and strategic use of information and data, ensuring that IT systems align with business objectives, enhancing operational efficiency, managing risk, and ensuring data security and compliance. On the other hand, a CTO concentrates on technology innovation and product development, exploring emerging technologies, driving technical vision, leading prototyping efforts, and collaborating externally to enhance the organisation’s products or services. While both roles are essential, CIOs are primarily concerned with internal IT operations, while CTOs focus on technological advancement, product innovation, and external partnerships to maintain the organisation’s competitive edge.

In 2017, I’ve written a post “What CEOs are looking for in their CIO” after an inspirational presentation by Simon La Fosse, CEO of Le Fosse Associates, a specialist technology executive search and head-hunter with more than 30 years experience in the recruitment market. The blog post was really well received on LinkedIn resulting in an influencer badge. In this post I am focussing on the role of the CTO (Chief Technology Officer).

In this digital age and ever-evolving landscape of the corporate world, the role of CTO stands as a linchpin for innovation, efficiency, and strategic progress. As businesses traverse the digital frontier, the significance of a visionary and adept CTO cannot be overstated. Delving deeper into the psyche of CEOs, let’s explore, in extensive detail, the intricate tapestry of qualities, skills, and expertise they ardently seek in their technology leaders.

1. Visionary Leadership:

CEOs yearn for CTOs with the acumen to envision not just the immediate technological needs but also the future landscapes. A visionary CTO aligns intricate technological strategies with the overarching business vision, ensuring that every innovation, every line of code, propels the company towards a future brimming with possibilities.

2. Innovation and Creativity:

Innovation is not just a buzzword; it’s the lifeblood of any progressive company. CEOs pine for CTOs who can infuse innovation into the organisational DNA. Creative thinking coupled with technical know-how enables CTOs to anticipate industry shifts, explore cutting-edge technologies, and craft ingenious solutions that leapfrog competitors.

3. Strategic Thinking and Long-Term Planning:

Strategic thinking is the cornerstone of successful CTOs. CEOs crave technology leaders who possess the sagacity to foresee the long-term ramifications of their decisions. A forward-looking CTO formulates and executes comprehensive technology plans, meticulously aligned with the company’s growth and scalability objectives.

4. Profound Technical Proficiency:

The bedrock of a CTO’s role is their technical prowess. CEOs actively seek CTOs who possess not just a surface-level understanding but a profound mastery of diverse technologies. From software development methodologies to data analytics, cybersecurity to artificial intelligence, a comprehensive technical acumen is non-negotiable.

5. Inspirational Team Leadership and Collaboration:

Building and leading high-performance tech teams is an art. CEOs admire CTOs who inspire their teams to transcend boundaries, fostering a culture of collaboration, innovation, and mutual respect. Effective mentoring and leadership ensure that the collective genius of the team can be harnessed for groundbreaking achievements.

6. Exceptional Communication Skills:

CTOs are conduits between the intricate realm of technology and the broader organisational spectrum. CEOs value CTOs who possess exceptional communication skills, capable of articulating complex technical concepts in a manner comprehensible to both technical and non-technical stakeholders. Clear communication streamlines decision-making processes, ensuring alignment with broader corporate goals.

7. Problem-Solving Aptitude and Resilience:

In the face of adversity, CEOs rely on their CTOs to be nimble problem solvers. Whether it’s tackling technical challenges, optimising intricate processes, or mitigating risks, CTOs must exhibit not just resilience but creative problem-solving skills. The ability to navigate through complexities unearths opportunities in seemingly insurmountable situations.

8. Profound Business Acumen:

Understanding the business implications of technological decisions is paramount. CEOs appreciate CTOs who grasp the financial nuances of their choices. A judicious balance between innovation and fiscal responsibility ensures that technological advancements are not just visionary but also pragmatic, translating into tangible business growth.

9. Adaptive Learning and Technological Agility:

The pace of technological evolution is breathtaking. CEOs seek CTOs who are not just adaptive but proactive in their approach to learning. CTOs who stay ahead of the curve, continuously updating their knowledge, can position their companies as trailblazers in the ever-changing technological landscape.

10. Ethical Leadership and Social Responsibility:

In an era marked by digital ethics awareness, CEOs emphasise the importance of ethical leadership in technology. CTOs must uphold the highest ethical standards, ensuring data privacy, security, and the responsible use of technology. Social responsibility, in the form of sustainable practices and community engagement, adds an extra layer of appeal.

In conclusion, the modern CTO is not merely a technical expert; they are strategic partners who contribute significantly to the overall success of the organisation. By embodying these qualities, CTOs can not only meet but exceed the expectations of CEOs, driving their companies to new heights in the digital age.

Transformative IT: Lessons from “The Phoenix Project” on Embracing DevOps and Fostering Innovation

Synopsis

“The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win” is a book by Gene Kim, Kevin Behr, and George Spafford that uses a fictional narrative to explore the real-world challenges faced by IT departments in modern enterprises. The story follows Bill Palmer, an IT manager at Parts Unlimited, an auto parts company on the brink of collapse due to its outdated and inefficient IT infrastructure.

The book is structured around Bill’s journey as he is unexpectedly promoted to VP of IT Operations and tasked with salvaging a critical project, code-named The Phoenix Project, which is massively over budget and behind schedule. Through his efforts to save the project and the company, Bill is introduced to the principles of DevOps, a set of practices that aim to unify software development (Dev) and software operation (Ops).

As Bill navigates a series of crises, he learns from a mysterious mentor named Erik, who introduces him to the “Three Ways”: The principles of flow (making work move faster through the system), feedback (creating short feedback loops to learn and adapt), and continual learning and experimentation. These principles guide Bill and his team in transforming their IT department from a bottleneck into a competitive advantage for Parts Unlimited.

“The Phoenix Project” is not just a story about IT and DevOps; it’s a tale about leadership, collaboration, and the importance of aligning technology with business objectives. It’s praised for its insightful depiction of the challenges faced by IT professionals and for offering practical solutions through the lens of a compelling narrative. The book has become essential reading for anyone involved in IT management, software development, and organisational change.

Learnings

“The Phoenix Project” offers numerous key learnings and benefits for IT professionals, encapsulating valuable lessons in IT management, DevOps practices, and organizational culture. Here are some of the most significant takeaways:

  • The Importance of DevOps: The book illustrates how integrating development and operations teams can lead to more efficient and effective processes, emphasizing collaboration, automation, continuous delivery, and quick feedback loops.
  • The Three Ways:
    • The First Way focuses on the flow of work from Development to IT Operations to the customer, encouraging the streamlining of processes and reduction of bottlenecks.
    • The Second Way emphasizes the importance of feedback loops. Quick and effective feedback can help in early identification and resolution of issues, leading to improved quality and customer satisfaction.
    • The Third Way is about creating a culture of continual experimentation, learning, and taking risks. Encouraging continuous improvement and innovation can lead to better processes and products.
  • Understanding and Managing Work in Progress (WIP): Limiting the amount of work in progress can improve focus, speed up delivery times, and reduce burnout among team members.
  • Automation: Automating repetitive tasks can reduce errors, free up valuable resources, and speed up the delivery of software updates.
  • Breaking Down Silos: Encouraging collaboration and communication between different departments (not just IT and development) can lead to a more cohesive and agile organization.
  • Focus on the Value Stream: Identifying and focusing on the value stream, or the steps that directly contribute to delivering value to the customer, can help in prioritizing work and eliminating waste.
  • Leadership and Culture: The book underscores the critical role of leadership in driving change and fostering a culture that values continuous improvement, collaboration, and innovation.
  • Learning from Failures: Encouraging a culture where failures are seen as opportunities for learning and growth can help organizations innovate and improve continuously.

For IT professionals, “The Phoenix Project” is more than just a guide to implementing DevOps practices; it’s a manifesto for a cultural shift towards more agile, collaborative, and efficient IT management approaches. It offers insights into how IT can transform from a cost centre to a strategic partner capable of delivering significant business value.

Cloud Provider Showdown: Unravelling Data, Analytics and Reporting Services for Medallion Architecture Lakehouse

Cloud Wars: A Deep Dive into Data, Analytics and Reporting Services for Medallion Architecture Lakehouse in AWS, Azure, and GCP

Introduction

Crafting a medallion architecture lakehouse demands precision and foresight. Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) emerge as juggernauts, each offering a rich tapestry of data and reporting services. This blog post delves into the intricacies of these offerings, unravelling the nuances that can influence your decision-making process for constructing a medallion architecture lakehouse that stands the test of time.

1. Understanding Medallion Architecture: Where Lakes and Warehouses Converge

Medallion architecture represents the pinnacle of data integration, harmonising the flexibility of data lakes with the analytical prowess of data warehouses; combined, these form a lakehouse. By fusing these components seamlessly, organisations can facilitate efficient storage, processing, and analysis of vast and varied datasets, setting the stage for data-driven decision-making.

The medallion architecture is a data design pattern used to logically organise data in a lakehouse, with the goal of incrementally and progressively improving the structure and quality of data as it flows through each layer of the architecture. The architecture describes a series of data layers that denote the quality of data stored in the lakehouse. It is highly recommended by Microsoft and Databricks to take a multi-layered approach to building a single source of truth (golden source) for enterprise data products.

This architecture guarantees atomicity, consistency, isolation, and durability as data passes through multiple layers of validations and transformations before being stored in a layout optimised for efficient analytics. The terms bronze (raw), silver (validated), and gold (enriched) describe the quality of the data in each of these layers.

It is important to note that the medallion architecture does not replace other dimensional modelling techniques. Schemas and tables within each layer can take on a variety of forms and degrees of normalisation, depending on the frequency and nature of data updates and the downstream use cases for the data.
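
As a minimal PySpark sketch of the bronze/silver/gold flow described above, the example below assumes a Delta-enabled Spark session (for example on Databricks or Fabric) and uses hypothetical paths and column names; it illustrates the layering pattern rather than prescribing an implementation.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw source data as-is, adding only ingestion metadata.
bronze = (spark.read.option("header", True).csv("/landing/orders/*.csv")
          .withColumn("ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/lakehouse/bronze/orders")

# Silver: validate and standardise (deduplicate, enforce types, drop bad rows).
silver = (spark.read.format("delta").load("/lakehouse/bronze/orders")
          .dropDuplicates(["order_id"])
          .withColumn("order_date", F.to_date("order_date"))
          .filter(F.col("amount").cast("double") > 0))
silver.write.format("delta").mode("overwrite").save("/lakehouse/silver/orders")

# Gold: aggregate into a business-ready table for analytics and reporting.
gold = (silver.groupBy("customer_id", "order_date")
        .agg(F.sum(F.col("amount").cast("double")).alias("daily_spend")))
gold.write.format("delta").mode("overwrite").save("/lakehouse/gold/daily_spend")
```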

2. Data Services

Amazon Web Services (AWS):

  • Storage:
    • Amazon S3: A scalable object storage service, ideal for storing and retrieving any amount of data.
  • ETL/ELT:
    • AWS Glue: An ETL service that automates the process of discovering, cataloguing, and transforming data.
  • Data Warehousing:
    • Amazon Redshift: A fully managed data warehousing service that makes it simple and cost-effective to analyse all your data using standard SQL and your existing Business Intelligence (BI) tools.

Microsoft Azure:

  • Storage:
    • Azure Blob Storage: A massively scalable object storage for unstructured data.
  • ETL/ELT:
    • Azure Data Factory: A cloud-based data integration service for orchestrating and automating data workflows.
  • Data Warehousing
    • Azure Synapse Analytics (formerly Azure SQL Data Warehouse): Integrates big data and data warehousing. It allows you to analyse both relational and non-relational data at petabyte-scale.

Google Cloud Platform (GCP):

  • Storage:
    • Google Cloud Storage: A unified object storage service with strong consistency and global scalability.
  • ETL/ELT:
    • Cloud Dataflow: A fully managed service for stream and batch processing.
  • Data Warehousing:
    • BigQuery: A fully-managed, serverless, and highly scalable data warehouse that enables super-fast SQL queries using the processing power of Google’s infrastructure.

3. Analytics

Google Cloud Platform (GCP):

  • Dataproc: A fast, easy-to-use, fully managed cloud service for running Apache Spark and Apache Hadoop clusters.
  • Dataflow: A fully managed service for stream and batch processing.
  • Bigtable: A NoSQL database service for large analytical and operational workloads.
  • Pub/Sub: A messaging service for event-driven systems and real-time analytics.

Microsoft Azure:

  • Azure Data Lake Analytics: Allows you to run big data analytics and provides integration with Azure Data Lake Storage.
  • Azure HDInsight: A cloud-based service that makes it easy to process big data using popular frameworks like Hadoop, Spark, Hive, and more.
  • Azure Databricks: An Apache Spark-based analytics platform that provides a collaborative environment and tools for data scientists, engineers, and analysts.
  • Azure Stream Analytics: Helps in processing and analysing real-time streaming data.
  • Azure Synapse Analytics: An analytics service that brings together big data and data warehousing.

Amazon Web Services (AWS):

  • Amazon EMR (Elastic MapReduce): A cloud-native big data platform, allowing processing of vast amounts of data quickly and cost-effectively across resizable clusters of Amazon EC2 instances.
  • Amazon Kinesis: Helps in real-time processing of streaming data at scale.
  • Amazon Athena: A serverless, interactive analytics service that provides a simplified and flexible way to analyse petabytes of data where it lives in Amazon S3 using standard SQL expressions (a brief sketch follows this list).
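
For illustration, the minimal sketch below runs an Athena query over data in S3 using the boto3 SDK; the database name, query, and results bucket are hypothetical placeholders and would need to exist in your own account.

```python
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

# All names below are hypothetical: substitute your own database, table and bucket.
execution = athena.start_query_execution(
    QueryString="SELECT customer_id, SUM(amount) AS spend FROM orders GROUP BY customer_id",
    QueryExecutionContext={"Database": "sales_lake"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Poll until the query finishes, then fetch the first page of results.
query_id = execution["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```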

4. Report Writing Services: Transforming Data into Insights

  • Amazon QuickSight: A business intelligence service for creating interactive dashboards and reports.
  • Microsoft Power BI: A suite of business analytics tools for analysing data and sharing insights.
  • Google Data Studio: A free and collaborative tool for creating interactive reports and dashboards.

5. Comparison Summary:

  • Storage: All three providers offer reliable and scalable storage solutions. AWS S3, Azure Blob Storage, and GCS provide similar functionalities for storing structured and unstructured data.
  • ETL/ELT: AWS Glue, Azure Data Factory, and Cloud Dataflow offer ETL/ELT capabilities, allowing you to transform and prepare data for analysis.
  • Data Warehousing: Amazon Redshift, Azure Synapse Analytics, and BigQuery are powerful data warehousing solutions that can handle large-scale analytics workloads.
  • Analytics: Azure, AWS, and GCP are leading cloud service providers, each offering a comprehensive suite of analytics services tailored to diverse data processing needs. The choice between them depends on specific project needs, existing infrastructure, and the level of expertise within the development team.
  • Report Writing: QuickSight, Power BI, and Data Studio offer intuitive interfaces for creating interactive reports and dashboards.
  • Integration: AWS, Azure, and GCP services can be integrated within their respective ecosystems, providing seamless connectivity and data flow between different components of the lakehouse architecture. Azure integrates well with other Microsoft services. AWS has a vast ecosystem and supports a wide variety of third-party integrations. GCP is known for its seamless integration with other Google services and tools.
  • Cost: Pricing models vary across providers and services. It’s essential to compare the costs based on your specific usage patterns and requirements. Each provider offers calculators to estimate costs.
  • Ease of Use: All three platforms offer user-friendly interfaces and APIs. The choice often depends on the specific needs of the project and the familiarity of the development team.
  • Scalability: All three platforms provide scalability options, allowing you to scale your resources up or down based on demand.
  • Performance: Performance can vary based on the specific service and configuration. It’s recommended to run benchmarks or tests based on your use case to determine the best-performing platform for your needs.

6. Decision-Making Factors: Integration, Cost, and Expertise

  • Integration: Evaluate how well the services integrate within their respective ecosystems. Seamless integration ensures efficient data flow and interoperability.
  • Cost Analysis: Conduct a detailed analysis of pricing structures based on storage, processing, and data transfer requirements. Consider potential scalability and growth factors in your evaluation.
  • Team Expertise: Assess your team’s proficiency with specific tools. Adequate training resources and community support are crucial for leveraging the full potential of chosen services.

Conclusion: Navigating the Cloud Maze for Medallion Architecture Excellence

Selecting the right combination of data and reporting services for your medallion architecture lakehouse is not a decision to be taken lightly. AWS, Azure, and GCP offer powerful solutions, each tailored to different organisational needs. By comprehensively evaluating your unique requirements against the strengths of these platforms, you can embark on your data management journey with confidence. Stay vigilant, adapt to innovations, and let your data flourish in the cloud – ushering in a new era of data-driven excellence.

Microsoft Fabric: Revolutionising Data Management in the Digital Age

In the ever-evolving landscape of data management, Microsoft Fabric emerges as a beacon of innovation, promising to redefine the way we approach data science, data analytics, data engineering, and data reporting. In this blog post, we will delve into the intricacies of Microsoft Fabric, exploring its transformative potential and the impact it is poised to make on the data industry.

Understanding Microsoft Fabric: A Paradigm Shift in Data Management

Seamless Integration of Data Sources
Microsoft Fabric serves as a unified platform that seamlessly integrates diverse data sources, erasing the boundaries between structured and unstructured data. This integration empowers data scientists, analysts, and engineers to access a comprehensive view of data, fostering more informed decision-making processes.

Advanced Data Processing Capabilities
Fabric boasts cutting-edge data processing capabilities, enabling real-time data analysis and complex computations. Its scalable architecture ensures that it can handle vast datasets with ease, paving the way for more sophisticated algorithms and in-depth analyses.

AI-Powered Insights
At the heart of Microsoft Fabric lies the power of artificial intelligence. By harnessing machine learning algorithms, Fabric identifies patterns, predicts trends, and provides actionable insights, allowing businesses to stay ahead of the curve and make data-driven decisions in real time.

Microsoft Fabric Experiences (Workloads) and Components

Microsoft Fabric is the evolutionary next step in cloud data management, providing an all-in-one analytics solution for enterprises that covers everything from data movement to data science, real-time analytics, and business intelligence – all in one place. Microsoft Fabric brings together new and existing components from Power BI, Azure Synapse Analytics, and Azure Data Factory into a single integrated environment. These components are then presented in various customised user experiences or Fabric workloads (the compute layer), including Data Factory, Data Engineering, Data Warehousing, Data Science, Real-Time Analytics and Power BI, with OneLake as the storage layer.

  1. Data Factory: Combine the simplicity of Power Query with the scalability of Azure Data Factory. Utilize over 200 native connectors to seamlessly connect to on-premises and cloud data sources.
  2. Data Engineering: Experience seamless data transformation and democratization through our world-class Spark platform. Microsoft Fabric Spark integrates with Data Factory, allowing scheduling and orchestration of notebooks and Spark jobs, enabling large-scale data transformation and lakehouse democratization.
  3. Data Warehousing: Experience industry-leading SQL performance and scalability with our Data Warehouse. Separating compute from storage allows independent scaling of components. Data is natively stored in the open Delta Lake format.
  4. Data Science: Build, deploy, and operationalise machine learning models effortlessly within your Fabric experience. Integrated with Azure Machine Learning, it offers experiment tracking and model registry. Empower data scientists to enrich organisational data with predictions, enabling business analysts to integrate these insights into their reports, shifting from descriptive to predictive analytics.
  5. Real-Time Analytics: Handle observational data from diverse sources such as apps and IoT devices with ease. Real-Time Analytics, the ultimate engine for observational data, excels in managing high-volume, semi-structured data like JSON or text, providing unmatched analytics capabilities.
  6. Power BI: As the world’s leading Business Intelligence platform, Power BI grants intuitive access to all Fabric data. Empowering business owners to make informed decisions swiftly.
  7. OneLake: …the OneDrive for data. OneLake, catering to both professional and citizen developers, offers an open and versatile data storage solution. It supports a wide array of file types, structured or unstructured, storing them in delta parquet format atop Azure Data Lake Storage Gen2 (ADLS). All Fabric data, including data warehouses and lakehouses, automatically store their data in OneLake, simplifying the process for users who need not grapple with infrastructure complexities such as resource groups, RBAC, or Azure regions. Remarkably, it operates without requiring users to possess an Azure account. OneLake resolves the issue of scattered data silos by providing a unified storage system, ensuring effortless data discovery, sharing, and compliance with policies and security settings. Each workspace appears as a container within the storage account, and different data items are organised as folders under these containers. Furthermore, OneLake allows data to be accessed as a single ADLS storage account for the entire organisation, fostering seamless connectivity across various domains without necessitating data movement. Additionally, users can effortlessly explore OneLake data using the OneLake file explorer for Windows, enabling convenient navigation, uploading, downloading, and modification of files, akin to familiar office tasks.
  8. Unified governance and security within Microsoft Fabric provide a comprehensive framework for managing data, ensuring compliance, and safeguarding sensitive information across the platform. It integrates robust governance policies, access controls, and security measures to create a unified and consistent approach. This unified governance enables seamless collaboration, data sharing, and compliance adherence while maintaining airtight security protocols. Through centralised management and standardised policies, Fabric ensures data integrity, privacy, and regulatory compliance, enhancing overall trust in the system. Users can confidently work with data, knowing that it is protected, compliant, and efficiently governed throughout its lifecycle within the Fabric environment.

Revolutionising Data Science: Unleashing the Power of Predictive Analytics

Microsoft Fabric’s advanced analytics capabilities empower data scientists to delve deeper into data. Its predictive analytics tools enable the creation of robust machine learning models, leading to more accurate forecasts and enhanced risk management strategies. With Fabric, data scientists can focus on refining models and deriving meaningful insights, rather than grappling with data integration challenges.

Transforming Data Analytics: From Descriptive to Prescriptive Analysis

Fabric’s intuitive analytics interface allows data analysts to transition from descriptive analytics to prescriptive analysis effortlessly. By identifying patterns and correlations in real time, analysts can offer actionable recommendations that drive business growth. With Fabric, businesses can optimize their operations, enhance customer experiences, and streamline decision-making processes based on comprehensive, up-to-the-minute data insights.

Empowering Data Engineering: Streamlining Complex Data Pipelines

Data engineers play a pivotal role in any data-driven organization. Microsoft Fabric simplifies their tasks by offering robust tools to streamline complex data pipelines. Its ETL (Extract, Transform, Load) capabilities automate data integration processes, ensuring data accuracy and consistency across the organization. This automation not only saves time but also reduces the risk of errors, making data engineering more efficient and reliable.

Elevating Data Reporting: Dynamic, Interactive, and Insightful Reports

Gone are the days of static, one-dimensional reports. With Microsoft Fabric, data reporting takes a quantum leap forward. Its interactive reporting features allow users to explore data dynamically, drilling down into specific metrics and dimensions. This interactivity enhances collaboration and enables stakeholders to gain a deeper understanding of the underlying data, fostering data-driven decision-making at all levels of the organization.

Conclusion: Embracing the Future of Data Management with Microsoft Fabric

In conclusion, Microsoft Fabric stands as a testament to Microsoft’s commitment to innovation in the realm of data management. By seamlessly integrating data sources, harnessing the power of AI, and providing advanced analytics and reporting capabilities, Fabric is set to revolutionize the way we perceive and utilise data. As businesses and organisations embrace Microsoft Fabric, they will find themselves at the forefront of the data revolution, equipped with the tools and insights needed to thrive in the digital age. The future of data management has arrived, and its name is Microsoft Fabric.

Unveiling the Magic of Data Warehousing: Understanding Dimensions, Facts, Warehouse Schemas and Analytics

Data has emerged as the most valuable asset for businesses. As companies gather vast amounts of data from various sources, the need for efficient storage, organisation, and analysis becomes paramount. This is where data warehouses come into play, acting as the backbone of advanced analytics and reporting. In this blog post, we’ll unravel the mystery behind data warehouses and explore the crucial roles played by dimensions and facts in organising data for insightful analytics and reporting.

Understanding Data Warehousing

At its core, a data warehouse is a specialised database optimised for the analysis and reporting of vast amounts of data. Unlike transactional databases, which are designed for quick data insertion and retrieval, data warehouses are tailored for complex queries and aggregations, making them ideal for business intelligence tasks.

Dimensions and Facts: The Building Blocks of Data Warehousing

To comprehend how data warehouses function, it’s essential to grasp the concepts of dimensions and facts. In the realm of data warehousing, a dimension is a descriptive attribute, often used for slicing and dicing the data. Dimensions are the categorical information that provides context to the data. For instance, in a sales context, dimensions could include products, customers, time, and geographic locations.

On the other hand, a fact is a numeric metric or measure that businesses want to analyse. It represents the data that needs to be aggregated, such as sales revenue, quantity sold, or profit margins. Facts are generally stored in the form of a numerical value and are surrounded by dimensions, giving them meaning and relevance.

The Role of Dimensions:

Dimensions act as the entry points to data warehouses, offering various perspectives for analysis. For instance, by analysing sales data, a business can gain insights into which products are popular in specific regions, which customer segments contribute the most revenue, or how sales performance varies over different time periods. Dimensions provide the necessary context to these analyses, making them more meaningful and actionable.

The Significance of Facts:

Facts, on the other hand, serve as the heartbeat of data warehouses. They encapsulate the key performance indicators (KPIs) that businesses track. Whether it’s total sales, customer engagement metrics, or inventory levels, facts provide the quantitative data that powers decision-making processes. By analysing facts over different dimensions, businesses can uncover trends, identify patterns, and make informed decisions to enhance their strategies.

Facts relating to Dimensions:

The relationship between facts and dimensions is often described as a fact table surrounded by one or more dimension tables. The fact table contains the measures or facts of interest, while the dimension tables contain the attributes or dimensions that provide context to the facts.
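To make the fact-and-dimension relationship concrete, here is a minimal sketch in Python using pandas (an assumption; the table and column names are purely illustrative and not taken from any particular warehouse) of a sales fact table joined to its surrounding dimension tables and then aggregated by a dimension attribute.

```python
import pandas as pd

# Dimension tables: descriptive attributes that give the facts their context
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "product_name": ["Widget", "Gadget"],
    "category": ["Hardware", "Hardware"],
})
dim_region = pd.DataFrame({
    "region_key": [10, 20],
    "region_name": ["North", "South"],
})

# Fact table: numeric measures, keyed by the surrounding dimensions
fact_sales = pd.DataFrame({
    "product_key": [1, 1, 2, 2],
    "region_key": [10, 20, 10, 20],
    "sales_revenue": [1200.0, 800.0, 1500.0, 950.0],
    "quantity_sold": [12, 8, 10, 7],
})

# Join the facts to their dimensions, then slice and aggregate by a dimension
sales = (fact_sales
         .merge(dim_product, on="product_key")
         .merge(dim_region, on="region_key"))
revenue_by_region = sales.groupby("region_name")["sales_revenue"].sum()
print(revenue_by_region)
```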

Ordering Data for Analytics and Reporting

Dimensions and facts work in harmony within data warehouses, allowing businesses to organise and store data in a way that is optimised for analytics and reporting. When data is organised using dimensions and facts, it becomes easier to create complex queries, generate meaningful reports, and derive valuable insights. Analysts can drill down into specific dimensions, compare different facts, and visualise data trends, enabling data-driven decision-making at all levels of the organisation.

Data Warehouse Schemas

Data warehouse schemas are essential blueprints that define how data is organised, stored, and accessed in a data warehouse. Each schema has its unique way of structuring data, catering to specific business requirements. Here, we’ll explore three common types of data warehouse schemas—star schema, snowflake schema, and galaxy schema—along with their uses, advantages, and disadvantages.

1. Star Schema:

Use:

  • Star schema is the simplest and most common type of data warehouse schema.
  • It consists of one or more fact tables referencing any number of dimension tables.
  • Fact tables store the quantitative data (facts), and dimension tables store descriptive data (dimensions).
  • Star schema is ideal for business scenarios where queries mainly focus on aggregations of data, such as summing sales by region or time (a query sketch follows the pros and cons below).

Pros:

  • Simplicity: Star schema is straightforward and easy to understand and implement.
  • Performance: Due to its denormalised structure, queries generally perform well as there is minimal need for joining tables.
  • Flexibility: New dimensions can be added without altering existing structures, ensuring flexibility for future expansions.

Cons:

  • Redundancy: Denormalisation can lead to some data redundancy, which might impact storage efficiency.
  • Maintenance: While it’s easy to understand, maintaining data integrity can become challenging, especially if not properly managed.
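As a hedged illustration of the star schema query pattern described above, the following Python sketch uses the standard library sqlite3 module with illustrative table and column names (not drawn from any particular warehouse) to sum sales by region and year across one fact table and two dimension tables.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension tables hold the descriptive attributes
cur.executescript("""
CREATE TABLE dim_region (region_key INTEGER PRIMARY KEY, region_name TEXT);
CREATE TABLE dim_date   (date_key   INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_sales (
    region_key    INTEGER REFERENCES dim_region(region_key),
    date_key      INTEGER REFERENCES dim_date(date_key),
    sales_revenue REAL
);
""")
cur.executemany("INSERT INTO dim_region VALUES (?, ?)",
                [(10, "North"), (20, "South")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(202401, 2024, 1), (202402, 2024, 2)])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(10, 202401, 1200.0), (20, 202401, 800.0),
                 (10, 202402, 1500.0), (20, 202402, 950.0)])

# A typical star schema query: one join per dimension, then aggregate the facts
for row in cur.execute("""
    SELECT r.region_name, d.year, SUM(f.sales_revenue) AS total_revenue
    FROM fact_sales f
    JOIN dim_region r ON f.region_key = r.region_key
    JOIN dim_date   d ON f.date_key   = d.date_key
    GROUP BY r.region_name, d.year
"""):
    print(row)
```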

2. Snowflake Schema:

Use:

  • Snowflake schema is an extension of the star schema, where dimension tables are normalised into multiple related tables.
  • This schema is suitable for situations where there is a need to save storage space and reduce data redundancy.
  • Snowflake schema is often chosen when dealing with hierarchical data or when integrating with existing normalised databases.

Pros:

  • Normalised Data: Reducing redundancy leads to a more normalised database, saving storage space.
  • Easier Maintenance: Updates and modifications in normalised tables are easier to manage without risking data anomalies.

Cons:

  • Complexity: Snowflake schema can be more complex to understand and design due to the increased number of related tables.
  • Performance: Query performance can be impacted due to the need for joining more tables compared to the star schema.

3. Galaxy Schema (Fact Constellation):

Use:

  • Galaxy schema, also known as fact constellation, involves multiple fact tables that share dimension tables.
  • This schema is suitable for complex business scenarios where different business processes have their own fact tables but share common dimensions.
  • Galaxy schema accommodates businesses with diverse operations and analytics needs.

Pros:

  • Flexibility: Allows for a high degree of flexibility in modelling complex business processes.
  • Comprehensive Analysis: Enables comprehensive analysis across various business processes without redundancy in dimension tables.

Cons:

  • Complex Queries: Writing complex queries involving multiple fact tables can be challenging and might affect performance.
  • Maintenance: Requires careful maintenance and data integrity checks, especially with shared dimensions.

Conclusion

Data warehousing, with its dimensions and facts, revolutionises the way businesses harness the power of data. By structuring and organising data in a meaningful manner, businesses can unlock the true potential of their information, paving the way for smarter strategies, improved operations, and enhanced customer experiences. As we move further into the era of data-driven decision-making, understanding the nuances of data warehousing and its components will undoubtedly remain a key differentiator for successful businesses in the digital age.

The choice of a data warehouse schema depends on the specific requirements of the business. The star schema offers simplicity and excellent query performance but may have some redundancy. The snowflake schema reduces redundancy and saves storage space but can be more complex to manage. The galaxy schema provides flexibility for businesses with diverse needs but requires careful maintenance. Understanding the use cases, advantages, and disadvantages of each schema is crucial for data architects and analysts to make informed decisions when designing a data warehouse tailored to the unique demands of their organisation.

Scrum of Scrums

The Scrum of Scrums is a scaled agile framework used to coordinate the work of multiple Scrum teams working on the same product or project. It is a meeting or a communication structure that allows teams to discuss their progress, identify dependencies, and address any challenges that may arise during the development process. The Scrum of Scrums is often employed in large organisations where a single Scrum team may not be sufficient to deliver a complex product or project.

The primary purpose of the Scrum of Scrums is to facilitate coordination and communication among multiple Scrum teams. It ensures that all teams are aligned towards common goals and are aware of each other’s progress.

Here are some key aspects of the Scrum of Scrums:

Frequency:

  • The frequency of Scrum of Scrums meetings depends on the project’s needs, but they are often daily or multiple times per week to ensure timely issue resolution.
  • Shorter daily meetings focussing on progress, next steps and blockers can be supplemented by a longer weekly meeting covering an agenda of all projects and more detailed discussions.

Participants – Scrum Teams and Representatives:

  • In a large-scale project or programme, there are multiple Scrum teams working on different aspects of the product or project.
  • Each Scrum team selects one or more representatives to attend the Scrum of Scrums meeting.
  • These representatives are typically Scrum Masters or team leads who can effectively communicate the status, challenges, and dependencies of their respective teams.
  • Their purpose is to share information about their team’s progress, discuss impediments, and collaborate on solutions.

Meeting Structure & Agenda:

  • The Scrum of Scrums meeting follows a structured agenda that may include updates on team progress, identification of impediments, discussion of cross-team dependencies, reviewing and updating the overall RAID log with associated mitigation action progress, and collaborative problem-solving.
  • A key focus of the Scrum of Scrums is identifying and addressing cross-team dependencies. Teams discuss how their work may impact or be impacted by the work of other teams, and they collaboratively find solutions to minimise bottlenecks and define an overall critical path / timeline for the project delivery.

Tools and Techniques:

  • While the Scrum of Scrums is often conducted through face-to-face meetings, organisations may use various tools and techniques for virtual collaboration, especially if teams are distributed geographically. Video conferencing, collaboration platforms, and digital boards are common aids.

Focus on Coordination:

  • The primary goal of the Scrum of Scrums is to facilitate communication and coordination among the different Scrum teams.
  • Teams discuss their plans, commitments, and any issues they are facing. This helps in identifying dependencies and potential roadblocks early on.

Problem Solving:

  • If there are impediments or issues that cannot be resolved within individual teams, the Scrum of Scrums provides a forum for collaborative problem-solving.
  • The focus is on finding solutions that benefit the overall project, rather than just individual teams.

Scaling Agile:

  • The Scrum of Scrums is in line with the agile principles of adaptability and collaboration. It allows organisations to scale agile methodologies effectively by maintaining the iterative and incremental nature of Scrum while accommodating the complexities of larger projects.

Information Flow & Sharing:

  • The Scrum of Scrums ensures that information flows smoothly between teams, preventing silos of knowledge and promoting transparency across the organisation.
  • The Scrum of Scrums provides a platform for teams to discuss impediments that go beyond the scope of individual teams. It fosters a collaborative environment where teams work together to solve problems and remove obstacles that hinder overall progress.
  • Transparency is a key element of agile development, and the Scrum of Scrums promotes it by ensuring that information flows freely between teams. This helps prevent misunderstandings, duplication of effort, and ensures that everyone is aware of the overall project status.

Adaptability:

  • The Scrum of Scrums is adaptable to the specific needs and context of the organisation. It can be tailored based on the size of the project, the number of teams involved, and the nature of the work being undertaken.

In Summary, the Scrum of Scrums is a crucial component in the toolkit of agile methodologies for large-scale projects. It fosters collaboration, communication, and problem-solving across multiple Scrum teams, ensuring that the benefits of agile development are retained even in complex and extensive projects.

It’s important to note that the Scrum of Scrums is just one of several techniques used for scaling agile. Other frameworks like SAFe (Scaled Agile Framework), LeSS (Large-Scale Scrum), and Nexus also provide structures for coordinating the work of multiple teams. The choice of framework depends on the specific needs and context of the organisation.

The C-Suite

Who they are, what they do, why they exist, and how they add value

In corporate leadership, the C-Suite stands as the command centre, where strategic decisions are made, and the future of the company is shaped. Comprising key executives with specialised roles, the C-Suite plays a crucial role in steering organisations towards success. In this blog post, we’ll delve into the world of the C-Suite, shedding light on the responsibilities and value each role brings to the table.

  1. CEO – Chief Executive Officer

The CEO, or Chief Executive Officer, is the captain of the ship, responsible for charting the company’s course and ensuring its overall success. The CEO sets the vision, mission, and strategy, providing leadership to the entire organisation. They are the ultimate decision-maker, accountable to the board of directors and stakeholders.

  2. CFO – Chief Financial Officer

The CFO, or Chief Financial Officer, is the financial maestro of the C-Suite. Tasked with overseeing the financial health of the organisation, the CFO manages budgets, financial planning, and investment strategies. They play a pivotal role in risk management, ensuring sustainable growth and profitability.

  3. COO – Chief Operating Officer

The COO, or Chief Operating Officer, is the executor of the CEO’s vision. Responsible for day-to-day operations, the COO ensures that the company’s processes and systems align with strategic goals. They focus on efficiency, productivity, and scalability, optimising internal functions for maximum performance.

  4. CIO – Chief Information Officer

In the digital age, the CIO, or Chief Information Officer, holds a critical role. Charged with managing the company’s technology infrastructure, the CIO ensures that information systems align with business objectives. They play a pivotal role in driving innovation and digital transformation.

  5. CHRO – Chief Human Resources Officer

The CHRO, or Chief Human Resources Officer, is the guardian of the company’s most valuable asset—its people. Responsible for talent acquisition, employee development, and creating a positive work culture, the CHRO plays a key role in shaping the organisation’s human capital strategy.

  6. CMO – Chief Marketing Officer

The CMO, or Chief Marketing Officer, is the storyteller-in-chief. Charged with building and promoting the company’s brand, the CMO develops marketing strategies to drive growth and customer engagement. They are instrumental in shaping the company’s public image and market positioning.

  7. CRO – Chief Revenue Officer

The CRO, or Chief Revenue Officer, is the architect of revenue streams. Focused on driving sales and revenue growth, the CRO collaborates with sales, marketing, and other departments to optimise customer acquisition and retention strategies.

  8. CTO – Chief Technology Officer

The CTO, or Chief Technology Officer, is the technology visionary. Tasked with leading technological innovation, the CTO develops and implements technology strategies that align with the company’s business goals. They often play a crucial role in product development and ensuring technological competitiveness.

  9. CLO – Chief Legal Officer

The CLO, or Chief Legal Officer, is the legal guardian of the organisation. Responsible for managing legal risks and ensuring compliance with laws and regulations, the CLO provides legal counsel to the executive team and oversees matters such as contracts, intellectual property, and litigation.

Summary – Cheat sheet

Conclusion

The C-Suite represents a powerhouse of expertise, each member contributing their unique skills to the overall success of the organisation. By understanding the roles and responsibilities of the CEO, CFO, COO, CIO, CHRO, CMO, CRO, CTO, and CLO, we gain insights into the intricate workings of corporate leadership. Together, these leaders form a cohesive unit, steering the ship through the complexities of the business world, adding significant value to the organisation and its stakeholders.

Embracing Fractional Technology Leadership Roles: Unlocking Business Potential

In today’s fast-paced and ever-evolving business landscape, companies are increasingly turning to fractional technology leadership roles to drive innovation, streamline operations, and maintain a competitive edge. But what exactly are these roles, and what benefits do they offer to organisations? Let’s explore.

What are Fractional Technology Leadership Roles?

Fractional technology leadership roles involve hiring experienced tech leaders on a part-time or contract basis to fulfil critical leadership functions without the full-time commitment. These roles can include fractional Chief Information Officers (CIOs), Chief Technology Officers (CTOs), and other senior IT positions. Unlike traditional full-time roles, fractional leaders provide their expertise for a fraction of the time and cost, offering flexibility and specialised knowledge tailored to specific business needs.

Benefits of Fractional Technology Leadership

  1. Cost-Effective Expertise
    • Budget-Friendly: Small and medium-sized enterprises (SMEs) often struggle with the high costs associated with full-time C-suite executives. Fractional leaders provide top-tier expertise at a fraction of the cost, making it financially feasible for businesses to access high-level strategic guidance.
    • No Long-Term Commitment: Companies can engage fractional leaders on a project basis or for a specified period, eliminating the financial burden of long-term employment contracts, benefits, and bonuses.
  2. Flexibility and Scalability
    • Adaptable Engagements: Businesses can scale the involvement of fractional leaders up or down based on project demands, budget constraints, and strategic priorities. This flexibility ensures that companies can adapt to changing market conditions without the rigidity of permanent roles.
    • Specialised Skills: Organisations can tap into a diverse pool of talent with specialised skills tailored to their current needs, whether it’s implementing a new technology, managing a digital transformation, or enhancing cybersecurity measures.
  3. Accelerated Innovation and Growth
    • Fresh Perspectives: Fractional leaders bring fresh ideas and perspectives from their diverse experiences across industries. This can foster innovation and help companies identify new opportunities for growth and improvement.
    • Immediate Impact: With their extensive experience, fractional technology leaders can hit the ground running, delivering immediate value and accelerating the pace of technology-driven initiatives.
  4. Reduced Risk
    • Expert Guidance: Navigating the complexities of technology implementation and digital transformation can be daunting. Fractional leaders provide expert guidance, reducing the risk of costly mistakes and ensuring that projects are executed efficiently and effectively.
    • Crisis Management: In times of crisis or technological disruption, fractional leaders can step in to provide stability, strategic direction, and crisis management expertise, helping businesses navigate challenges with confidence.
  5. Focus on Core Business Functions
    • Delegate Complex Tasks: By entrusting technology leadership to fractional experts, business owners and executives can focus on core business functions and strategic goals, knowing that their technology initiatives are in capable hands.
    • Enhanced Productivity: With dedicated fractional leaders managing tech projects, internal teams can operate more efficiently, leading to enhanced productivity and overall business performance.

Unlock Your Business Potential with renierbotha Ltd

Are you ready to drive innovation, streamline operations, and maintain a competitive edge in today’s dynamic business environment? Look no further than renierbotha Ltd for exceptional fractional technology leadership services.

At renierbotha Ltd, we specialise in providing top-tier technology leaders on a part-time or contract basis, delivering the expertise you need without the full-time commitment. Our experienced fractional CIOs, CTOs, and senior IT leaders bring fresh perspectives, specialised skills, and immediate impact to your organisation, ensuring your technology initiatives are executed efficiently and effectively.

Why Choose renierbotha Ltd?

  • Cost-Effective Expertise: Access high-level strategic guidance at a fraction of the cost.
  • Flexibility and Scalability: Adapt our services to your project demands and strategic priorities.
  • Accelerated Innovation: Benefit from fresh ideas and rapid implementation of technology-driven initiatives.
  • Reduced Risk: Navigate the complexities of technology with expert guidance and crisis management.
  • Enhanced Focus: Delegate complex tech tasks to us, allowing you to concentrate on your core business functions.

Take the Next Step

Don’t let the challenges of technology hold your business back. Partner with renierbotha Ltd and unlock the full potential of fractional technology leadership. Contact us today to discuss how our tailored services can help your organisation thrive.

Contact Us Now

Conclusion

Fractional technology leadership roles offer a compelling solution for businesses seeking high-level expertise without the financial and logistical challenges of full-time executive hires. By leveraging the flexibility, specialised skills, and strategic insights of fractional leaders, companies can drive innovation, accelerate growth, and navigate the complexities of today’s technology landscape with confidence.

Embrace the future of technology leadership and unlock your business’s potential with fractional technology roles.

Experience the future of technology leadership with renierbotha Ltd. Let’s drive your business forward together!

Agile Fixed Price Projects

The Agile fixed price is a contractual model agreed upon by suppliers and customers of IT projects that develop software using Agile methods. The model introduces an initial concept & scoping phase, after which the budget, due date, and the way of steering the scope within the framework are agreed upon. This differs from traditional fixed-price contracts, which usually require a detailed and exact description of the subject matter of the contract in advance.

Fixed price contracts are evil – this is what can often be heard from agilists. On the other hand, these contracts are a reality that many agile teams have to face. But what if we try to embrace and manage them instead of fighting against them? How can a company execute this kind of contract using agile practices to achieve better results with lower risk? This article will try to answer those questions.

Fixed Price, Time and Scope

Fixed price contracts freeze three project factors at once – money, time and scope – but this should not be a problem for agile teams. In fact, time boxing is common agile practice. Limiting money simply makes time boxing work better.

A real problem with fixed price contracts is the scope, which is fixed in terms of what exactly should be built instead of how much we should build.

Why are clients so obsessed with fixing the scope? We understand that they want to know how much they will pay (who does not want to know that) and when they will get the product. The only thing they don’t know, even if they will not always admit it, is what exactly they want as the final product.

The reason for fixing the scope has its roots in:

  • Lack of trust between the contractors.
  • Lack of understanding about how the agile software development methodology and processes work.
  • Misunderstanding what the scope means.

Every fixed price contract has a companion document, the “Requirements Specification” or something similar. Most of the time, working in an Agile way, the business requirements are relatively lightweight, cryptic notes captured on stickies or story boards, not comprehensive Business Requirement Documents (BRDs) pre-approved by the business before development commences. Documented requirements try to reduce the risk of forgetting something important and to set a common understanding of what should be done, but they often provide only an illusion of predictability about what the business actually wants and needs in the final product.

Key wrong assumptions in fixing the scope are:

  • The more detail we include in the requirements and scope definition up front, the better we understand each other.
  • Well-defined scope will prevent changes.
  • A fixed scope is needed to better estimate price and time.

Converting the Fixed Scope into Fixed Budget

In understanding that the main conflict between application of an agile mindset and a fixed price contract lies in the fixed scope, we can now focus on converting the fixed scope into the fixed budget.

A well-defined scope is achieved by capturing business requirements as a set of user stories rather than by providing a detailed specification of requirements. These stories are built into a product backlog. The effort required to deliver each story is estimated using one of many story point techniques, like planning poker.

It is key to understand that a higher level of detail in software requirements specifications means two completely different things to the two parties in a contract. Software companies (vendors / suppliers), responsible for developing applications, will usually focus on technical details, while the company using the software (buying party / customer) is more user focussed and business outcome oriented.

In compiling specifications four key aspects are in play:

  • User stories are a way of expressing requirements that is understandable for both suppliers and customers. The understanding builds trust and a sense of common vision. User stories are quick to write and quick to discard, especially when written on an index card. They are also feature oriented, so they provide a good view of the real scope of a project, and we can compare them with each other in terms of size or effort.
  • Acceptance Criteria, captured for each user story, are a formalised list of requirements that ensures a user story is completed with all scenarios taken into account – they specify the conditions under which a story is fulfilled.
  • Story points, as a way of estimating stories, are units of measure for expressing an estimate of the overall effort required to fully implement a user story or other piece of work on the product backlog. The team will assess the effort to deliver a story against the acceptance criteria and in relation to other stories. Various proven estimation techniques can be adopted by the team; for example, effort can be expressed as a T-shirt size (i.e. Large, Medium, Small). To quantify the effort, each T-shirt size can be assigned a number of story points, e.g. Large = 15 story points, Medium = 5 story points and Small = 2 story points. (See also the section on Estimation below.) The intention of using story points, instead of man-hours, is to lower the risk of underestimating the scope, because story points are by nature relative and focused on the whole scope or on a group of stories, while traditional estimation (usually done in man-hours) tries to analyse each product feature in isolation.
  • Definition of done is another way of building trust and common understanding about the process and all the future plans for the project. It’s usually the first time clients see user stories, and while they may like the way the stories are written, it may not be so obvious what it means to implement a story. Development teams who confirm their definition of done with the client, in conjunction with the acceptance criteria, illustrate that they understand the client’s expectations. Development on a story is completed when the definition of done is achieved, which supports better estimation. In addition, on the client side, the definition of done, in conjunction with the acceptance criteria, sets the criteria for user story acceptance.

Using the above four aspects provides the building blocks to define the scope budget in story points. This story point budget, and not the stories behind it, is the first thing that should be fixed in the contract.
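As a minimal sketch (assuming the illustrative T-shirt-to-points mapping from the Story points bullet above and a purely hypothetical backlog), this is how a story list can be rolled up into the scope budget that gets fixed in the contract:

```python
# Illustrative T-shirt size to story point mapping (from the Story points bullet above)
TSHIRT_POINTS = {"Small": 2, "Medium": 5, "Large": 15}

# Hypothetical product backlog: each user story carries a T-shirt size estimate
backlog = [
    {"story": "As a buyer I can search the catalogue", "size": "Medium"},
    {"story": "As a buyer I can pay by card", "size": "Large"},
    {"story": "As an admin I can export sales reports", "size": "Medium"},
    {"story": "As a buyer I can reset my password", "size": "Small"},
]

# The contract fixes this total, not the individual stories behind it
scope_budget = sum(TSHIRT_POINTS[item["size"]] for item in backlog)
print(f"Scope budget: {scope_budget} story points")  # -> 27 story points
```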

This sets the stage for change.

While we have the scope budget fixed (in terms of story points) we still want to embrace change, the agile way. As we are progressing with the project delivery, and especially during backlog refinement, we have the tools (user stories and points) which we can use to compare one requirement with another. This allows us to refine stories and change requirements along the way within a defined story point budget limit. And if we can stay within that limit, we can also stay within the fixed price and time.

Before Estimation

The hardest part in preparing a fixed price contract is to define the price and schedule that will be fixed based on requirements that are, in most cases, not especially well defined – which is why a well-defined scope is preferable.

How can you prepare the project team (customer & supplier) to provide the best possible initial estimation?

Educate: Meet with your client and describe the way you’re going to work. We need to explain what the stories are all about, how we are going to estimate them and what the definition of done is. We might even need to do that earlier, when preparing an offer for the client’s Request For Proposal (RFP). Explain the agile delivery methodology and how you will use it to derive the proposal.

Capture user stories: This can be arranged as a time-boxed session, usually taking no more than 1 or 2 days. This is long enough to find most of the stories forming the product vision without falling into feature creep. At this point it is also very important to discuss the definition of done, acceptance criteria for stories, iterations and releases with the client.

We need to know:

  • The environment in which stories should be tested (like the number of browsers or mobile platforms, or operating systems)
  • What kind of documentation is required
  • Where should finished stories be deployed so that the client can take a look at them
  • What should the client do (i.e. take part in a demo session)
  • How often do we meet and who participates
  • etc.

These and probably many more project-specific factors will affect the estimation and will set a common understanding about the expectations and quality on both sides. They will also make the estimation less optimistic than it tends to be when only the technical aspects of story implementation are considered by the team.

Estimation

Having discussed with the client a set of stories and a definition of done, we can now start the estimation. This is a quite well-known part of the process. The most important activity here is to engage as many future team members as possible so that the estimation is done collectively. Techniques like planning poker are known to lower the risk of underestimation caused by a single team member’s point of view, especially if this team member is also the most experienced one, which is usually the case when estimations are done by one person. It is also important that the stories are estimated by the people who will actually implement the system.

Apart from T-shirt sizes to express effort estimation, as mentioned under Story points above, the Fibonacci-like scale (1, 2, 3, 5, 8, 13, 20, 40, 100) comes in handy for estimating stories in points. Relative estimation starts with finding a set of the easiest or smallest stories. They will get 1 or 2 points as a base level for further estimation.

In fact, during the initial estimation it is often hard to estimate stories using the lowest values like 1 or 2. The point is, the higher the estimation, the less we know about the story. This is also why estimating in points is easier at this early stage: it is far easier to tell that story A is twice as complicated as story B than to tell that story A will take 25 man-hours to get it Done (remember the definition of done?) and story B will take 54 hours.

This works well even if we choose 3 or 5 point stories as the base level, and if we do that, it will be easier to break them down into smaller stories later during the development phase. Beware, however, of stories of 20, 40 or 100 points. This kind of estimation suggests that we know almost nothing about what is to be implemented, so it should be discussed with the client here and now in a little more detail instead of just happily putting it in the contract.

The result of the estimation is a total number of story points describing the initial scope for a product to be built. This is the number that should be fixed in terms of scope for the contract, not the particular stories themselves.

Deriving the Price and Time

The total number of points estimated from the initial set of stories does not give us the price and time directly. To translate story points into commercial monetary numbers we need to know more about the development team’s makeup, described as the number of differently skilled resources within the team, and the team’s ability to deliver work, which is expressed in an agile KPI referred to as the team’s capacity and/or velocity.

The team’s velocity refers to the pace, expressed in story points per development cycle or sprint, at which a team can deliver work. The team’s capacity is defined by the average number of story points the team can deliver within a development cycle or sprint. An increase in velocity, as a result of increased efficiency and higher productivity, will over time increase the team’s capacity. Understandably, changing the makeup of the team will impact the team’s velocity and capacity. The team’s capacity and velocity are established through experience on previous projects the team has delivered. A mature agile team is characterised by a stable and predictable velocity/capacity.

Let’s use a simple example to demonstrate how the team makeup and velocity are used to determine the project cost and time.

Assume we have:

  • Estimated our stories for a total of 300 story points.
  • The team makeup consists of 5 resources – 3 developers, 1 tester and a team leader.
  • Agile Scrum will be team’s delivery methodology.
  • Experience has shown this team’s capacity/velocity is 30 story points over a development cycle or sprint length of 2 weeks.

Determine the predicted Timeline

Time = <Points> / <Velocity> * <Sprint length>

Thus…

Time = 300 / 30 * 2 = 20 weeks (or 10 sprints)

Many factors during the project may affect the velocity; however, if the team we’re working with is not new, and the project we’re doing is not a great unknown for us, then this number can actually be based on evidence and observations from the past.

Now we may be facing one of the two constraints that the client could want to impose on us in the contract:

  • The client wants the software as fast as we can do it (and preferably even faster)
  • The client wants as much as we can do by the date X (which is our business deadline)

If the calculated time is not acceptable, then the only factor we can change is the team’s velocity. To do that we need to change the team’s makeup and extend the team; however, this does not work in a linear way, i.e. doubling the team size will not necessarily double its velocity, but it should increase the velocity as the team should be able to do more work within a development cycle.

Determine the predicted Price

Calculating the price is based on the makeup of the team and the associated resource/skill set rate card (cost per hour).

The team’s cost per sprint is calculated from the percentage of time or number of hours each resource will spend on the project within a sprint.

For our example, let’s assume:

  • A sprint duration of 2 weeks has 10 working days; working 8 hours per day gives 80h per sprint.
  • Developer 1 will work 100% on the project at a rate of £100 per hour.
  • Developer 2 will work 50% of his time on the project at a rate of £80 per hour.
  • Developer 3 will also work 100% on the project at a rate of £110 per hour.
  • The Team Leader will work 100% on the project at a rate of £150 per hour.
  • The Tester will be 100% on the project at £80 per hour.

The team cost per sprint (cps) will thus be…

Resource cost per sprint (cps) = <hours of resource per sprint> * <resource rate per hour>

  • Developer 1 cps = 80h * £100 = £8,000
  • Developer 2 cps = 40h (50% of 80h) * £80 = £3,200
  • Developer 3 cps = 80h * £110 = £8,800
  • Team Leader cps = 80h * £150 = £12,000
  • Tester cps = 80h * £80 = £6,400

Total team cost per sprint = (sum of the above) = £38,400 per sprint

Project predicted Price = <Number of sprints (from Predicted Timeline calculation)> * <Team cost per sprint>

Project predicted Price = 10 sprints * £38,400 per sprint = £384,000

So the Fix Price Contract Values are:

  • Price: £384,000
  • Time: 20 weeks (10 x 2 week sprints)
  • Scope: 300 Story Points
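The worked example above can be captured in a short Python sketch (the team makeup, rates and velocity are the assumed figures from the example, not a general formula for every project):

```python
# Inputs from the worked example above
total_story_points = 300
velocity_per_sprint = 30        # story points the team completes per sprint
sprint_length_weeks = 2
hours_per_sprint = 80           # 10 working days x 8 hours

# Team makeup: (share of time on the project, rate in GBP per hour)
team = {
    "Developer 1": (1.0, 100),
    "Developer 2": (0.5, 80),
    "Developer 3": (1.0, 110),
    "Team Leader": (1.0, 150),
    "Tester":      (1.0, 80),
}

sprints = total_story_points / velocity_per_sprint              # 10 sprints
time_weeks = sprints * sprint_length_weeks                      # 20 weeks
cost_per_sprint = sum(share * hours_per_sprint * rate
                      for share, rate in team.values())         # £38,400
price = sprints * cost_per_sprint                               # £384,000

print(f"Timeline: {time_weeks:.0f} weeks ({sprints:.0f} sprints)")
print(f"Team cost per sprint: £{cost_per_sprint:,.0f}")
print(f"Fixed price: £{price:,.0f}")
```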

These simplistic calculations are of course just a part of the cost that will eventually get into the contract, but they are also the part that is usually the hardest to define. The way in which these costs are calculated also shows how delivering agile projects can be carried into the contract negotiation environment.

Negotiating on Price

“So why is it so expensive?”, most customers ask.

This is where negotiations actually start.

The only factor a software company is able to change is its man-hour cost rate. It is the rate card that we are negotiating – not the length of our iterations, not even the number of iterations. Developers, contrary to popular belief, have no superhero powers and will not start working twice as fast just because it is negotiated this way. If we say we can be cheaper, it is because we will earn less, not because we will work faster.

The other factor that can influence the price is controlled by the customer – the scope.

Tracking Progress and Budget

Now that we have our contract signed, it is time to actually build the software within the agreed constraints of time and budget.

Delivering your fixed price project in an agile way is not a magic wand that will make all your problems disappear, but if measured correctly it will give you early visibility. That is where project metrics, and more specifically the burndown graphs, come into play. Early visibility provides you with the luxury of early corrective action, ensuring small problems do not turn into large expensive ones.

One such small mistake might be the team velocity used when the project price was calculated. Burndown charts are a very common way of tracking progress in many agile projects. They show the predicted/forecasted rate of work completion (velocity) against the actual velocity to determine if the project is on track.

Figure 1 – Planned scope burndown vs. real progress.

They are good to visualize planned progress versus the reality. For example the burndown chart from Figure 1 looks quite good:

We are a little above the planned trend, but it does not mean that we made a huge mistake when determining our velocity during the contract negotiations. Probably many teams would like their own chart to look like this. But the problem is that this chart shows only two out of three contract factors – scope (presented as a percentage of story points) and time (sprints). So what about money?

Figure 2 – Scope burndown vs budget burndown.

The chart in Figure 2 shows two burndowns – scope and budget. The two trends are expressed here as percentages because there is no other way to compare the two quantities (one calculated in story points and the other in man-hours or money spent).

To track the scope and budget this way we need to:

  • Track the story points completed (done) in each iteration.
  • Track the real time spent (in man-hours) in each iteration.
  • Recalculate the total points in the project into 100% of the scope and draw a scope burndown based on the percentage of the total scope completed.
  • Recalculate the budget fixed in the contract (or its part) into total available man-hours – this is 100% of the budget – and draw a budget burndown based on the percentage of the total budget used to date.

The second chart does not look promising. We are spending more money to stay on track than we expected. This is probably because of using some extra resources to actually achieve the expected team velocity. Having all three factors on one chart makes problems visible, and iteration (sprint) 4 in this example is where we start to talk with the client and agree on mitigating actions, before it is too late.
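A minimal sketch of this tracking, with assumed per-sprint numbers purely for illustration, shows how story points done and man-hours spent are converted into the comparable percentage burndowns described above:

```python
# Assumed figures for illustration only
total_scope_points = 300      # the fixed scope budget in story points
total_budget_hours = 3200     # the fixed budget converted into available man-hours

points_done_per_sprint = [28, 30, 25, 27]       # story points completed each sprint
hours_spent_per_sprint = [330, 360, 400, 420]   # real man-hours spent each sprint

scope_remaining_pct = 100.0
budget_remaining_pct = 100.0
for sprint, (points, hours) in enumerate(
        zip(points_done_per_sprint, hours_spent_per_sprint), start=1):
    scope_remaining_pct -= 100.0 * points / total_scope_points
    budget_remaining_pct -= 100.0 * hours / total_budget_hours
    print(f"Sprint {sprint}: scope remaining {scope_remaining_pct:5.1f}% | "
          f"budget remaining {budget_remaining_pct:5.1f}%")
```

If the budget percentage burns down faster than the scope percentage, as in these assumed numbers, that is the early warning to raise with the client.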

Embracing Change

Agile embraces change, and what we want to do is to streamline change management within the fixed price contract. This has always been the hard part and it still is, but by shifting the focus from detailed requirements analysis to boundary limits early in the process, we can welcome change at any stage of the project.

Remember that earlier in the process we changed the fixed scope into a fixed budget. The 300 story points from the example allow us to exchange the contents of the initial user story list without changing the number of story points. This is one of the most important aspects that we want to achieve with a fixed price contract done the agile way.

The difficulty here is to convince the client that stories can be exchanged because they should be comparable in terms of the effort required to complete them. So if at any point the client has a great new idea that we can express as some new set of stories (worth, for example, 20 points), then it is again up to the client whether we remove stories worth 20 points from the end of our initial backlog to make room for the new ones.

Or maybe the client wants to add another iteration (remember the velocity of 30 points per iteration?). It is quite easy to calculate the price of introducing those new stories, as we have already calculated the cost of a sprint.

What is still most difficult in this kind of contract is finding out during the project that some stories will take longer than expected because they were estimated as epics, and we now know more about them than we did at the beginning. This will not always be the case, because at the same time some stories will actually take less. So again, tracking during contract execution provides valuable information. Talking about problems early helps the negotiation, as we can discuss the actions needed to prevent them instead of discussing rescue plans after a huge and irreversible disaster.

Earning Mutual Trust

All the techniques discussed require one thing to be used successfully with a fixed price contract, and that is trust. But as we know, trust is not earned by describing but by actually doing. Use the Agile principles to demonstrate the doing, to show the progress and to point out the problems early.

With every iteration we want to build value for the client. But what is more important, we focus on delivering the most valuable features first. So, the best way to build the trust of a client might be to divide the contract.

Start small, with some pilot development of 2 or 3 iterations (which will also be fixed price, but shorter). The software delivered must bring an expected value to the client. In fact it must contain some working parts of the key functionalities. The working software proves that you can build the rest. It also gives you the opportunity to verify the first assumptions about the velocity and eventually renegotiate the next part.

The time spent on the pilot development should also be relatively small compared to the scope left to be done. This way, if our clients are not satisfied with the results, they can walk away before it is too late, rather than continuing the contract and eventually failing the project.

Summary

Fixed price contracts are often considered very harmful, and many agile adopters say that we should simply avoid them. But as long as customers request them, they often cannot be avoided, so we need to find ways to make them work for the goal, which is building quality software that demonstrably increases business value and competitive advantage.

I believe that some aspects of a fixed price agile contract are even good and healthy for agile teams, as it touches on the familiar while instilling commercial awareness. Development teams are used to working with delivery targets and business deadlines. That is exactly what the fixed date and price in the contract are – healthy time boxes and boundaries keeping us commercially aware and relevant.

Keep the focus on scope and you can still deliver your agile project within a fixed time and budget.

The intention of this article was not to suggest that agile is some ultimate remedy for solving the problem of fixed price contracts but to show that there are ways to work in this context the agile way.

Case Study: Renier Botha’s Transformational Work at BCA and Constellation Automotive Group

Overview

Renier Botha’s tenure at BCA (British Car Auctions), part of the Constellation Automotive Group, highlights his strategic and operational expertise in leveraging technology to enhance business functions. His initiatives have significantly influenced BCA’s financial and operational landscapes, aligning them with modern e-commerce and compliance frameworks.

Project Objectives

The overarching goal of Botha’s projects at BCA was to enable the financial teams with innovative and integrated cloud-based tools that automate and streamline financial operations and e-commerce. Key objectives included:

  • Enhancing expense management through cloud platforms.
  • Integrating diverse IT estates into a unified service offering.
  • Ensuring compliance with new tax legislation.
  • Streamlining vehicle documentation processes.
  • Improving operational efficiency through technology alignment.

Key Projects and Achievements

1. Deployment of Chrome River Expense Management

Botha managed the enterprise-wide deployment of the Chrome River Expense Management cloud platform. This initiative provided BCA’s financial teams with advanced tools to automate expense reporting and approvals, thereby reducing manual interventions and enhancing operational efficiency.

2. System Integration Strategy with MuleSoft

Under Botha’s guidance, BCA adopted MuleSoft as their API management, automation, and integration toolset. This critical move facilitated the integration of previously disconnected IT estates, creating a cohesive and efficient environment that supported robust service delivery across the organisation.

3. Making Tax Digital Project

Botha played a pivotal role in managing the delivery of the Making Tax Digital project, a key legislative requirement. His leadership ensured that BCA’s systems were fully compliant with new tax regulations, thereby avoiding potential legal and financial repercussions.

4. Vehicle Life Cycle Services Dashboard Project

Another significant achievement was the delivery of the Vehicle Life Cycle Services Dashboard replacement project. This was part of the preparation for an extensive ERP migration aimed at modernising the core operational systems.

5. Integration with VW Financial Services

Botha successfully implemented the integration of VW Financial Services and BCA finance estates. This project enabled the secure automation of vehicle documentation exchanges, which is crucial for maintaining data integrity and streamlining vehicle sales processes.

6. Portfolio Management Office Development

Finally, Botha supported the growth and maturity of BCA’s Portfolio Management Office. He introduced new working practices that aligned technology delivery with business operations, optimising efficiency and effectiveness across projects.

Impact and Outcomes

The initiatives led by Botha have transformed BCA’s financial and operational frameworks. Key impacts include:

  • Increased Operational Efficiency: Automated systems reduced manual workload, allowing staff to focus on more strategic tasks.
  • Enhanced Compliance and Security: Projects like Making Tax Digital and the integration with VW Financial Services ensured that BCA stayed compliant with legislative mandates and enhanced data security.
  • Improved Decision-Making: The new systems and integrations provided BCA’s management with real-time data and analytics, supporting better decision-making processes.

Conclusion

Renier Botha’s strategic vision and execution at BCA have significantly boosted the company’s technological capabilities, aligning them with modern business practices and legislative requirements. His work not only streamlined operations but also set a foundation for future innovations and improvements, demonstrating the critical role of integrated technology solutions in today’s automotive and financial sectors.

Case Study: Renier Botha’s Leadership in the Winning NHS Professionals Tender Bid for Beyond

Introduction

Renier Botha, a seasoned technology leader, spearheaded Beyond’s successful response to a Request for Proposal (RFP) from NHS Professionals (NHSP) for outsourced data services. This case study examines the strategic approaches, leadership, and technical expertise employed by Botha and his team in securing this critical project.

Context and Challenge

NHSP sought to outsource its data engineering services to enhance data science and reporting capabilities. The challenge was multifaceted, requiring a deep understanding of NHSP’s current data operations, stringent data governance and GDPR compliance, and the integration of advanced cloud technologies.

Strategy and Implementation

1. Stakeholder Engagement:
Botha led the initial stages by conducting key stakeholder interviews and meetings to gauge the current state and expectations. This hands-on approach ensured alignment between NHSP’s needs and Beyond’s proposal.

2. Gap Analysis:
By understanding the existing Data Engineering function, Botha identified inefficiencies and gaps. His team offered strategic recommendations for process improvements, directly addressing NHSP’s operational challenges.

3. Infrastructure Assessment:
Botha’s review of the current data processing systems uncovered dependencies that could impact future scalability and integration. This was crucial for designing a solution that was not only compliant with current standards but also adaptable to future technological advancements.

4. Data Governance Review:
Given the critical importance of data security in healthcare, Botha prioritised a thorough review of data governance practices, ensuring all proposed solutions were GDPR compliant.

5. Future State Architecture:
Utilising cloud technologies, Botha proposed a high-level architecture and design for NHSP’s future data estate. This included a blend of strategic and BAU tasks aimed at transforming NHSP’s data handling capabilities.

6. Team and Service Delivery Design:
Botha defined the composition of the Data Engineering team necessary to deliver on NHSP’s objectives. This included detailed job descriptions and a clear division of responsibilities, ensuring a match between team capabilities and service delivery goals.

7. KPIs and Service Levels:
Critical to the project’s success was the definition of KPIs and proposed service levels. Botha’s strategic vision included measurable outcomes to track progress and ensure accountability.

8. RFP Response and Roadmap:
Botha provided a detailed response to the RFP, outlining a clear and actionable data engineering roadmap for the first two years of service, broken down into six-month intervals. This detailed planning demonstrated a strong understanding of NHSP’s needs and showcased Beyond’s commitment to service excellence.

9. Technical Support:
Beyond also supported NHSP with system architecture queries, ensuring that all technical aspects were addressed comprehensively.

Results and Impact

Under Botha’s leadership, Beyond won the NHSP contract by effectively demonstrating a profound understanding of the project requirements and crafting a tailored, forward-thinking solution. The strategic approach not only aligned with NHSP’s operational goals but also positioned them for future scalability and innovation.

Conclusion

Botha’s expertise in data engineering and project management was pivotal in Beyond’s success. By meticulously planning and executing each phase of the RFP response, he not only led his team to a significant business win but also contributed to the advancement of data management practices within NHSP. This project serves as a benchmark in effective stakeholder management, strategic planning, and technical execution in the field of data engineering services.

Empowering Business Growth: The Strategic Integration of IT in Business Development and Sales Initiatives

Why IT should be involved in business development initiatives and new sales opportunities, from the very beginning.

In the dynamic landscape of modern business, the integration of Information Technology (IT) from the inception of business development and sales initiatives is not just a trend but a strategic necessity. This approach transforms IT from a mere support function to a driving force that shapes and propels business strategies. Let’s delve deeper into the reasons why involving IT from the outset is pivotal and explore the substantial benefits it brings to organisations:

Strategic Alignment and Innovation:

Early IT involvement ensures that technological strategies align seamlessly with business objectives. IT professionals, when engaged in the initial planning phases, can identify innovative solutions and technologies that can revolutionise products, services, and customer experiences.

Data-Driven Decision Making and Predictive Analytics:

IT experts excel in harnessing the power of data. By involving them early, businesses gain access to advanced analytics and predictive modeling. These capabilities empower data-driven decision-making, enabling businesses to anticipate market trends, customer preferences, and sales patterns.

Customer-Centric Solutions:

IT plays a pivotal role in creating customer-centric solutions. Through early involvement, businesses can leverage IT expertise to develop personalized interfaces, mobile apps, and e-commerce platforms tailored to customer needs. This customer-focused approach enhances user satisfaction and loyalty.

Operational Efficiency and Process Optimisation:

IT professionals optimise operational processes through automation, streamlining workflows, and integrating various systems. Early IT involvement ensures that business processes are designed with efficiency in mind, reducing manual errors and improving overall productivity.

Scalability and Flexibility:

Scalability is a cornerstone of successful businesses. IT architects systems that are scalable and flexible, allowing businesses to expand seamlessly. By involving IT early, companies can future-proof their solutions, saving costs in the long run and ensuring adaptability to market changes.

Cybersecurity and Compliance:

Security breaches can have devastating consequences. IT experts, when involved in the initial stages, design robust cybersecurity frameworks. They ensure compliance with industry regulations and standards, safeguarding sensitive data and building trust with customers and partners.

Collaborative Culture and Knowledge Sharing:

Early collaboration between IT, business development, and sales fosters a culture of open communication and knowledge sharing. Cross-functional teams collaborate on ideas and solutions, leading to holistic strategies that encompass technical and business aspects.

Continuous Improvement and Feedback Loops:

IT’s involvement from the beginning enables the establishment of feedback loops. Through continuous monitoring and analysis, businesses can gather insights, identify areas of improvement, and adapt strategies swiftly. This iterative approach drives continuous innovation and business growth.

In conclusion, the strategic integration of IT in business development and sales initiatives is a game-changer for organisations aiming to thrive in the digital age. By recognising IT as a core driver of business strategies, companies can harness innovation, enhance customer experiences, optimise operations, and ensure long-term success. Embracing this collaborative approach not only positions businesses as industry leaders but also fosters a culture of innovation and adaptability, crucial elements for sustained growth and competitiveness in today’s challenging business landscape.

Mastering the Art of Risk Management: Navigating Business Uncertainties

In the fast-paced realm of business, uncertainties are inevitable. From market fluctuations to unforeseen challenges, every venture encounters risks that can potentially impact its success. To mitigate these risks effectively, businesses employ a strategic approach called Risk Management. In this blog, we will explore how risks in business are identified, documented within a risk register, and assessed using a risk score matrix, ultimately ensuring a resilient and adaptive business model.

Identifying Risks: The Foundation of Risk Management

Identifying risks is the first crucial step in the risk management process. Businesses need to be vigilant in recognising potential threats that could hinder their objectives. Risks can stem from various sources such as financial instability, technological vulnerabilities, legal issues, or even natural disasters. Through thorough analysis and scenario planning, businesses can anticipate these risks and prepare proactive strategies.

Documenting Risks: The Risk Register

Once risks are identified, it is imperative to document them systematically. The tool commonly used for this purpose is a Risk Register. A Risk Register is a detailed document that compiles all identified risks, their potential impact, and the strategies devised to mitigate them. Each risk is carefully categorised, providing a comprehensive overview for stakeholders. This document serves as a roadmap for risk management efforts, enabling businesses to stay organised and focused on addressing potential challenges. The Risk Register should align with the Risk components within the project RAID Logs. The Risk Register should also be covered as a standard agenda item for Board meetings.

Assessing Risks: The Risk Score Matrix

To prioritise risks within the Risk Register, businesses often employ a Risk Score Matrix. This matrix evaluates risks based on two essential factors: Likelihood and Severity.

  1. Likelihood: This factor assesses how probable it is for a specific risk to occur. Likelihood is usually categorised as rare, unlikely, possible, likely, or almost certain, each with a corresponding numerical value.
  2. Severity: Severity measures the potential impact a risk could have on the business if it materialises. Impact levels may range from insignificant to catastrophic, with corresponding numerical values.

Each of these factors can be rated on a scale from 1 to 5, where Likelihood and Severity respectively can be:

  • 1 – Rare / Negligible
  • 2 – Unlikely / Minor
  • 3 – Possible / Moderate
  • 4 – Likely / Major
  • 5 – Almost Certain / Catastrophic

By combining these two factors, a Risk Score is calculated for each identified risk. The formula typically used is:

Risk Score = Likelihood * Severity

This numerical value indicates the level of urgency in addressing the risk. The risk score can be color coded. An example of a risk score matrix is indicated below.

Risks with higher scores require immediate attention and robust mitigation strategies.
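
As an illustration, here is a minimal Python sketch of how a risk score and an illustrative colour band could be derived from the Likelihood and Severity ratings described above (the band thresholds are assumptions for illustration, not a fixed standard):

```python
# Minimal sketch: scoring a risk using Likelihood x Severity (both rated 1-5).
# The colour-band thresholds below are illustrative assumptions, not a standard.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

def risk_score(likelihood: str, severity: str) -> int:
    """Risk Score = Likelihood * Severity."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_band(score: int) -> str:
    """Map a 1-25 score to an illustrative colour band."""
    if score >= 15:
        return "red (immediate attention)"
    if score >= 8:
        return "amber (mitigation plan required)"
    return "green (monitor)"

score = risk_score("likely", "major")   # 4 * 4 = 16
print(score, risk_band(score))          # 16 red (immediate attention)
```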

Effective Risk Mitigation Strategies

After assessing risks using the Risk Score Matrix, businesses can implement appropriate mitigation strategies. These strategies can include risk avoidance, risk reduction, risk transfer, or risk acceptance, as illustrated in the sketch after this list.

  1. Risk Avoidance: Involves altering business practices to sidestep the risk entirely. For instance, discontinuing a high-risk product or service.
  2. Risk Reduction: Implements measures to decrease the probability or impact of a risk. This might involve enhancing security systems or diversifying suppliers.
  3. Risk Transfer: Shifts the risk to another party, often through insurance or outsourcing. This strategy is common for risks that cannot be avoided but can be financially mitigated.
  4. Risk Acceptance: Acknowledges the risk and its potential consequences without taking specific actions. This approach is viable for low-impact risks or those with high mitigation costs.
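
As a small, hypothetical illustration, the four strategies above could be recorded and suggested against risk register entries along the following lines; the names and score thresholds are assumptions for illustration only:

```python
from enum import Enum

# The four mitigation strategies described above; the mapping rules are illustrative assumptions.
class Mitigation(Enum):
    AVOIDANCE = "Risk Avoidance"
    REDUCTION = "Risk Reduction"
    TRANSFER = "Risk Transfer"
    ACCEPTANCE = "Risk Acceptance"

def suggest_strategy(score: int, insurable: bool = False) -> Mitigation:
    """Very rough first-pass suggestion based on the risk score (1-25)."""
    if score >= 20:
        return Mitigation.AVOIDANCE       # e.g. discontinue the high-risk activity
    if score >= 10:
        return Mitigation.TRANSFER if insurable else Mitigation.REDUCTION
    return Mitigation.ACCEPTANCE          # low-impact risks may simply be accepted

print(suggest_strategy(16, insurable=True).value)   # Risk Transfer
```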

Conclusion

In today’s volatile business environment, mastering the art of risk management is paramount. By diligently identifying risks, documenting them within a structured Risk Register, and assessing them using a Risk Score Matrix, businesses can navigate uncertainties with confidence. A proactive approach to risk management not only safeguards the business but also fosters resilience and adaptability, ensuring long-term success in an ever-changing market landscape. Remember, in the realm of business, preparation is the key to triumph over uncertainty.

The Significance of RAID Logs: Keeping Projects on Course

Navigating Projects with Precision: The In-Depth Guide to RAID Logs

In the intricate tapestry of project management, where uncertainties are the norm and challenges are the companions, a tool that stands out for its efficacy is the RAID log. Comprising Risks, Assumptions, Issues, and Dependencies, a RAID log is more than just a document; it is a strategic asset that can steer a project towards success. In this comprehensive guide, we will explore not only what a RAID log is and why it’s important but also how to compile and maintain it effectively.

Components of a RAID Log: A Closer Look

Risks: Risks in a project context are uncertainties that have the potential to impact project objectives, whether it’s the timeline, budget, or quality of deliverables. These can include technological challenges, market fluctuations, or even human factors like team dynamics.

Assumptions: Assumptions are the foundational beliefs upon which the project is built. These can encompass anything from customer behaviour patterns to market trends. If assumptions change, they can necessitate a reevaluation of the entire project strategy.

Issues: Issues are problems that have already surfaced during the course of the project. They can range from technical glitches to conflicts within the team. Addressing these in a timely manner prevents them from escalating and affecting project progress.

Dependencies: Dependencies highlight the relationships between different project tasks or elements. Understanding these dependencies is vital for proper project sequencing. For example, Task B might be dependent on Task A’s completion.
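
To make the four components concrete, here is a minimal, hypothetical Python sketch of how RAID log entries could be represented in a simple tool; the field names are illustrative assumptions rather than a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Illustrative RAID log entry; field names are assumptions, not a standard.
@dataclass
class RaidEntry:
    category: str          # "Risk", "Assumption", "Issue" or "Dependency"
    description: str
    owner: str
    raised_on: date
    status: str = "Open"   # e.g. Open, Mitigating, Closed
    notes: List[str] = field(default_factory=list)

raid_log: List[RaidEntry] = [
    RaidEntry("Risk", "Key supplier may miss the integration deadline", "PM", date(2024, 3, 1)),
    RaidEntry("Dependency", "Task B cannot start before Task A completes", "Tech Lead", date(2024, 3, 1)),
]

open_risks = [e for e in raid_log if e.category == "Risk" and e.status == "Open"]
```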

The Purpose Unveiled: Why RAID Logs are Indispensable

Centralised Information Hub: A RAID log serves as a central repository, offering a bird’s eye view of the project’s landscape. Having all crucial information in one place enhances project team visibility and coordination.

Proactive Risk Management: By identifying potential risks and uncertainties early, project managers can proactively develop strategies to mitigate these challenges. This anticipatory approach is key to project success.

Informed Decision Making: A well-maintained RAID log empowers project managers and stakeholders to make informed decisions. Whether it’s tweaking project timelines or reallocating resources, decisions are grounded in the reality of the project’s challenges and opportunities.

Transparent Communication: Transparency is the bedrock of effective project management. The RAID log fosters transparent communication among team members, stakeholders, and sponsors. It ensures that everyone is on the same page regarding the project’s progress and challenges.

Creating and Maintaining an Effective RAID Log: A Step-by-Step Approach

Compilation:

  • Identify Risks: Engage with the project team to identify potential risks. Brainstorming sessions and historical data analysis can help in foreseeing possible challenges.
  • Document Assumptions: List down all the assumptions made during the project planning phase. Regularly revisit these assumptions to ensure they are still valid.
  • Track Issues: Implement a robust issue tracking system. Regular team meetings and progress reports can help in identifying and documenting issues as they arise.
  • Map Dependencies: Work closely with team leads and subject matter experts to map out task dependencies accurately. Tools like Gantt charts can be invaluable in visualising these relationships.

Maintenance:

  • Regular Updates: The RAID log is not a one-time creation. It needs regular updates. Schedule periodic reviews to assess the status of identified risks, assumptions, issues, and dependencies.
  • Impact Assessment: Whenever a change request or an unexpected event occurs, assess its impact on the RAID log. New risks or dependencies may emerge, requiring immediate attention.
  • Stakeholder Engagement: Keep stakeholders informed about changes to the RAID log. Their input can provide valuable insights and ensure that all perspectives are considered.
  • Lessons Learned: After the project’s completion, analyse the RAID log retrospectively. Identify what risks materialised, which assumptions held, and how issues were resolved. These insights can be invaluable for future projects.

In conclusion, a well-compiled and meticulously maintained RAID log is a linchpin in the project manager’s toolkit. It encapsulates the essence of project uncertainties, providing a roadmap for navigating through challenges. By understanding the nuances of risks, assumptions, issues, and dependencies, and by actively managing this information, project managers can lead their teams with confidence, ensuring that projects not only survive but thrive in the face of complexity and change.

What makes a good Technical Specification Document


As a software engineer, your primary role is to solve technical problems. Your first impulse may be to immediately jump straight into writing code. But that can be a terrible idea if you haven’t thought through your solution. 

You can think through difficult technical problems by writing a technical spec. Writing one can be frustrating if you feel like you’re not a good writer. You may even think that it’s an unnecessary chore. But writing a technical spec increases the chances of having a successful project, service, or feature that all stakeholders involved are satisfied with. It decreases the chances of something going horribly wrong during implementation and even after you’ve launched your product. 

When developing software with an Agile delivery methodology, your technical specification is a living document: it is continuously updated as you progress through the development sprints and as the specific solution designs and associated technical details are confirmed. Initially the tech spec describes the solution at a high level, making sure all requirements are addressed. As requirements change through the delivery life cycle, or as the technical solution evolves towards accepted working software, the technical specifications are updated accordingly. Every agile story describing a functional piece covers requirements, acceptance criteria, solution architecture and technical specification, and all of these are folded into the evolving technical specification document. At the end of a development project the technical specifications are a valuable reference point for ongoing improvement, development and support.

What is a technical specification document?

A technical specification document outlines how you’re going to address a technical problem by designing and building a solution for it. It’s sometimes also referred to as a technical design document, a software design document, or an engineering design document. It’s often written by the engineer who will build the solution or be the point person during implementation, but for larger projects, it can be written by technical leads, project leads, or senior engineers. These documents show the engineer’s team and other stakeholders what the design, work involved, impact, and timeline of a feature, project, program, or service will be. 

Why is writing a technical spec important?

Technical specs have immense benefits to everyone involved in a project: the engineers who write them, the teams that use them, even the projects that are designed off of them. Here are some reasons why you should write one. 

Benefits to engineers

By writing a technical spec, engineers are forced to examine a problem before going straight into code, where they may overlook some aspect of the solution. When you break down, organise, and time box all the work you’ll have to do during the implementation, you get a better view of the scope of the solution. Because technical specs are a thorough view of the proposed solution, they also serve as documentation for the project, both during the implementation phase and afterwards, to communicate your accomplishments on the project. 

With this well-thought-out solution, your technical spec saves you from repeatedly explaining your design to multiple teammates and stakeholders. But nobody’s perfect; your peers and more seasoned engineers may teach you new things about design, new technologies, engineering practices or alternative solutions that you may not have come across or thought of before. They may also catch edge cases in the solution that you may have neglected, reducing your liability. The more eyes you have on your spec, the better. 

Benefits to a team

A technical spec is a straightforward and efficient way to communicate project design ideas between a team and other stakeholders. The whole team can  collaboratively solve a problem and create a solution. As more teammates and stakeholders contribute to a spec, it makes them more invested in the project and encourages them to take ownership and responsibility for it. With everyone on the same page, it limits complications that may arise from overlapping work. Newer teammates unfamiliar with the project can onboard themselves and contribute to the implementation earlier.  

Benefits to a project

Investing in a technical spec ultimately results in a superior product.  Since the team is aligned and in agreement on what needs to be done through the spec, big projects can progress faster. A spec is essential in managing complexity and preventing scope and feature creep by setting project limits. It sets priorities thereby making sure that only the most impactful and urgent parts of a project go out first. 

Post implementation, it helps resolve problems that cropped up within the project, as well as provide insight in retrospectives and postmortems. The best planned specs serve as a great guide for measuring success and return on investment of engineering time. 

What to do before writing a technical spec

Gather the existing information in the problem domain before getting started. Read over any product/feature requirements that the product team has produced, as well as technical requirements/standards associated with the project. With this knowledge of the problem history, try to state the problem in detail and brainstorm all kinds of solutions you may think might resolve it. Pick the most reasonable solution out of all the options you have come up with. 

Remember that you aren’t alone in this task. Ask an experienced engineer who’s knowledgeable on the problem to be your sounding board. Invite them to a meeting and explain the problem and the solution you picked. Lay out your ideas and thought process and try to persuade them that your solution is the most appropriate. Gather their feedback and ask them to be a reviewer for your technical spec.

Finally, it’s time to actually write the spec. Block off time in your calendar to write the first draft of the technical spec. Use a collaborative document editor that your whole team has access to. Get a technical spec template (see below) and write a rough draft. 

Contents of a technical spec

There are a wide range of problems being solved by a vast number of companies today. Each organization is distinct and creates its own unique engineering culture. As a result, technical specs may not be standard even within companies, divisions, teams, and even among engineers on the same team. Every solution has different needs and you should tailor your technical spec based on the project. You do not need to include all the sections mentioned below. Select the sections that work for your design and forego the rest.

From my experience, there are eight essential parts of a technical spec: front matter, introduction, solutions, further considerations, success evaluation, work execution, deliberation, and end matter. 

1. Cover Page

  • Title 
  • Author(s)
  • Team
  • Reviewer(s)
  • Created on
  • Last updated
  • Epic, ticket, issue, or task tracker reference link

2. Introduction

2.1 Overview, Problem Description, Summary, or Abstract

  • Summary of the problem (from the perspective of the user), the context, suggested solution, and the stakeholders. 

2.2 Glossary  or Terminology

  • New terms you come across as you research your design or terms you may suspect your readers/stakeholders not to know.  

2.3 Context or Background

  • Reasons why the problem is worth solving
  • Origin of the problem
  • How the problem affects users and company goals
  • Past efforts made to solve the problem and why they were not effective
  • How the product relates to team goals, OKRs
  • How the solution fits into the overall product roadmap and strategy
  • How the solution fits into the technical strategy

2.4 Goals or Product and Technical Requirements

  • Product requirements in the form of user stories 
  • Technical requirements

 2.5 Non-Goals or Out of Scope

  • Product and technical requirements that will be disregarded

2.6 Future Goals

  • Product and technical requirements slated for a future time

2.7 Assumptions

  • Conditions and resources that need to be present and accessible for the solution to work as described. 

3. Solutions

3.1 Current or Existing Solution Design

  • Current solution description
  • Pros and cons of the current solution

3.2 Suggested or Proposed Solution Design 

  • External components that the solution will interact with and that it will alter
  • Dependencies of the current solution
  • Pros and cons of the proposed  solution 
  • Data Model or Schema Changes
    • Schema definitions
    • New data models
    • Modified data models
    • Data validation methods
  • Business Logic
    • API changes
    • Pseudocode
    • Flowcharts
    • Error states
    • Failure scenarios
    • Conditions that lead to errors and failures
    • Limitations
  • Presentation Layer
    • User requirements
    • UX changes
    • UI changes
    • Wireframes with descriptions
    • Links to UI/UX designer’s work
    • Mobile concerns
    • Web concerns
    • UI states
    • Error handling
  • Other questions to answer
    • How will the solution scale?
    • What are the limitations of the solution?
    • How will it recover in the event of a failure?
    • How will it cope with future requirements?

3.3 Test Plan

  • Explanations of how the tests will make sure user requirements are met
  • Unit tests
  • Integration tests
  • QA

3.4 Monitoring and Alerting Plan 

  • Logging plan and tools
  • Monitoring plan and tools
  • Metrics to be used to measure health
  • How to ensure observability
  • Alerting plan and tools

3.5 Release / Roll-out and Deployment Plan

  • Deployment architecture 
  • Deployment environments
  • Phased roll-out plan e.g. using feature flags
  • Plan outlining how to communicate changes to the users, for example, with release notes

3.6 Rollback Plan

  • Detailed and specific liabilities 
  • Plan to reduce liabilities
  • Plan describing how to prevent other components, services, and systems from being affected

3.7 Alternate Solutions / Designs

  • Short summary statement for each alternative solution
  • Pros and cons for each alternative
  • Reasons why each solution couldn’t work 
  • Ways in which alternatives were inferior to the proposed solution
  • Migration plan to next best alternative in case the proposed solution falls through

4. Further Considerations

4.1 Impact on other teams

  • How will this increase the work of other people?

4.2 Third-party services and platforms considerations

  • Is it really worth it compared to building the service in-house?
  • What are some of the security and privacy concerns associated with the services/platforms?
  • How much will it cost?
  • How will it scale?
  • What possible future issues are anticipated? 

4.3 Cost analysis

  • What is the cost to run the solution per day?
  • What does it cost to roll it out? 

4.4 Security considerations

  • What are the potential threats?
  • How will they be mitigated?
  • How will the solution affect the security of other components, services, and systems?

4.5 Privacy considerations

  • Does the solution follow local laws and legal policies on data privacy?
  • How does the solution protect users’ data privacy?
  • What are some of the tradeoffs between personalization and privacy in the solution? 

4.6 Regional considerations

  • What is the impact of internationalization and localization on the solution?
  • What are the latency issues?
  • What are the legal concerns?
  • What is the state of service availability?
  • How will data transfer across regions be achieved and what are the concerns here? 

4.7 Accessibility considerations

  • How accessible is the solution?
  • What tools will you use to evaluate its accessibility? 

4.8 Operational considerations

  • Does this solution cause adverse aftereffects?
  • How will data be recovered in case of failure?
  • How will the solution recover in case of a failure?
  • How will operational costs be kept low while delivering increased value to the users? 

4.9 Risks

  • What risks are being undertaken with this solution?
  • Are there risks that once taken can’t be walked back?
  • What is the cost-benefit analysis of taking these risks? 

4.10 Support considerations

  • How will the support team get across information to users about common issues they may face while interacting with the changes?
  • How will we ensure that the users are satisfied with the solution and can interact with it with minimal support?
  • Who is responsible for the maintenance of the solution?
  • How will knowledge transfer be accomplished if the project owner is unavailable? 

5. Success Factors

5.1 Impact

  • Security impact
  • Performance impact
  • Cost impact
  • Impact on other components and services

5.2 Metrics

  • How will you measure success?
  • List of metrics to capture
  • Tools to capture and measure metrics

6. Work Execution

6.1 Work estimates and timelines

  • List of specific, measurable, and time-bound tasks
  • Resources needed to finish each task
  • Time estimates for how long each task needs to be completed

6.2 Prioritization

  • Categorization of tasks by urgency and impact

6.3 Milestones

  • Dated checkpoints when significant chunks of work will have been completed
  • Metrics to indicate the passing of the milestone

6.4 Future work

  • List of tasks that will be completed in the future

7. Deliberation

7.1 Points under Discussion or Dispute

  • Elements of the solution that members of the team do not agree on and need to be debated further to reach a consensus.

7.2 Open Questions and Issues

  • Questions you pose to the team and stakeholders for their input, about matters and issues you do not know the answers to or are unsure about. These may include aspects of the problem you don’t know how to resolve yet. 

8. Related Matters, References & Acknowledgements

8.1 Related Work

  • Any work external to the proposed solution that is similar to it in some way and is worked on by different teams. It’s important to know this to enable knowledge sharing between such teams when faced with related problems. 

8.2 References

  • Links to documents and resources that you used when coming up with your design and wish to credit. 

8.3 Acknowledgments

  • Credit people who have contributed to the design that you wish to recognize.

After you’ve written your technical spec

Now that you have a spec written, it’s time to refine it. Go through your draft as if you were an independent reviewer. Ask yourself what parts of the design are unclear and you are uncertain about. Modify your draft to include these issues. Review the draft a second time as if you were tasked to implement the design just based on the technical spec alone. Make sure the spec is a clear enough implementation guideline that the team can work on if you are unavailable. If you have doubts about the solution and would like to test it out just to make sure it works, create a simple prototype to prove your concept. 

When you’ve thoroughly reviewed it, send the draft out to your team and the stakeholders. Address all comments, questions, and suggestions as soon as possible. Set deadlines to do this for every issue. Schedule meetings to talk through issues that the team is divided on or is having unusually lengthy discussions about on the document. If the team fails to agree on an issue even after having in-person meetings to hash them out, make the final call on it as the buck stops with you. Request engineers on different teams to review your spec so you can get an outsider’s perspective which will enhance how it comes across to stakeholders not part of the team. Update the document with any changes in the design, schedule, work estimates, scope, etc. even during implementation.

Conclusion

Writing technical specs can be an impactful way to improve the chances that your project will be successful. A little planning and a little forethought can make the actual implementation of a project a whole lot easier.  

Be aware of scammers while using email…

Many businesses have had to adapt to new working practices because of the coronavirus (COVID-19) situation. This has often meant an increase in emails and more frequent calls with suppliers, customers, banks and other organisations.
Scammers have been taking advantage of this. Cases are on the increase where fraudsters are calling businesses pretending to be from their phone or internet provider, their bank or even a retailer. They’ll ask for payments, or for staff to download software that then gives them control of that staff member’s device. Some have even taken control of genuine email addresses and used them to request payments, making it more difficult to spot the signs of a scam.
With this in mind, it’s now even more important to have strong, clear processes in place for keeping data safe.

Can you spot a scam?
Even if you know all the hallmarks and what to look out for, with ever-more sophisticated ways to access your data, scams are getting harder to spot. If a fraudster called or emailed you or a member of staff pretending to be a known supplier, would you know it was a scam?
They might even contact a staff member pretending to be you. For example, how can you tell if this email’s genuine?

Put checks and processes in place
To help you and your staff spot fraudulent attempts, here are some tips on the checks and processes you should have in place. Remember – it’s good to have a healthy level of suspicion.

  • If you get an email out of the blue that asks you to click on a link or attachment, don’t do it – even if the sender seems familiar – and even if it appears to be coming from a known email address. Instead, contact the apparent sender using different details that you already know and trust to verify the request.
  • When someone calls unexpectedly, don’t give them any information like personal details, bank details or pins.
  • Never download any software onto your device if you’re asked to – fraudsters can use this to access your personal information, even your bank account. Instead, call them back on a known number to check they’re genuine.
  • You can also search for a number to see if a listed number you’ve been asked to call is genuine.
  • Have a payment-checking process in place. For example, if you receive a request to update the bank details you have on file or get new bank details for a payment, confirm this by calling that person or organisation using details you already have, and not those provided in the request. You should also do this with requests from anyone within your own organisation.
  • Have security policies in place, such as having strong passwords, using an encrypted VPN (virtual private network) when working from home, and using an extra layer of authentication for email and payment processes (such as a unique code texted to your mobile) – and test these processes often.
  • Make sure you and all your staff, regardless of their role, are made aware of the checks and processes regularly.

Solution Design & Architecture (SD&A) – Consider this…

When it comes to the design and architecture of enterprise level software solutions, what comes to mind?

What is Solution Design & Architecture:

Solution Design and Architecture (SD&A) is an in-depth IT scoping and review process that bridges the gap between your current IT environments and technologies and the customer and business needs, in order to deliver maximum return on investment. A proper design and architecture document also captures the approach, methodology and steps required to deliver the solution.

SD&A actually covers two distinct disciplines. Solution Architects, with a balanced mix of technical and business skills, write up the technical design of an environment and work out how to achieve a solution from a technical perspective. Solution Designers put the solution together and price it up with assistance from the architect.

A solutions architect needs significant people and process skills. They are often in front of management, trying to explain a complex problem in layman’s terms. They have to find ways to say the same thing using different words for different types of audiences, and they also need to really understand the business’s processes in order to create a cohesive vision of a usable product.

Solution Architect focuses on: 

  • market opportunity
  • technology and requirements
  • business goals
  • budget
  • project timeline
  • resourcing
  • ROI
  • how technology can be used to solve a given business problem 
  • which framework, platform, or tech-stack can be used to create a solution 
  • how the application will look, what the modules will be, and how they interact with each other 
  • how things will scale for the future and how they will be maintained 
  • figuring out the risk in third-party frameworks/platforms 
  • finding a solution to a business problem

Here are some of the main responsibilities of a solutions architect:

Ultimately, the Solution Architect is responsible for the vision that underlies the solution and the execution of that vision into the solution.

  • Creates and leads the process of integrating IT systems for them to meet an organization’s requirements.
  • Conducts a system architecture evaluation and collaborates with project management and IT development teams to improve the architecture.
  • Evaluates project constraints to find alternatives, alleviate risks, and perform process re-engineering if required.
  • Updates stakeholders on the status of product development processes and budgets.
  • Notifies stakeholders about any issues connected to the architecture.
  • Fixes technical issues as they arise.
  • Analyses the business impact that certain technical choices may have on a client’s business processes.
  • Supervises and guides development teams.
  • Continuously researches emerging technologies and proposes changes to the existing architecture.

Solution Architecture Document:

The Solution Architecture document provides an architectural description of a software solution and application. It describes the system and its features based on the technical aspects, business goals, and integration points. It is intended to describe a solution to the business needs and provides the foundation and map of the solution requirements driving the software build scope.

High level Benefits of Solution Architecture:

  • Builds a comprehensive delivery approach
  • Stakeholder alignment
  • Ensures a longer solution lifespan with the market
  • Ensures business ROI
  • Optimises the delivery scope and associated effectiveness
  • Easier and more organised implementation
  • Provides a good understanding of the overall development environment
  • Problems and associated solutions can be foreseen

Some aspects to consider:

When doing enterprise level solution architecture, build and deployment, a few key aspects come to mind that should be built into the solution by design and not as an afterthought…

  • Solution Architecture should be a continuous part of the overall innovation delivery methodology – Solution Architecture is not a once-off exercise but is embedded in the revolving SDLC. Cyclically evolve and deliver the solution with agility so it can quickly adapt to business change, with solution architecture forming the foundation (map and sanity check) before the next evolution cycle. Combine the best of several delivery methodologies to ensure optimum results in bringing the best innovation to revenue channels in the shortest possible timeframe. Read more on this subject here.
  • People – Ensure the right people with the appropriate knowledge, skills and abilities within the delivery team. Do not forget that people (users and customers) will use the system – not technologists.
  • Risk – as the solution architecture evolves, it will introduce technology and business risks that must be added to the project risk register and mitigated in accordance with the business risk appetite.
  • Choose the right software development tech stack that is well established and easily supported while scalable and powerful enough to deliver a feature-rich solution that can be integrated into complex operational estates. Most tech stacks have solution frameworks that outline key design options and decisions when doing solution architecture. Choosing the right tech stack is one of the most fundamental ways to future-proof the technology solution. You can read more on choosing the right tech stack here.
  • Modular approach – use a service oriented architecture (SOA) model to ensure the solution can be functionally scaled up and down to align with the features required, by using independently functioning modules of macro and micro-services. Each service must be clearly defined with input, process and output parameters that align with the integration standard established for the platform (see the sketch after this list). This SOA approach also assists with overall information security and with fault finding when something goes wrong, and it makes the platform more agile in adapting to continuous business environment and market changes with less overall impact and fewer system changes.
  • Customer data at the heart of a solution – Be clear on Master vs Slave customer and data records and ensure the needed integration between master and slave data within inter-connecting systems and platforms, with the needed security applied to ensure privacy and data integrity. Establish a Single Customer and Data View (single version of the truth) from the design outset. Ensure personally identifiable data is handled within the solution according to the regulations outlined in the Data Protection Act and GDPR, and in line with data anonymisation and retention policy guidelines.
  • Platform Hosting & Infrastructure – What is the intended hosting framework? Will it be private or public cloud, running in AWS or Azure? These are all important decisions that can drastically impact the solution architecture.
  • Scalability – who is the intended audience for the different modules and associated macro services within the solution – how many concurrent users, transactions, customer sessions, reports, dashboards, data imports & processing, data transfers, etc…? As required, ensure the solution architecture accommodates the capability for the system to monitor usage and automatically scale horizontally (more processing/data (hardware) nodes running in parallel without dropping user sessions) and vertically (adding more power to a hardware node).
  • Information and Cyber Security – A tiered architecture ensures physical differentiation between user and customer facing interfaces, system logic and processing algorithms, and the storage components of a solution. Various security precautions, guidelines and best practices should be embedded within the software development by design. This should be articulated within the solution architecture, infrastructure and service software code. Penetration testing and the associated platform hardening requirements should feed back into the solution architecture enhancement as required.
  • Identity Management – Single Sign On (SSO) user management and application roles to assign access to different modules, features and functionality to user groups and individuals.
  • Integration – data exchange, multi-channel user interface, compute and storage components of the platform; how the different components inter-connect through secure connections with each other, with other applications and systems (API and gateway) within the business operations estate, and with external systems.
  • Customer Centric & Business Readiness – from a customer and end-user perspective, what is needed to ensure easy adoption (familiarity) and business ramp-up to establish a competent level of efficiency before the solution is deployed and goes live? UX, UI, UAT, Automated Regression Testing, Training Material, FAQs, Communication, etc…
  • Enterprise deployment – Involvement of all IT and business disciplines, i.e. Business Readiness (covered above), Network, Compute, Cyber Security, DevOps. Make sure non-functional DevOps-related requirements are covered in the same manner as the functional requirements.
  • Application Support – Involve the support team during the product build to ensure they have input and an understanding of the solution, so they can provide SLA-driven support to business and IT operations when the solution goes live. 
  • Business Continuity – what is required from an IT infrastructure and platform/solution capability perspective to ensure the system is always available (online) to enable continuous business operations?
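
To illustrate the modular, service oriented approach mentioned in the list above, here is a minimal, hypothetical Python sketch of a service with clearly defined input, process and output parameters; the service name, fields and calculation are invented for illustration only:

```python
from dataclasses import dataclass

# Hypothetical SOA-style service contract: explicit input, process and output.
# The service name, fields and calculation are illustrative assumptions.

@dataclass
class ValuationRequest:      # input parameters
    vehicle_id: str
    mileage: int

@dataclass
class ValuationResponse:     # output parameters
    vehicle_id: str
    estimated_value: float
    currency: str = "GBP"

class VehicleValuationService:
    """An independently functioning module exposing one well-defined operation."""

    def value_vehicle(self, request: ValuationRequest) -> ValuationResponse:
        # Process: a stand-in calculation; a real service would apply business rules.
        base_value = 20_000.0
        depreciation = min(request.mileage * 0.05, 15_000.0)
        return ValuationResponse(request.vehicle_id, base_value - depreciation)

service = VehicleValuationService()
print(service.value_vehicle(ValuationRequest("VIN123", mileage=42_000)))
```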

Speak to Renier about your solution architecture requirements. With more than 20 years of enterprise technology product development experience, we can support your team toward delivery excellence.


12 Useful Psychological Hacks

#1. If you want to know about something from someone, ask them a question, and when they are done answering, keep silent and maintain eye contact. They will tell you some more, almost everything.

#2. When you try to convince someone over something, make sure they are sitting and you are standing. This makes them believe you sooner.

#3. The key to confidence is walking into a room and assuming that everyone already likes you.

#4. Refer to people you’ve just met by their name. People love being referred to by their name and it will establish a sense of trust and friendship right away. Example: “Nice to meet you Alex. So, Alex, how do you know John?” And continue to use their name throughout the conversation.

#5. If someone is attracted to you, their eyes start blinking more than usual during a conversation with you.

#6. Spot the difference between a fake smile and a real one. You can find out if someone is smiling for real or faking it by looking at their eyes. Wrinkles form near eye corners when the smile is genuine.

#7. Pay attention to people’s feet. To know if someone is interested in a conversation look at their feet, if they are pointing towards you, they are. If they are pointing sideways or any other direction, they aren’t. Feet don’t lie.

#8. When at a party or a meeting, crack a joke and observe the people who are laughing around you. People who feel close to each other will be looking at each other. This is useful for discerning friendships and other relationships.

#9. The life hack to make people do what you want them to do: offer someone a choice instead of a command. For example, instead of telling a toddler to drink their milk, ask which mug they would like to drink it from. This gives the person a sense of control and hence produces a higher chance of a better outcome.

#10. How to win an argument? If the person you are arguing with loses their temper and starts shouting, the natural human tendency is to shout back. DON’T! Stay calm and reply quietly. Try it! It works.

#11. Mirror people’s body language to build up trust. If you subtly mimic the body language of the person you’re talking to, you can effectively build up trust with them. By mirroring the way they speak and how they move, they’ll like you more, because, to them, it will seem as if the two of you are quite compatible.

#12. Inception. To plant the seed of an idea in someone’s mind, ask them not to think of a particular thing at all. Let’s say I ask you NOT to think about motorbikes. What are you thinking of now?

RPA – Robotic Process Automation

Robotic process automation (RPA), also referred to as software robots, is a form of business process automation (BPA) – also known as Business Automation or Digital Transformation – where complex business processes are automated using technology-enabled tools, harnessing the power of Artificial Intelligence (AI).

Robotic process automation (RPA) can be a fast, low-risk starting point for automating repetitive processes that depend on legacy systems. Software bots can pull data from these manually operated systems (most of the time without an API) into digital processes, ensuring faster, more efficient and more accurate (less user error) outcomes. 

Workflow vs RPA

In traditional workflow automation tools, a system developer produces a list of actions/steps to automate a task and defines the interface to the back-end system using either internal application programming interfaces (APIs) or a dedicated scripting language. RPA systems, in contrast, compile the action list by watching the user perform that task in the application’s graphical user interface (GUI), and then perform the automation by repeating those tasks directly in the GUI, as if it were being operated manually.
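
As a rough, hypothetical sketch of that difference, a workflow tool might push data straight into a back-end system through its API, while an RPA bot replays the same data entry through the GUI (for example with a library such as pyautogui). The endpoint, field names and screen coordinates below are invented for illustration:

```python
import requests   # API-based workflow automation
import pyautogui  # GUI-driven RPA: replays keyboard and mouse actions

invoice = {"supplier": "Acme Ltd", "amount": 1250.00, "currency": "GBP"}

def post_via_api() -> None:
    """Workflow automation: talk to the back-end system through its API (hypothetical endpoint)."""
    response = requests.post("https://erp.example.com/api/invoices", json=invoice, timeout=30)
    response.raise_for_status()

def type_via_gui() -> None:
    """RPA: repeat the user's GUI actions when no API is available (coordinates assumed)."""
    pyautogui.click(400, 300)                           # focus the 'Supplier' field
    pyautogui.write(invoice["supplier"], interval=0.05)
    pyautogui.press("tab")                              # move to the 'Amount' field
    pyautogui.write(f"{invoice['amount']:.2f}", interval=0.05)
    pyautogui.press("enter")                            # submit the form
```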

Automated Testing vs RPA

RPA tools have strong technical similarities to graphical user interface testing tools. Automated testing tools also automate interactions with the GUI by repeating a set of actions performed by a user. RPA tools differ from such systems in that they allow data to be handled in and between multiple applications, for instance, receiving an email containing an invoice, extracting the data, and then typing it into a financial accounting system.

RPA Utilisation

Used the right way, though, RPA can be a useful tool in your digital transformation toolkit. Instead of wasting time on repetitive tasks, your people are freed up to focus on customers and their subject expertise, bringing products & services to market quicker and delivering customer outcomes faster – all of which adds up to real, tangible business results.

Now, let’s be honest about what RPA doesn’t do – It does not transform your organisation by itself, and it’s not a fix for enterprise-wide broken processes and systems. For that, you’ll need digital process automation (DPA).

Gartner’s Magic Quadrant: RPA Tools

The RPA market is rapidly growing as incumbent vendors jockey for market position and evolve their offerings. In the second year of this Magic Quadrant, the bar has been raised for market viability, relevance, growth, revenue and how vendors set the vision for their RPA offerings in a fluid market.

Choosing the right RPA tool for your business is vital. The 16 vendors that made it into the 2020 Gartner report are marked in the appropriate quadrants below.

The Automation Journey

To stay in the race, you have to start fast. Robotic process automation (RPA) is non-invasive and lightning fast. You see value and make an immediate impact.

Part of the journey is not just making a good start with RPA implementations but to put the needed governance around this technology enabler. Make sure you can maintain the automated processes to quickly adapt to changes, integrate with new applications, align with continuously changing business processes while making sure that you can control the change and clearly communicate it to all needed audiences.

To ensure that you continuously monitor RPA performance, you must be able to measure success. Data is gathered throughout the RPA journey and then converted through analytics into meaningful management information (MI) – MI that enables quick and effective decisions. That’s how you finish the journey.

Some end-to-end RPA tools cover most of the above change management and business governance aspects – keep that in mind when selecting the right tool for your organisation.

So, do you want to stay ahead of your competition? Start by giving your employees robots that help them throughout the day.

Give your employees a robot

Imagine if, especially in the competitive and demanding times we live today, you could give back a few minutes of time of every employee’s day. You can if you free them from wrangling across systems and process siloes for information. How? Software robots that automate the desktop tasks that frustrate your people and slow them down. These bots collaborate with your employees to bridge systems and process siloes. They do work like tabbing, searching, and copying and pasting – so your people can focus on your customers.

RPA injects instant ROI into your business.


Seven Coaching Questions

Question 1: “What’s on your mind?” 

A good opening line can make all the difference (just ask Charles Dickens, the Star Wars franchise, or any guy in a bar). The Kickstart Question starts fast and gets to the heart of the matter quickly. It cuts to what’s important while side stepping stale agendas and small talk. 

Question 2: “And what else?” 

The AWE Question keeps the flame of curiosity burning. “And what else?” may seem like three small words, but it’s actually the best coaching question in the world. That’s because someone’s first answer is never the only answer — and rarely the best answer. There are always more answers to be found and possibilities to be uncovered. Equally as important, it slows down the question asker’s “advice monster” — that part of every manager that wants to leap in, take over, and give advice/be an expert/solve the problem. 

Question 3: “What’s the real challenge here for you?” 

This is the Focus Question. It gets to the essence of the issue at hand. This question defuses the rush to action, which has many people in organizations busily and cleverly solving the wrong problems. This is the question to get you focused on solving the real problem, not just the first problem. 

The first three questions combine to form a powerful script for any coaching conversation, whether it’s a formal performance review or a casual water-cooler chat. Start fast and strong, provide the opportunity for the conversation to deepen, and then bring things into focus with the next questions. 

Question 4: “What do you want?” 

This is the Foundation Question. It’s trickier than you think to answer, and many disagreements or dysfunctional relationships will untangle with this simple but difficult exchange: “Here’s what I want. What do you want?” It’s a basis for an adult relationship with those you work with, and a powerful way to understand what’s at the heart of things. 

Question 5: “How can I help?” 

It might come as a surprise that sometimes managers’ desire to be helpful can actually have a disempowering effect on the person being helped. This question counteracts that in two ways. First, it forces the other person to make a clear request, by pressing them to get clear on what it is they want or need help with. Second, the question works as a self-management tool to keep you curious and keep you lazy — it prevents you from leaping in and beginning things you think people want you to do. 

Question 6: “If you’re saying yes to this, what are you saying no to?” 

If you’re someone who feels compelled to say “yes” to every request or challenge, then this question is for you. Many of us feel overwhelmed and overcommitted; we’ve lost our focus and spread ourselves too thin. That’s why you need to ask this Strategic Question. A “yes” without an attendant “no” is an empty promise.

Question 7: “What was most useful for you?” 

Your closer is the Learning Question. It helps finish the conversation strong, rather than just fading away. Asking “What was most useful for you?” helps to reinforce learning and development. It helps the other person identify the value in the conversation (something they’re likely to miss otherwise), and you get the bonus of useful feedback for your next conversation. You’re also framing every conversation with you as a useful one, something that will build and strengthen your reputation. 

From the book: The Coaching Habit: Say Less, Ask More & Change the Way You Lead Forever

Digital Strategy & the Board

Digital Strategy is a plan that uses digital resources to achieve one or more objectives. With technology changing at a very fast pace, organisations have many digital resources to choose from.

Digital Resources can be defined as materials that have been conceived and created digitally or by converting analogue materials to a digital format for example:

  • Utilising the internet for commerce (web-shops, customer service portals, etc…)
  • Secure working for all employees from anywhere via VPN
  • Digital documents, scanning paper copies and submitting online correspondence to customers i.e. online statements and payment facilities via customer portals
  • Digital resources via Knowledge Base, Wiki, Intranet site and Websites
  • Automation – use digital solutions like robotics and AI to complete repetitive tasks more efficiently
  • Utilising social media for market awareness, customer engagement and advertising

A Digital Strategy is typically a plan that helps the business transform its course of action, operations and activities into digital form by utilising available, applicable technology.

Many directors know that digital strategies, and their related spending, can be difficult to understand. From blockchain and virtual reality to artificial intelligence, no business can afford to fall behind with the latest technological innovations that are redefining how businesses connect with their customers, employees, and a myriad of other stakeholders. Read this post that covers “The Digital Transformation Necessity“…

As a Board Director what are the crucial factors that the Board should consider when building a digital strategy?

Here are five critical aspects, in more detail, and the crucial things to be conscious of when planning a digital transformation strategy as part of a board.

Stakeholders

A stakeholder, by definition, is usually an individual or a group impacted by the outcome of a project. While in previous roles you may have worked with stakeholders at senior management level, when planning a digital strategy, it’s important to remember that your stakeholders could also include customers, employees or anyone that could be affected by a new digital initiative.

Digital strategies work from the top down: if you’re looking to roll out a digital transformation project, you need to consider how it will affect every person inside or outside of your business.

Investment

Digital transformation almost always involves capital and technology-intensive investments. It is not uncommon for promising transformation projects to stall because of a lack of funds, or due to technology infrastructure that cannot cope with increased demands.

Starting a budgeting process right at the start of planning a digital transformation project is essential. This helps ensure that the scope of a project does not grow beyond the capabilities of an enterprise to fund it. A realistic budgeting and funding approach is crucial because a stalled transformation project creates disruption, confusion and brings little value to a business.

Communications

From the get-go, any digital strategy, regardless of size, should be founded on clear and constant communication between all stakeholders involved in a project. This ensures everyone is in the loop on the focus of the project, their specific roles within it, and which processes are going to change. In addition, continuous communication helps build a spirit of shared success and ensures everyone has the information they need to address any frustrations or challenges that may occur as time passes. When developing an effective communication plan, Ian’s advice is to hardly mention the word digital at all.

The best digital strategies explain what digital can do and also explain the outcomes. Successful communication around digital strategies uses language that everyone can understand, plain English, no buzzwords, no crazy acronyms and no silly speak.

Also read “Effective Leadership Communication” which covers how you can communicate effectively to ensure that everyone in the team is on the same page.

Technology

While there are many technologies currently seeing rapid growth and adoption, it doesn’t necessarily mean that you will need to implement all of them in your business. The choice of technology depends upon the process you are trying to optimise. Technology, as a matter of fact, is just a means to support your idea and the associated business processes.

People often get overwhelmed with modern technologies and try to implement all of them in their current business processes. The focus should be on finding the technologies that rightly fit your business objectives and implementing them effectively.

Never assume that rolling out a piece of technology is just going to work. When embarking on a digital project, deciding what not to do is just as important as deciding what to do. Look at whether a piece of technology can actually add value to your business or if it’s just a passing trend. Each digital project should hence be presented to the Board with a business case that outlines the business value, return on investment and the associated benefits and risks.

Measurement

No strategy is complete without a goal and a Digital Strategy is no different. To measure the effectiveness of your plan you will need to set up some key performance indicators (KPIs). These metrics will demonstrate the effectiveness of the plan and will also guide your future decision making. You will need to set up SMART goals that have clear, achievable figures along with a timeline. These goals will guide and optimise the entire execution of a transformation project and ensure that the team does not lose focus.

Any decent strategy should say where we are now, where we want to get to and how we’re going to get there, but also, more importantly, how we are going to monitor and track our progress.
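
As a simple illustration of tracking KPIs against baseline and target figures, here is a minimal Python sketch; the KPI names and numbers are purely illustrative assumptions, not recommended metrics:

  # Minimal sketch: tracking digital-strategy KPIs against targets.
  # The KPI names, baselines, targets and actuals are illustrative assumptions.
  kpis = {
      "online_sales_share_pct":       {"baseline": 12, "target": 30, "actual": 21},
      "avg_process_cycle_time_hrs":   {"baseline": 48, "target": 24, "actual": 36},
      "customer_portal_adoption_pct": {"baseline": 5,  "target": 60, "actual": 25},
  }
  for name, kpi in kpis.items():
      # Progress = how far the actual figure has moved from the baseline towards the target.
      progress = (kpi["actual"] - kpi["baseline"]) / (kpi["target"] - kpi["baseline"])
      print(f"{name}: {progress:.0%} of the way from baseline to target")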

Also Read

Risk Management – for NEDs

Arguably the most significant adjustment to the NED role over the past seven years is that all NEDs must now be well versed in identifying and managing all forms of risk – operational, financial and reputational…

As a Chairman once described: “Risk is a massive issue now: you need to understand the risks and be clear about what the board is doing about mitigating those risks.”

So, how can you ensure that risks are being articulated appropriately, and how can you probe into how risks are being mitigated, irrespective of whether risk management is well established within an industry or not? In the first part of this article I give some steer on how you can assess current risk management practices (governance), and the latter part covers some best practices.

Risk Maturity

If not already done within the company, you could do a Risk Maturity Assessment which gives an indication of the organisation’s engagement with risk management.

There are various models, usually with five levels of maturity (see the 5 Level Maturity Model in the diagram below): from an immature Level 1 organisation, where there are no formal risk management policies, processes or associated activities, tools or techniques; through a Level 2 managed organisation, where policies are in place but risk reviews are generally reactive; all the way up to the mature or ‘risk intelligent’ Level 5 enterprise, where the risk management tone is set at the top and built into decision making, with risk management activities proactively embedded at all levels of the organisation.

Maturity - 5 Levels

     5 Level Maturity Model 

The outcome of such an assessment will give you a clear indication of the risk management maturity level of the organisation. Depending on how that aligns with the Shareholders’ and Board’s expected level, the needed change actions can be initiated to mature the organisation to that level. It will also give you a measure of clarity on the rigour of process and review that is likely to have gone into the risk reporting that you see as a Board.

Risk Score/Rating Matrix

As risks are identified, logged in the Risk Register and then assessed based on the likelihood of them happening and the impact to the business if they do, a Risk Scoring Matrix (preferably with a 5-point scale, as per the diagram below) is very useful to assign a Risk Score to each risk.

The higher the score, the higher the priority of mitigating the risk should be.

RISK Matrix

Risk Score Matrix
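
To illustrate the scoring logic behind such a matrix, here is a minimal Python sketch; the risk names, ratings and target scores are illustrative assumptions rather than a real register:

  # Minimal sketch of 5-point risk scoring (likelihood x impact), with a target score
  # representing the Board's risk appetite. All entries are illustrative assumptions.
  def risk_score(likelihood, impact):
      # Both inputs are on a 1-5 scale, so scores range from 1 (negligible) to 25 (critical).
      return likelihood * impact

  register = [
      # (risk, likelihood, impact, target score)
      ("Key supplier failure",      3, 4, 6),
      ("Data breach",               2, 5, 5),
      ("Regulatory non-compliance", 2, 4, 4),
  ]

  for name, likelihood, impact, target in register:
      current = risk_score(likelihood, impact)
      status = "above appetite, mitigation needed" if current > target else "within appetite"
      print(f"{name}: current score {current}, target {target} ({status})")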

As a NED you need to assess the completeness of the Key Risks in the Risk Register. Engaging with the executives prior to board meetings goes a long way towards getting input and a feel for the risks existing on the floor (day-to-day running/operations) of the business. You should also ask: is there something you are talking about in every meeting that is either not on the risk register, or is rated as a low risk? If so, you need to explore why you are talking about it as a Board but management are not giving it greater focus.

Risk Heat Chart

A heat chart (as per the diagram below) enables a holistic view of risks, with high-scoring risks in the top right corner (coloured red) and low risks in the bottom left corner (coloured green).

   Risk Heat Map

For a board to get an overview of what the key risks are, I don’t think you can beat a heat chart.
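
As a rough illustration of how those scores translate onto the chart, here is a minimal Python sketch that lays a handful of hypothetical risks out on a 5x5 grid, with high impact and high likelihood landing in the top right:

  # Minimal sketch: placing risks on a 5x5 heat grid. The labels and ratings are
  # illustrative assumptions; a real chart would also colour the cells red/amber/green.
  risks = {"A": (4, 5), "B": (2, 2), "C": (5, 3)}   # label -> (likelihood, impact)

  grid = [["." for _ in range(5)] for _ in range(5)]
  for label, (likelihood, impact) in risks.items():
      # Row 0 holds the highest impact, so high-scoring risks appear in the top right.
      grid[5 - impact][likelihood - 1] = label

  print("impact 5 -> 1 (top to bottom), likelihood 1 -> 5 (left to right)")
  for row in grid:
      print(" ".join(row))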

As a NED, you can use this to sense check: Are the risks in the top quadrants, the Red Risks, the ones that the Board feel are the highest risk? Are you talking about these risks regularly and challenging the business on what mitigating actions they are doing to reduce them?

Approach on Risk Review

The popular parlance these days is a ‘deep dive’ into the highest risks, usually undertaken by the Audit Committee.

Apart from the ‘deep dive’ into risks usually undertaken by the Audit Committee, you, as a NED, will want to do your own exploring; below is an approach…

1. Current Risk Score

What is the justification for the current rating – does this feel right? Impact should be measured by the potential effect of the risk on strategic objectives, and is usually quite easy to define, but likelihood can be more subjective.

Also known as the mitigated risk rating, the current rating should recognise mitigations or controls that are already in place, and how effective these are.

2. Target Risk Score

What is a reasonable target risk rating for this risk, i.e. where are we trying to get to?

As a Board, you need to set the risk appetite (which equates to target risk ratings).  This may vary by the type of risk, for example, targeting a very low risk rating might be necessary on something that is a matter of compliance or safety, but in commercial matters, the trade-off between risk and reward needs to be considered, so a higher risk appetite is likely to be acceptable.

There won’t be a limitless budget to spend on mitigating every risk to a minimal level, so as a Board you will have to decide what level of risk you are comfortable with; and where the balance sits between reducing the risk and the cost of mitigation.  Why would you spend more on mitigations than the financial impact of the risk crystallising?

3. Mitigating actions

How are you going to get to your target level of risk?  Planned mitigating actions should drive the risk rating to its target level.  This is a focus area for audit committee deep dives – what actions are planned, and will they be sufficient to bring you to your target risk rating?  Progress on these actions should be monitored regularly – if there is no progress, ask whether this risk is being taken seriously enough, or whether it is not as big a risk as you first thought.

Good risk management should aid decision making, avoid or minimise losses, but also identify opportunities.

Let’s look now into Risk Mitigation in more detail…

Approach on Risk Mitigation

Risk mitigation can be defined as taking steps to reduce adverse effects and impact to the business while reducing the likelihood of the risk.

There are four types of risk mitigation strategies, which are particularly relevant to Business Continuity and Disaster Recovery. When mitigating risk, it’s important to develop a strategy that closely relates to and matches your company’s risk profile.

four types of risk mitigation

Risk Acceptance

Risk acceptance does not reduce any effects; however, it is still considered a strategy. This strategy is a common option when the cost of other risk management options such as avoidance or limitation may outweigh the cost of the risk itself. A company that doesn’t want to spend a lot of money on avoiding risks that do not have a high possibility of occurring will use the risk acceptance strategy.

Risk Avoidance

Risk avoidance is the opposite of risk acceptance. It is the action that avoids any exposure to the risk whatsoever. It’s important to note that risk avoidance is usually the most expensive of all risk mitigation options.

Risk Limitation/Reduction

Risk limitation is the most common risk management strategy used by businesses. This strategy limits a company’s exposure by taking some action. It is a strategy employing a bit of risk acceptance along with a bit of risk avoidance or an average of both. An example of risk limitation would be a company accepting that a disk drive may fail and avoiding a long period of failure by having backups.

Risk Transference

Risk transference involves handing risk off to a willing third party. For example, numerous companies outsource certain operations such as customer service, payroll services, etc. This can be beneficial for a company if a transferred risk is not a core competency of that company. It can also be used so a company can focus more on its core competencies.

All four of these risk mitigation strategies require monitoring. Vigilance is needed so that you can recognise and interpret changes to the impact of each risk.

 

Project Sponsorship

There are multiple aspects that contribute to a successful project – the right people, proper planning, governance, and clear roles and responsibilities, to mention but a few. You could argue they are all equally important, but one of the most important aspects, and one that is often overlooked, is the position of the Project Sponsor.

In my experience, the Sponsor holds one of the most important roles in terms of project success or failure. An involved sponsor who is truly invested in the success of the project will bring drive and energy to the project at a senior executive level – especially needed when the going gets tough.

The Project Sponsor takes ownership for the project goals, provides overall direction for the project and is the owner of the final product/deliverable.

Project Sponsor – Definition

In PRINCE2, the Project Sponsor is not a defined role. PRINCE2 separately defines the “Project Executive” and the “Senior User” – two of the three core elements of the Project Board. For simpler projects these roles may well be combined, and this then aligns closely with the general usage of the term Project Sponsor.

The APM Body of Knowledge characterises the Project Sponsor as the individual for whom the project is undertaken and who is the primary risk taker. The Sponsor is a member of the Steering Group which provides strategic direction and will include senior managers and, sometimes, key stakeholders.

The PMI PMBOK Guide talks about project sponsors and project initiators: the project initiator authorises the initiation of a project and the project sponsor provides the financial resources, in cash or in kind for the project. Again, these roles may often be assumed by a single individual.

Who can be a Project Sponsor

It is unusual for Project Sponsors to be full-time project professionals. It is more likely that they are drawn from the management team of the business – perhaps as an interested “user”. For major projects it may be the CEO or CIO who assumes the role of Sponsor. It is preferable that the individual brings relevant experience and wields the authority and organisational ability to make things happen.

A sponsor needs to be:

  • a business leader and decision-maker with the credibility to work across corporate and functional boundaries;
  • an enthusiastic advocate of the work and the change it brings about;
  • prepared to commit time and support to the role;
  • sufficiently experienced in P3 to judge if the work is being managed effectively and to challenge P3 managers where appropriate.

Project Sponsor vs Other Project Roles

Project Sponsor vs. Project Owner

The project sponsor is a person.  The project owner is the organization that performs the project and receives its deliverables.  Normally the project sponsor is employed by the project owner organisation.

Project Sponsor vs. Project Manager

The project sponsor is one (and only one) level above the project manager.  While the project manager is responsible for the day to day operations of the project, the project sponsor seeks to promote the project to keep it high on the priority list, ensures the resources are in place to perform the project, and approves changes to the project.

  • Day-to-day management of project work: Project Sponsor – No; Project Manager – Yes
  • Project deliverables: Project Sponsor – Accepts; Project Manager – Produces
  • Funding: Project Sponsor – Approves; Project Manager – Requests

The two main differences between project sponsorship and project management 

    1. Project sponsorship includes the identification and definition of the project whereas project management is concerned with delivering a project that is already defined, if only quite loosely.
    2. The project sponsor is responsible for the project’s business case and should not hesitate to recommend cancellation of the project if the business case no longer justifies the project.

Quick look at Other Project Roles:

    • Project Manager:  Responsible for the day to day project work, keeping the project on schedule and budget.  They report to the Project Sponsor.
    • Project Team:  The people who perform the technical project work and produce the deliverables.  They report to the project manager.
    • Customers/Users:  The people who use the project deliverables to improve their lives or work.  They are sometimes involved directly within the project in the form of focus groups or test subjects.
    • Vendors:  The people and organizations the project procures to provide products and/or services to fill technical gaps in the project team’s knowledge or ability, or to enhance the quality of the final product.
    • Business Partners:  The people or organizations that the project owner partners with to fulfill a specific role like installation, training or support.
    • Functional Managers:  The managers of technical groups (departments) within the owner organization, who often supply technical expertise to the project.
    • External Stakeholders:  Most projects have stakeholders who are affected by the project, like government regulatory agencies, adjacent landowners, and the like.

Sponsor Responsibilities

The role of project sponsor is critical to ensuring the success of projects – therefore, when initiating a new project, you need to appoint the project sponsor with the importance of project sponsorship in mind. A project sponsor should be involved from project initiation to project end. They represent the business side of the project.  They were probably involved when the project was being conceived and advocated for its inception before a project manager was assigned.

Further, the sponsor is critical to strategic planning, high project sustainability, and successful implementation of project objectives. The role of project sponsor covers the financial and organisational responsibilities and activities that are directed to quick and decisive governance of the project.

The project sponsor is one, and only one, level above the project manager.  They do not manage the day to day operations of the project but they ensure the resources are in place, promote the project, and hold overall responsibility for the project’s success.

A good sponsor performs different functions during the project life cycle, serving as mentor, catalyst, motivator, barrier buster, and boundary manager. Most of the sponsor responsibilities are covered below:

  • The sponsor is the link between the project manager and senior managers, leading negotiations to gain and maintain stakeholder consensus.
  • Champion/Promotion: The project sponsor is the best ‘project seller’, championing the project throughout the business. The sponsor promotes and defends the project in front of all other stakeholders. They are the project champion that attempts to keep the project at the highest priority within the organisation.
  • Informing:  They receive project status updates from the project manager and disseminate the information to the relevant executives.
  • Project Charter:  This document officially creates the project and assigns the project manager.  It falls directly within the project sponsor’s responsibility.
  • Authorisation: They authorise the project and assign the project manager. They approve the project management plan and are kept aware of how the project is managed.
  • Scoping:  They are generally responsible for determining the initial project scope, although the project manager is ultimately responsible for the official project scope within the project management plan.
  • Goals: The Sponsor should ensure that the business need is valid and correctly prioritised within the project.
  • Communication: Clearly communicate on aspects of the project with stakeholder groups and senior management.
  • Keeping to Schedule: The Sponsor is heavily involved in ensuring that the project is kept to the original schedule along with the Project Manager. In order to manage the schedule the Sponsor and Project Manager should meet frequently and review the timeline.
  • Changes: A project can experience changes at any time. The Sponsor needs to ensure that these changes are properly managed to ensure that they don’t have any negative impact on the project.
  • Resolve Risks & Issues: Some issues are out of the reach of the Project Manager such as decisions on changes and conflicting objectives. The Sponsor takes control of these issues and ensures that they are solved efficiently and effectively.
  • Support: The Project Manager needs consistent support during a project. The Sponsor is on hand to provide this support in the form of mentoring, coaching and leadership. The Sponsor also supports the Project Team especially in terms of scope clarification, progress management and guidance.
  • Reporting: Assistance for the PM with appraisal and reporting.
  • Funding: They are responsible for negotiations to ensure funding is in place and approving changes to the project budget.
  • Leadership: Provide direction and guidance for project empowerment, key business strategies and project initiatives.
  • ROI & Benefits: As owner of the business case, the project sponsor is responsible for qualifying and overseeing the delivery of the benefits (the benefits realisation) as well as to identify project critical success factors and approve deliverables.
  • Identify members of the Steering Committee and chair the Steerco meetings.
  • Involve stakeholders in the project and maintain their ongoing commitment to the project through the use of communication strategies and project management planning methods.
  • Receiving:  Evaluate the project’s success on completion – The project sponsor receives the project deliverables from the project manager, approves them, and integrates them into the owner organization.

According to the Project Management Institute (PMI), the project sponsor role can be broken into three parts: vision, governance and value or benefits realization. They break those down in the following way:

Vision

    • Makes sure the business case is valid and in step with the business proposition
    • Aligns project with business strategy, goals and objectives
    • Stays informed of project events to keep project viable
    • Defines the criteria for project success and how it fits with the overall business

Governance

    • Ensures project is properly launched and initiated
    • Maintains organizational priorities throughout project
    • Offers support for project organization
    • Defines project roles and reporting structure
    • Acts as an escalation point for issues when something is beyond the project manager’s control
    • Gets financial resources
    • Decision-maker for progress and phases of project

Values & Benefits

    • Makes sure that risks and changes are managed
    • Helps to ensure control and review processes
    • Oversees delivery of project value
    • Evaluates status and progress
    • Approves deliverables
    • Helps with decision-making
    • Responsible for project quality throughout project phases

Common reasons why the Sponsor lets down the project:

Many organisations invest heavily in project management training but are blind to the benefits of having project leaders who truly understand how projects differ from other management activities. Businesses are letting a project down if the sponsor:

    • is reassigned in the organisation, or distracted by other priorities.
    • micro-manages, which can undermine project manager confidence and authority.
    • fails to understand the project process and responsibilities.

The chances are that if an inappropriate project sponsor has been chosen,

    • the effectiveness of the role is reduced,
    • the project is not funded sufficiently,
    • and the overall success of the project is likely to turn into failure.

In fact, any project which is initiated without an appropriate degree of executive sponsorship (executive sponsor) stands a high likelihood of failure.

Sponsorship: project, programme or portfolio

Project

The role of the project sponsor starts before the appointment of the project manager. It continues beyond project closure and the departure of the project manager. So the sponsorship role covers the whole project life cycle.

The project sponsorship role will often be taken by the programme manager where the project is part of a programme.

Programme

The scale of programmes will often require a sponsor to be supported by a group of senior managers who perform some sponsorship duties. However, ultimate accountability will lie with the programme sponsor.

The programme manager should also be a competent project sponsor and will often perform that role for some, or all, of the programme’s component projects.

Portfolio

Sponsorship of a portfolio of projects and programmes will be undertaken by a senior executive with the necessary status, credibility and authority. This may well be a main board member, or even the CEO of the organisation. The scale of a portfolio will require an extensive governance organisation. This may involve, for example, committees with the responsibility for investment decisions or management of change.

What a Project Sponsor Does In Each Phase

While sometimes a project sponsor is clearly engaged from the start and other times they are nowhere to be seen, the best project sponsor is fully engaged with every phase of the project.

Initiation Duties

Project sponsors are instrumental in selecting the project manager during the initiation phase, and then they give that project manager a clear mandate, context for the project and set the level of their authority.

Also, during the project initiation, the project sponsor makes sure the project is appropriate for the organization, offering input on the project charter and participating in the kick-off meeting. The sponsor helps with the decision making during this phase.

Planning Duties

For the planning phase, the project sponsor is checking to make sure the project plan is realistic and feasible. This accounts for time restrictions and whether or not the team is tasked with expectations they cannot meet.

The project sponsor can help resolve issues, too, if they’re beyond the scope of the project manager. If there are other projects in play, the project sponsor is making sure they’re all working together and not against each other.

Implementation Duties

For the implementation and control phases, the project sponsor should work with the project manager, but not overstep boundaries. The project sponsor evaluates the project’s actual progress against what was planned and provides feedback to the project manager as necessary.

Sponsors also help the project manager and team work more autonomously to solve issues as they arise, while making sure that processes are being followed. They identify underlying factors that might cause problems and celebrate completion of milestones.

Closing Duties

During the closing phase, the project sponsor is part of the post-mortem evaluation on performance and other aspects of the project. They make sure that handoffs and signoffs are done properly. Project sponsors help facilitate the discussion that decides whether a project was a success or failure.

Overall, a project sponsor helps to streamline communications. They create trust and collaboration and keep problems from escalating. In terms of issues, they set up the instrument to identify problems with schedule, cost and quality. In that sense, they’re also in charge of making sure risk management is successful. Finally, they also encourage record-keeping for historical data storage.

Innovation Case Study: Test Automation & Ambit Enterprise Upgrade

A business case of how technology innovation was successfully integrated into business operations and improved ways of working, supporting business success.

  
Areas of Science and Technology: Data Engineering, Computer Science
R&D Start Date: Dec 2018
R&D End Date: September 2019
Competent Professional: Renier Botha

 

Overview and Available Baseline Technologies

Within the scope of the project, the competent professionals sought to develop a regression testing framework aimed at testing the work carried out to upgrade the Ambit application[1] from a client-server solution to a software-as-a-service (SaaS) solution operating in the Cloud. The test framework developed is now used to define and support any testing initiatives across the Bank. The team also sought to automate the process; however, this failed due to a lack of existing infrastructure in the Bank. 

Initial attempts to achieve this by way of third-party solution providers, such as Qualitest, were unsuccessful, as these providers were unable to develop a framework or methodology which could be documented and reused across different projects. The team therefore sought to develop the framework from the ground up. The project was successfully completed in September 2019. 

Technological Advances

The upgrade would enable access to the system via the internet, meaning users would no longer need a Cisco connection onto the specific servers to engage with the application. The upgrade would also enable the system to be accessed from devices other than a PC or laptop. Business Finance at Shawbrook comprises 14 different business units, with each unit having a different product which is captured and processed through Ambit. All the existing functionality and business-specific configuration needed to be transferred to the new Enterprise platform, along with the migration of all the associated data. The competent professionals at Shawbrook sought to appreciably improve the current application through the following technological advances:

  • Development of an Automated Test Framework which could be used across different projects

Comprehensive, well-executed testing is essential for mitigating risks to deployment. Shawbrook did not have a documented, standardised, and proven methodology that could be adopted by different projects to ensure that proper testing practices are incorporated into project delivery. There was a requirement to develop a test framework to plan, manage, govern and support testing across the agreed phases, using tools and practices that help mitigate risks in a cost-effective and commensurate way.

The test team sought to develop a continuous delivery framework, which could be used across all units within Business Finance. The Ambit Enterprise Upgrade was the first project at Shawbrook to adopt this framework, which led to the development of a regression test pack and the subsequent successful delivery of the Ambit upgrade. The Ambit Enterprise project was the first project within the Bank which was delivered with no issues raised post release.

  • Development of a regression test pack which would enable automated testing of future changes or upgrades to the Ambit platform

Regression testing is a fundamental part of the software development lifecycle. With the increased popularity of the Agile development methodology, regression testing has taken on added importance. The team at Shawbrook sought to adopt an iterative, Agile approach to software development. 

A manual regression test pack was developed which could be used for future testing without the need for the involvement of business users. This was delivered over three test cycles with the team using the results of each cycle (bugs identified and resolved) to issue new releases. 

173 user paths were captured in the regression test pack, across 14 different divisions within Business Finance. 251 issues were found during testing, with some being within the Ambit application. Identifying and resolving these issues resulted in the advancement of the Ambit Enterprise platform itself. This regression test pack can now be used for future changes to the Ambit Enterprise application, as well as future FIS[2] releases, change requests and enhancements, without being dependent on the business users to undertake UAT. The competent professionals at Shawbrook are currently using the regression test pack to test the integration functionality of the Ambit Enterprise platform.
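
As a rough illustration of the idea (not Shawbrook’s actual framework), a data-driven regression pack can be expressed as parametrised tests, for example with pytest; the divisions, user paths and the run_user_path() driver below are hypothetical stand-ins:

  # Minimal sketch of a data-driven regression pack; all names here are illustrative.
  import pytest
  from types import SimpleNamespace

  # One entry per captured user path, tagged with its Business Finance division.
  USER_PATHS = [
      ("asset-finance", "create_new_agreement"),
      ("asset-finance", "amend_payment_schedule"),
      ("block-discounting", "post_settlement"),
  ]

  def run_user_path(division, path):
      # Stand-in for the code that would drive the application under test end to end.
      return SimpleNamespace(status="passed", detail="")

  @pytest.mark.parametrize("division,path", USER_PATHS)
  def test_user_path(division, path):
      result = run_user_path(division, path)
      assert result.status == "passed", f"{division}/{path} regressed: {result.detail}"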

  • Development of a costing tool to generate cost estimates for cloud test environment requirements

In order to resolve issues, solutions need to be tested within test environments. A lack of supply was identified within Shawbrook and there was an initiative to increase supply using the Azure cloud environment. The objective was to increase the capability within Business Finance to manage an Azure flexible hosting environment where necessary test environments could be set up on demand. There was also a requirement to plan and justify the expense of test environment management. The competent professionals sought to develop a costing tool, based on the Azure costing model, which could be used by project managers within Business Application Support (“BAS”) to quickly generate what the environment cost would be on a per-day or per-hour running basis. Costs were calculated based on the environment specification required and the number of running hours required. Environment specification was classified as either “high”, “medium” or “low”. For example, the test environment specification required for a web server is low, an application server is medium, while a database server is high. Shawbrook gained knowledge and increased its capability in the use of the Azure cloud environment and, as a result, is actively using the platform to undertake cloud-based testing.
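
The core of such a costing tool can be sketched in a few lines of Python; the hourly rates below are illustrative assumptions, not actual Azure prices (real figures would come from the Azure rate card):

  # Minimal sketch of the per-day / per-hour environment costing idea described above.
  # The rates per server tier are illustrative assumptions only.
  HOURLY_RATE_GBP = {"low": 0.10, "medium": 0.45, "high": 1.20}

  def environment_cost(servers, running_hours):
      # servers: list of tiers, e.g. web server ("low"), application server ("medium"),
      # database server ("high").
      hourly = sum(HOURLY_RATE_GBP[tier] for tier in servers)
      return {"per_hour": hourly, "per_day": hourly * 24, "total": hourly * running_hours}

  # Example: one web, one application and one database server running for a 10-day test cycle.
  print(environment_cost(["low", "medium", "high"], running_hours=24 * 10))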

The above constitutes an advance in knowledge and capability in the field of Data Engineering and Computer Science, as per sections 9 a) and c) of the BEIS Guidelines.

Technological Uncertainties and activities carried out to address them

The following technological uncertainties were encountered while developing the Ambit Enterprise upgrade, mainly pertaining to system uncertainty:

  • Implementation of the new Ambit Enterprise application could disrupt existing business processes

The biggest risk for the programme of change was the potential disruption of existing business processes due to the implementation of the change without validation of the upgraded application against the existing functionality. This was the primary focus of the risk mitigation process for the project. Following the test phases set out in the test framework would enable a clear understanding of all the residual risks encountered approaching implementation, providing stakeholders with the context required to make a calculated judgement on these risks.

When an issue was identified through testing, a triage process was undertaken to categorise the issues as either a technical issue, or a user issue. User issues were further classified as “training” or “change of business process”. Technical issues were classified as “showstoppers”, “high”, “medium” and “low”. These were further categorised by priority as “must haves” and “won’t haves” in order to get well-defined acceptance criteria for the substantial list of bugs that arose from the testing cycles. In total, 251 technical issues were identified.

The acceptance criteria for the resolution of issues were:

  • A code fix was implemented
  • A business-approved workaround was implemented
  • The business accepted the risk

All showstoppers were resolved with either a code fix or an acceptable workaround. Configuration issues were within the remit of Shawbrook’s business application support (“BAS”) team to resolve, whilst other issues could only be resolved by the FIS development team. When the application went live, there were no issues raised post release, and all issues present were known and met the acceptance criteria of the business. 

  • Business processes may no longer align with the new web-based application

Since the project was an upgrade, there was the potential for operational impact on existing functionality due to differences between the Ambit client-server solution and the upgraded Ambit Enterprise web-based solution. The BAS team at Shawbrook were required to make changes to the business processes in order to align with the way the Ambit Enterprise solution now operated. Where Shawbrook-specific issues could not be resolved through the configuration of the application with the business processes, changes were made to the functionality within Ambit; for example, additional plug-ins were developed for the Sales Portal platform to integrate with the Ambit Enterprise application. 

Because Ambit Enterprise was a web-based application, application and security vulnerabilities needed to be identified so that the correct security level was achieved. Because of this, performance and security testing, which were not previously being executed, needed to be introduced to the test framework. Performance testing also needed to be executed so that speed and stability requirements under the expected workloads were met.

Summary and Conclusions

The team at Shawbrook successfully developed a test framework which could be used across all projects within Business Finance. The development of the test framework led to the generation of a regression test pack for the Ambit Enterprise upgrade. By undertaking these R&D activities, Shawbrook gained knowledge in the use of the Azure Cloud Environment for testing, and increased its automated testing capabilities, enabling the transition to a continuous delivery framework whereby the majority of testing is automated.


[1] Ambit is the asset finance application operating within the business unit; 70-80 percent of all lending transactions are captured and managed through Ambit

[2] FIS is the Ambit Enterprise vendor

Project Retrospective

These meetings go by many names – lessons learned, postmortems, retrospectives, after-action reviews, wrap-ups, project “success” meetings. Regardless of what you call them, they all have the same goal and follow the same basic pattern.

Retrospective – looking back on or dealing with past events or situations, or an exhibition or compilation showing the development of work over a period of time.

An Agile retrospective is a meeting that’s held at the end of an iteration (sprint) in an agile project. During the retrospective, the team reflects on what happened in the iteration (sprint) and identifies actions for improvement going forward.

The Project Retrospective dedicates time to reviewing a completed project and learning from both the successes and the failures so the team and organisation can improve how they work going forward.

The general purpose is to allow the team, as a group, to evaluate its past working cycle. In addition, it’s an important moment to gather feedback on what went well and what did not.

Classic questions answered in these meetings:

  • What did we set out to do?
  • What actually happened?
  • Why did it happen?
  • What are we going to do next time?

 

“We do not learn from experience… we learn from reflecting on experience.” – John Dewey

Retrospectives give a team time to reflect on what they learned.

 

The Process

The process for debriefing a project covers roughly the same topics as the quick after-action discussion.

  1. Review the Project
  2. What went well and what did not
  3. How can we do it better next time

Review the project

Start by reviewing the project facts: goals, timeline, budget, major events, and success metrics.

In order to come up with useful ideas that everyone can agree on, the team needs a shared understanding of the facts and insight into the parts of the project in which they may not have been involved.

It’s important not to skip or rush through this step, especially for larger projects. People will arrive at the retrospective ready to discuss and solve problems, often assuming they know everything they need to know about what happened. This is rarely true.

If you are reviewing a project as a team, that means it took many people with unique experiences to get to that point. This step ensures everyone gets all the facts straight before they try to solve problems they may only partially understand.

What went well and what did not

This is the heart of the meeting. Everyone shares what they learned during the project: both the good and the bad.

In my opinion, this is the most fun and most challenging part of the meeting. As the meeting leader, you have an enormous impact on the success of your retrospective by deciding which questions you’ll ask and how the team shares their answers.

How can we do it better next time

Real change is the ultimate measure of a retrospective’s success. To ensure that your retrospective results in something actually getting better, you’ll end the meeting by creating a specific action plan for improvements.

Top quotes on Change & Trust by Stephen Covey

7Habits-Covey

I first read the book “The 7 Habits of Highly Effective People” in the 90s – timeless inspiration!


  1. “There are three constants in life – change, choice and principles.”
  2. “Make time for planning; Wars are won in the general’s tent.”
  3. “Be proactive.” 
  4. “Begin with the end in mind.”
  5. “You have to decide what your highest priorities are and have the courage – pleasantly, smilingly, nonapologetically – to say ‘no’ to other things. And the way to do that is by having a bigger ‘yes’ burning inside.”
  6. “Put first things first.”
  7. “Think win-win.”
  8. “Seek first to understand, and then to be understood.” 
  9. “Most people do not listen with the intent to understand. Most people listen with the intent to reply.”
  10. “If we keep doing what we’re doing, we’re going to keep getting what we’re getting.”
  11. “Trust is the glue of life. It’s the most essential ingredient in effective communication. It’s the foundational principle that holds all relationships.” 
  12. “Treat your employees exactly as you want them to treat your best customers.” 
  13. “The key is not to prioritise what’s on your schedule but to schedule your priorities.” “Leadership is a choice, not a position.” 
  14. “I am not a product of my circumstances, I am a product of my decisions.” 
  15. “Strength lies in differences not in similarities.” 
  16. “Listen with your eyes for feelings.” 
  17. “The way we see the problem is the problem.” 
  18. “Most of us spend too much time on what is urgent and not enough time on what is important.” 
  19. “Accountability breeds response-ability.” 
  20. “Highly proactive people don’t blame circumstances, conditions or conditioning for their behaviour. Their behaviour is a product of their own conscious choice.” 
  21. “Management is doing things right; leadership is doing the right things.” 
  22. “Be a light, not a judge. Be a model, not a critic. Be part of the solution, not part of the problem.” 
  23. “He who has a why can deal with any what or how.” 
  24. “Our ultimate freedom is the right and power to decide how anybody or anything outside ourselves will affect us.” 
  25. “The only thing that endures over time is the Law of the Farm. You must prepare the ground, plant the seed, cultivate, and water it if you expect to reap the harvest.”
  26. “A personal mission statement becomes the DNA for every other decision we make.” 
  27. “Courage is not the absence of fear but the awareness that something else is more important.” 
  28. “To achieve goals you’ve never achieved before you need to start doing things you’ve never done before.” 
  29. “Live out of your imagination, not your history.” 
  30. “Sow a thought, reap an action; sow an action, reap a habit; sow a habit, reap a character; sow a character, reap a destiny.” 
  31. “Every human has four endowments – self-awareness, conscience, independent will and creative imagination. These give us the ultimate human freedom. The power to choose, to respond, to change.” 
  32. “I teach people how to treat me by what I will allow.” 
  33. “Motivation is a fire from within. If someone else tries to light that fire under you, chances are it will burn very briefly.” 
  34. “You can’t change the fruit without changing the root.” 
  35. “Our character is basically a composite of our habits because they are consistent. Often unconscious patterns, they constantly, daily, express our character.” 
  36. “Be patient with yourself. Self-growth is tender; it’s holy ground. There’s no greater investment.” 
  37. “If I really want to improve my situation, I can work on the one thing over which I have control – myself.” 
  38. “Once you have a clear picture of your priorities – that is, values, goals and high-leverage activities – organise your life around them.”
  39. “What you do has greater impact than what you say.”

 

Also see quotes from Peter Drucker

Humans are smarter than any type of AI – for now…

Despite all the technological advancements, machines today can only achieve the first two of the three AI objectives. AI capabilities are at least equalling, and in most cases exceeding, humans in capturing information and determining what is happening. When it comes to real understanding, machines still fall short – but for how long?

In the blog post, “Artificial Intelligence Capabilities”, we explored the three objectives of AI and its capabilities – to recap:

AI-8Capabilities

  • Capturing Information
    • 1. Image Recognition
    • 2. Speech Recognition
    • 3. Data Search
    • 4. Data Patterns
  • Determine what is happening
    • 5. Language Understanding
    • 6. Thought/Decision Process
    • 7. Prediction
  • Understand why it is happening
    • 8. Understanding

To execute these capabilities, AI leans heavily on three technology areas (enablers):

  • Data-collecting devices, e.g. mobile phones and IoT
  • Processing Power
  • Storage

AI relies on large amounts of data, which requires storage and powerful processors to analyse the data and calculate results through complex algorithms – resources that were very expensive until recent years. Technology enhancements in machine computing power following Moore’s law, the now mainstream availability of cloud computing and storage, and the fact that there are more mobile phones on the planet than humans have really enabled AI to come to the forefront of innovation.

AI_takes_over

AI at the forefront of Innovation – here are some interesting facts to demonstrate this point:

  • Amazon uses machine learning systems to recommend products to customers on its e-commerce platform. AI helps it determine which deals to offer and when, and influences many aspects of the business.
  • A PwC report estimates that AI will contribute $15.7 trillion to the global economy by 2030. AI will make products and services better, and it’s expected to boost GDP globally.
  • The self-driving car market is expected to be worth $127 billion worldwide by 2027. AI is at the heart of the technology to make this happen. NVIDIA created its own computer — the Drive PX Pegasus — specifically for driverless cars and powered by the company’s AI and GPUs. It starts shipping this year, and 25 automakers and tech companies have already placed orders.
  • Scientists believed that we were still years away from AI being able to win at the ancient game of Go, regarded as the most complex human game. Yet Google’s AI recently beat the world’s best Go player.

To date, computer hardware has followed a growth curve called Moore’s law, in which power and efficiency double every two years. Combine this with recent improvements in software algorithms and the growth becomes even more explosive. Some researchers expect artificial intelligence systems to be only one-tenth as smart as a human by 2035. Things may start to get a little awkward around 2060, when AI could start performing nearly all the tasks humans do – and doing them much better.

Using AI in your business

Artificial intelligence has so much potential across so many different industries that it can be hard for businesses looking to profit from it to know where to start.

Understanding these AI capabilities makes the technology more accessible to businesses that want to benefit from it. With this knowledge you can now take the next step:

  1. Knowing your business, identify the right AI capabilities to enhance and/or transform your business operations, products and/or services.
  2. Look at AI vendors with a critical eye, understanding what AI capabilities are actually offered within their products.
  3. Understand the limitations of AI and be realistic about whether alternative solutions might be a better fit.

In a future post we’ll explore some real life examples of the AI capabilities in action.

 

Also read:

Ambit Enterprise Upgrade

Sep ’19 – The latest version of the Ambit Enterprise software has been deployed for Shawbrook Bank’s Business Finance division.
As Programme Director, Renier was responsible for managing the integration and delivery, software development and implementation of the Enterprise version of Ambit for 15 specialist asset-finance business units and their associated product offerings.
Ambit Asset Finance Software meets its customer’s diverse set of requirements by not only bringing to market scalable, flexible, and industry-leading software solutions, but delivering and supporting these applications in fully managed and hosted environments.
Read more about the FIS asset finance solution, Ambit: FIS Ambit Asset Finance Solution.
#Lead #Direct #ProjectManagement #AssetFinance

GANTT Charts

A Gantt chart is a horizontal bar chart developed as a production control tool in 1917 by Henry L. Gantt, an American engineer and social scientist. Frequently used in project management, a Gantt chart provides a graphical illustration of a schedule that helps to plan, coordinate, and track specific tasks in a project.

Gantt charts may be simple versions created on graph paper or more complex automated versions created using project management applications such as Microsoft Project or Excel.

A Gantt chart is constructed with a horizontal axis representing the total time span of the project, broken down into increments (for example, days, weeks, or months) and a vertical axis representing the tasks that make up the project (for example, if the project is outfitting your computer with new software, the major tasks involved might be: conduct research, choose software, install software). Horizontal bars of varying lengths represent the sequences, timing, and time span for each task. Using the same example, you would put “conduct research” at the top of the vertical axis and draw a bar on the graph that represents the amount of time you expect to spend on the research, and then enter the other tasks below the first one and representative bars at the points in time when you expect to undertake them. The bar spans may overlap, as, for example, you may conduct research and choose software during the same time span. As the project progresses, secondary bars, arrowheads, or darkened bars may be added to indicate completed tasks, or the portions of tasks that have been completed. A vertical line is used to represent the report date.
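
As a rough illustration of that construction, here is a minimal Python sketch using matplotlib; the start days and durations are illustrative assumptions added to the software-outfitting example above:

  # Minimal sketch of a Gantt chart; task start days and durations (in days) are assumed.
  import matplotlib.pyplot as plt

  tasks = [
      ("Conduct research", 0, 5),   # (task, start day, duration) - bars may overlap
      ("Choose software",  3, 4),
      ("Install software", 7, 2),
  ]

  fig, ax = plt.subplots()
  for row, (name, start, duration) in enumerate(tasks):
      ax.barh(y=row, width=duration, left=start)
  ax.set_yticks(range(len(tasks)))
  ax.set_yticklabels([t[0] for t in tasks])
  ax.invert_yaxis()                  # first task at the top, as described above
  ax.axvline(x=8, linestyle="--")    # vertical line marking the report date
  ax.set_xlabel("Project day")
  plt.show()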

Gantt charts give a clear illustration of project status, but one problem with them is that they don’t indicate task dependencies – you cannot tell how one task falling behind schedule affects other tasks. The PERT Chart, another popular project management charting method, is designed to do this. Automated Gantt charts store more information about tasks, such as the individuals assigned to specific tasks, and notes about the procedures. They also offer the benefit of being easy to change, which is helpful. Charts may be adjusted frequently to reflect the actual status of project tasks as, almost inevitably, they diverge from the original plan.

Also Read…

Management Communication Plan

PERT Charts

A PERT chart is a project management tool used to schedule, organize, and coordinate tasks within a project. PERT stands for Program Evaluation Review Technique, a methodology developed by the U.S. Navy in the 1950s to manage the Polaris submarine missile program. A similar methodology, the Critical Path Method (CPM) was developed for project management in the private sector at about the same time.

PERT Chart 1

A PERT chart presents a graphic illustration of a project as a network diagram consisting of numbered nodes (either circles or rectangles) representing events, or milestones in the project linked by labelled vectors (directional lines) representing tasks in the project. The direction of the arrows on the lines indicates the sequence of tasks. In the diagram, for example, the tasks between nodes 1, 2, 4, 8, 9 and 10 must be completed in sequence. These are called dependent or serial tasks. The tasks between nodes 2 and 3, and nodes 2 and 4 are not dependent on the completion of one to start the other and can be undertaken simultaneously. These tasks are called  parallel or concurrent tasks. Tasks that must be completed in sequence but that don’t require resources or completion time are considered to have event dependency. These are represented by dotted lines with arrows and are called dummy activities. For example, the dashed arrow linking nodes 6 and 9 indicates that the system files must be converted before the user test can take place, but that the resources and time required to prepare for the user test (writing the user manual and user training) are on another path. Numbers on the opposite sides of the vectors indicate the time allotted for the task.
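
The scheduling logic behind such a network can be illustrated with a small forward-pass calculation; the tasks, durations and dependencies below are illustrative assumptions and do not correspond to the numbered nodes in the diagram:

  # Minimal sketch of a PERT/CPM forward pass: compute each task's earliest finish time,
  # and hence the length of the critical path. All figures are illustrative assumptions.
  tasks = {                      # task -> (duration in days, prerequisite tasks)
      "requirements": (3, []),
      "design":       (5, ["requirements"]),
      "build":        (8, ["design"]),
      "write_manual": (4, ["design"]),           # runs in parallel with "build"
      "user_test":    (2, ["build", "write_manual"]),
  }

  earliest_finish = {}

  def finish(task):
      if task not in earliest_finish:
          duration, deps = tasks[task]
          earliest_finish[task] = duration + max((finish(d) for d in deps), default=0)
      return earliest_finish[task]

  project_duration = max(finish(t) for t in tasks)
  print(earliest_finish)                               # per-task earliest finish times
  print("Critical-path length:", project_duration)    # here: 3 + 5 + 8 + 2 = 18 days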

The PERT chart is sometimes preferred over the Gantt Chart, another popular project management charting method, because it clearly illustrates task dependencies. On the other hand, the PERT chart can be much more difficult to interpret, especially on complex projects. Frequently, project managers use both techniques.

Project Failure? How to Recover and/or Prevent…

Statistics indicate that 68% of all IT projects end in failure!

The PMI defines a high-performing organisation as one that completes 80% or more of its projects on time, on budget, and meeting original goals. In a low-performing organisation, only 60% or fewer projects hit the same marks.

Projects fail for all kinds of reasons:

  • Stakeholders can change their objectives
  • Key team members can leave for other companies
  • Budgets can disappear
  • Materials/vendors can be delayed
  • Priorities can go unmanaged
  • Time can run out
  • …and others

In this post:

> How to prevent project failure (with some statistics)

> How to recover a failing project

How to prevent project failure

Prevention is the best cure, so what can you do to prevent projects from failing? Here are some statistics…

  • Organisations that invest in proven project management practices waste 28 times less money because more of their strategic initiatives are completed successfully.
    Source: PMI’s Pulse of the Profession Survey, 2017.
  • 77% of high-performing organizations have actively-engaged project sponsors, while only 44% of low-performing organizations do.
    Source: PMI’s Pulse of the Profession Survey, 2017.
  • 46% of CIOs say that one of the main reasons IT projects fail is weak ownership.
    Source: The Harvey Nash/KPMG CIO Survey, 2017.
  • 33% of IT projects fail because senior management doesn’t get involved and requirements/scope change mid-way through the project.
    Source: A Replicated Survey of IT Software Project Failures by Khaled El Emam and A. Güneş Koru, 2008.
  • 78% of respondents feel that business is out of sync with project requirements and business stakeholders need to be more involved in the requirements process.
    Source: Doomed from the Start Industry Survey by Geneca, 2011.
  • 45% of the managers surveyed say business objectives are unclear to them.
    Source: Doomed from the Start Industry Survey by Geneca, 2011.
  • Companies that align their enterprise-wide PMO (project management office) to strategy had 38% more projects meet original goals than those that did not. They also had 33% fewer projects deemed failures.
    Source: PMI’s Pulse of the Profession Survey, 2017.
  • 40% of CIOs say that some of the main reasons IT projects fail is an overly optimistic approach and unclear objectives.
    Source: The Harvey Nash/KPMG CIO Survey, 2017.
  • Poor estimation during the planning phase continues to be the largest (32%) contributor to IT project failures.
    Source: PwC 15th Annual Global CEO Survey, 2012.
  • Projects with effective communication are almost twice as likely to deliver project scope successfully and meet quality standards as projects without effective communication (68% vs 32% and 66% vs 33%, respectively).
    Source: PwC 15th Annual Global CEO Survey, 2012.

How to recover a failing project

These statistics show that the odds are not in your favour. It is inevitable that you will have to deal with a failing project or two at some point in your career… You can turn the odds back in your favour by taking decisive action to recover failing projects.

Here are four steps you can use that could save a failing project, backed up by original research from Gartner, iSixSigma, PMI Project Zone Congress, The Institution of Engineering and Technology, and Government CIO Magazine. Follow these four steps and salvage your failing project!

Step 1: Stop and Evaluate

Step 1 – Big action items:

  • Issue a “stop work” order
  • Talk with everyone

Metrics/Indicators: The right project Management Information (MI) should give you the early warning signs you need when things are not going according to plan and are heading towards failure. These signs should drive you to action, as rescuing a failing project is not a task to be sneezed at. It takes planning, and the process can consume weeks of key resources' time and effort.

People: To help ease the pain of stopping a project, work with the team members’ managers (resource owners) to identify and assign interim work. As people are your most important asset, it is important to keep them productively engaged while you are evaluating and re-planning your project recovery.

Project artefacts/deliverables: Make sure all the project artefacts and deliverables are safely stored where they cannot be tampered with during the interim period.

Communicate (clear, concise, and concrete): Communicate to your team why their project is on hold. Spend the time needed to learn as much as you can about each team member's opinions of the project and of each other. Learning that their project will be put on hold will inevitably create distrust. Transparency and tailored messaging are the best way to mitigate bad feelings. See the blog posts "Management Communication Plan" and "Effective Leadership Communication".

Project/Delivery Manager (You): Check your ego. Go to the major stakeholders and ask for anonymous feedback on their view of the overall project. When evaluating their responses, don't forget to consider company culture and politics and how those factors may have played a role in forming the stakeholders' opinions.

Step 2: Why your project is failing – Root causes

Step 2 – Big action items:

  • Establish allowable solutions for project rescue (including project termination)
  • Identify root causes of the problem
  • Identify risks to project continuation

Determine the root causes: Most of the time, the cause of project problems is not immediately obvious. Even the best project managers (those with excellent project plans, appropriate budgets, and fantastic scope control) also struggle, on occasion, with project failure.

You'll only get to the bottom of it by doing a Root Cause Analysis (RCA), and the "5 Whys" technique can help with that. See "The 5 Whys for root cause analysis".

Surface-level answers are often the temptation when project managers reach this step. They might focus on the complexity of their project, their outdated project management software or methodology, their unclear objectives or their stakeholders' lack of involvement. All of these problems are so generic that they don't provide enough insight to create real solutions.

Apply the "5 Whys" and be specific when answering these questions, for example:

  • Why are objectives unclear?
  • Why aren’t users getting involved?
  • Why are the estimates wrong?

Of course, some of these answers may be hard to hear, and solutions can range from the challenging to impossible. Remember: if these issues could be easily remedied, they would have been addressed and resolved. Even simple problems — like a team member leaving — can take months to fix. Ask yourself: are you using the right technology for the job? Are your dependencies so external that project control is simply out of your hands?

If you’re still struggling to figure out where the root of your project failure is, consider these seven issues – the most common causes of project failure.

  • Complexity
  • External
  • Financial
  • Operational
  • Organizational
  • Schedule
  • Technology

Risk Assessment: What are the risks when trying to salvage the project? Are those risks worth it? Is the project salvageable? Answer these questions before moving on.

Step 3: War Room

Step 3 – Big action items:

  • Set up the war room
  • Re-engage stakeholders
  • Create a tentative plan to move forward

Okay, General!

Assemble the team, seat them all together, and work through a rescue workshop. You're in a "kill or fix" mentality: you're done with fact finding, asking questions for further research, or finding other excuses to delay the process. That should all have been done in step two. You're focussed on figuring out what to do with your project.

The "war room" will be intense – all members need to be prepared and in the right problem-solving mindset!

The decision-making process could take two hours or several days. All key decision makers must be present. As this is not always possible, some executives may prefer to be called in as the meeting nears its end, when team members can present prepared options.

To get the most out of the workshop, conduct the meeting face to face (take the meeting offline). Try to limit the meeting to ten people, including the most important stakeholders (such as the sponsors), the project manager, and senior team members, including a technical representative who can give insight into plan feasibility.

The war room is serious business –  prepare for it. Create an agenda to go over findings, from quantitative reporting to team member interviews. Encourage pre-war-room collaboration (covering the outcomes of steps 1 and 2) toward the ideal shared result.

When you start the war room meeting, all project material should be readily available. That's your fact base, driving data-driven assumptions and decisions.

Using the facts, the purpose of the war room, in essence, is to answer three deceptively complex questions:

  • Is the business case still valid?
  • If the business case is no longer valid, is there potential for a new, reimagined, justified business case?
  • (If so): Are the added costs for project rescue worth it?

Encourage your task force to focus on identifying the project’s primary drivers (i.e. business need/value, budget, schedule, scope, or quality). Ideally, there should only be one driver that controls the outcome of the project – this is usually the business need for the project’s deliverables.

Sometimes the primary driver is beyond repair. For example, if the core due date has passed and it was aligned with a market cycle (ex: Black Friday to Christmas), then the project is irremediable.

Least-case scenario: Clearly articulate the primary goal. Then identify what the team can do with the least amount of effort. Develop a scenario that costs the company the least and gets closest to achieving the primary goal.

Project termination considerations: If the primary goal cannot be achieved, prepare a recommendation to terminate the project… but not without scrutiny. Several variables must be considered and thoroughly addressed in the war room.

  • Consider trade-offs that could make the worst-case scenario more viable than originally thought.
  • Think about the potential backlash from killing a project.
    • How does that decision affect business strategy?
    • Other projects?
    • Public perceptions?
    • Potential future clients?

Alternatives: Should the least-case scenario make sense, explore further alternatives. Are there options that could deliver more of the project's objectives? Consider how adding those solutions to your plan can create additional potential scenarios, positive or negative.

New project charter: Write down the main points of your plan in a revised project charter.

Replacement project option: It's not uncommon for stakeholders to propose a replacement project instead of a rescue. That's a totally viable option: kill the project, salvage only the essential, functional portions of the original attempt, and work to create a new plan. If the decision is to completely start over, abandon project rescue altogether and justify the replacement project on its own merit (a new scope, budget, resource plan, etc.).

Step 4: Set your project in motion

Step 4 – Big action items:

  • Finalise how your project will move forward
  • Confirm responsibilities
  • Reset organizational expectations.

Following your war room meeting, your next steps are all about follow-up. The real rescue starts here, and it is the most challenging part of the process.

Re-engage stakeholders around the contents of the new project plan and complete the details with precise commitments from each team member. Plans should be finalised within two days.

Be careful as hesitation and procrastination can limit team commitment and lower morale. You’re the general; get your troops ready to re-engage and to stay committed and focussed!

Reconfirm all project metrics: Validate all project aspects, especially resources, as people may have been allocated to other productive work while you were reworking your rescue plan.

As the project rolls forward, be sure to detail the new project’s profile, scope, and size to the core team and beyond. Emphasize expected outcomes and explain how this project aligns with the company’s goals. Don’t shy away from communicating what these changes can mean on a big-picture scale. While you may receive some feedback, be direct: the project is proceeding.

Make sure all communication is clear. Confirm that stakeholders accept their new responsibilities to the project.

Case Study: Renier Botha’s Role as Non-Executive Director at KAMOHA Tech

Introduction

In this case study, we examine the strategic contributions of Renier Botha, a Non-Executive Director (NED) at KAMOHA Tech, a company specialising in Robotic Process Automation (RPA) and IT Service Management (ITSM). Botha’s role involves guiding the company through corporate governance and product development to establish KAMOHA Tech as a standalone IT service provider.

Background of KAMOHA Tech

KAMOHA Tech operates within the rapidly evolving IT industry, focusing on RPA and ITSM solutions. These technologies are crucial for businesses looking to automate processes and enhance their IT service offerings, thereby increasing efficiency and reducing costs.

Role and Responsibilities of Renier Botha

Renier Botha joined KAMOHA Tech with a wealth of experience in IT governance and service management. His primary responsibilities as a NED include:

  • Corporate Governance: Ensuring that KAMOHA Tech adheres to the highest standards of corporate governance, which is essential for the company’s credibility and long-term success. Botha’s oversight ensures that the company’s operations are transparent and align with shareholder interests.
  • Strategic Guidance on Product and Service Development: Botha plays a pivotal role in shaping the strategic direction of KAMOHA Tech’s product offerings in RPA and ITSM. His expertise helps in identifying market needs and aligning the product development to meet these demands.
  • Mentoring and Leadership: As a NED, Botha also provides mentoring to the executive team, offering insights and advice drawn from his extensive experience in the IT industry. His guidance is crucial in steering the company through phases of growth and innovation.

Impact of Botha’s Involvement

Botha’s contributions have had a significant impact on KAMOHA Tech’s trajectory:

  • Enhanced Governance Practices: Under Botha’s guidance, KAMOHA Tech has strengthened its governance frameworks, which has improved investor confidence and positioned the company as a reliable partner in the IT industry.
  • Product Innovation and Market Fit: Botha’s strategic insights into the RPA and ITSM sectors have enabled KAMOHA Tech to innovate and develop products that are well-suited to the market’s needs. This has been crucial in distinguishing KAMOHA Tech from competitors and capturing a larger market share.
  • Sustainable Growth: Botha’s emphasis on sustainable practices and long-term strategic planning has positioned KAMOHA Tech for sustainable growth. His influence ensures that the company does not only focus on immediate gains but also invests in long-term capabilities.

Challenges and Solutions

Despite the successes, Botha’s role involves navigating challenges such as:

  • Adapting to Market Changes: The IT industry is known for its rapid changes. Botha’s experience has been instrumental in helping the company quickly adapt to these changes by foreseeing industry trends and aligning the company’s strategy accordingly.
  • Balancing Innovation with Governance: Ensuring that innovation does not come at the expense of governance has been a delicate balance. Botha has managed this by setting clear boundaries and ensuring that all innovations adhere to established governance protocols.

Conclusion

Renier Botha’s role as a Non-Executive Director at KAMOHA Tech highlights the importance of experienced leadership in navigating the complexities of the IT sector. His strategic guidance in corporate governance and product development has not only enhanced KAMOHA Tech’s market position but has also set a foundation for its future growth. As KAMOHA Tech continues to evolve, Botha’s ongoing influence will be pivotal in maintaining its trajectory towards becoming an independent and robust IT service provider.

Cyber-Security 101 for Business Owners

Running a business requires skill, with multiple things happening simultaneously that demand your attention. One of those critical things is cyber-security – something you need to focus on today.

In today's digital world, all businesses depend on the Internet in one way or another… For SMEs (Small and Medium Enterprises) that use the Internet exclusively as their sales channel, the Internet is not only a source of opportunity but the lifeblood of the organisation. Through the Internet, an enterprise can operate 24×7 with a digitally enabled workforce, bringing unprecedented business value.

Like any opportunity though, this also comes with a level of risk that must be mitigated and continuously governed, not just by the board but by every member of the team. Some of these risks can have a seriously detrimental impact on the business, ranging from financial and data loss to downtime and reputational damage. It is therefore your duty to ensure your IT network is fully protected and secure, to protect your business.

Statistics show that cybercrime is rising exponentially. This is mainly due to advances in technology giving access to inexpensive but sophisticated tools. Used by experienced and inexperienced cyber criminals alike, these tools are causing havoc across networks, resulting in business downtime that costs the economy millions every year.

If your business is not trading for 100 hours, what is the financial and reputational impact? That could be the downtime caused by, for example, a ransomware attack – yes, that’s almost 5 days of no business, costly for any business!

Understanding the threat

Cyber threats take many forms and are an academic subject in their own right. So where do you start?

First you need to understand the threat before you can take preventative action.

Definition: Cyber security, or information technology security, refers to the techniques of protecting computers, networks, programs and data from unauthorised access or attacks that are aimed at exploitation.

A good start is to understand the following cyber threats:

  • Malware
  • Worms
  • Trojans
  • IoT (Internet of Things)
  • Crypto-jacking

Malware

Definition: Malware (a portmanteau of malicious software) is any software intentionally designed to cause damage to a computer, server, client, or computer network.

During Q2 2018, the VPNFilter malware reportedly infected more than half a million small business routers and NAS devices, and malware is still one of the top risks for SMEs. With the ability to exfiltrate data back to the attackers, businesses are at risk of losing sensitive information such as usernames and passwords.

Potentially these attacks can remain hidden and undetected. Businesses can overcome these styles of attacks by employing an advanced threat prevention solution for their endpoints (i.e. user PCs). A layered approach with multiple detection techniques will give businesses full attack chain protection as well as reducing the complexity and costs associated with the deployment of multiple individual solutions.

Worms

Definition: A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. Often, it uses a computer network to spread itself, relying on security failures on the target computer to access it.

Recent attacks, including WannaCry and Trickbot, used worm functionality to spread malware. The worm approach tends to make more noise and can be detected faster, but it has the ability to affect a large number of victims very quickly. For businesses, this may mean your entire team is impacted (with the worm spreading to every endpoint in the network) before the attack can be stopped.

Approximately 20% of UK businesses that had been infected with malware had to cease business operations immediately resulting in lost revenue.

Internet of Things (IoT)

Definition: The Internet of Things (IoT) is the network of devices, such as vehicles and home appliances, that contain electronics, software, actuators, and connectivity.

More devices are able to connect directly to the web, which has a number of benefits, including greater connectivity, meaning better data and analytics. However, various threats and business risks are lurking in the use of these devices, including data loss, data manipulation and unauthorised access to devices leading to access to the network, etc.

To mitigate this threat, devices should have strict authentication, limited access and heavily monitored device-to-device communications. Crucially, these devices will need to be encrypted – a responsibility that is likely to be driven by third-party security providers but should be enforced by businesses as part of their cyber-security policies and standard operating procedures.

Cryptojacking

Definition: Cryptojacking is the secret use of your computing device to mine cryptocurrency. Cryptojacking used to be confined to the victim unknowingly installing a program that secretly mines cryptocurrency.

With the introduction and rise in popularity and value of cryptocurrencies, cryptojacking has emerged as a cyber-security threat. On the surface, cryptomining may not seem particularly malicious or damaging; however, the costs it can incur are. If a cryptomining script gets into your servers, it can send energy bills through the roof or, if it reaches your cloud servers, hike up usage bills (the biggest commercial concern for IT operations utilising cloud computing). It can also pose a potential threat to your computer hardware by overloading CPUs.

According to a recent survey, 1 in 3 UK businesses have been hit by cryptojacking, and the numbers are rising.

Mitigating the risk 

With these few simple and easy steps you can make a good start in protecting your business:

  • Education: At the core of any cyber-security protection plan there needs to be an education campaign for everyone in the business. People must understand the gravity of the threat posed – regular training sessions can help here. This shouldn't be viewed as a one-off box-ticking exercise to be forgotten about: rolling, regularly updated training sessions will ensure that staff members are aware of the changing threats and how they can best be avoided.
  • Endpoint protection: Adopt a layered approach to cyber security and deploy endpoint protection that monitors processes in real time, seeks out suspicious patterns, enhances threat-hunting capabilities that eliminate threats (quarantine or delete), and reduces the downtime and impact of attacks.
  • Lead by example: Cyber-security awareness should come from the top down. The time when cyber-security was solely the domain of IT teams is long gone. If you are a business stakeholder, you need to lead by example by promoting and practising a security-first mindset.

Different Software Testing – Explained

Testing of software and applications is an integral part of the software development and deployment lifecycle. But with so many different types of tests to choose from when compiling your test approach, which are best suited to your requirements?

In this post 45 different tests are explained.

Software application testing is conducted within two domains: Functional and Non-Functional Testing.

Functional testing is a software testing process used within software development in which software is tested to ensure that it conforms with all requirements. Functional testing is a way of checking software to ensure that it has all the required functionality specified within its functional requirements.

Functional testing types include:

  • Unit testing
  • Integration testing
  • System testing
  • Sanity testing
  • Smoke testing
  • Interface testing
  • Regression testing
  • Beta/Acceptance testing

Non-functional testing is defined as a type of software testing that checks the non-functional aspects (performance, usability, reliability, etc.) of a software application. It is designed to test the readiness of a system against non-functional parameters, which are never addressed by functional testing.

Non-functional testing types include:

  • Performance Testing
  • Load testing
  • Stress testing
  • Volume testing
  • Security testing
  • Compatibility testing
  • Install testing
  • Recovery testing
  • Reliability testing
  • Usability testing
  • Compliance testing
  • Localization testing

45 Different types of testing – explained

  1. Alpha Testing

It is the most common type of testing used in the software industry. The objective of this testing is to identify all possible issues or defects before releasing the product to the market or to the user. Alpha testing is carried out at the end of the software development phase but before Beta Testing, although minor design changes may still be made as a result of such testing. Alpha testing is conducted at the developer's site, and an in-house virtual user environment can be created for this type of testing.

  2. Acceptance Testing

An acceptance test is performed by the client and verifies whether the end-to-end flow of the system meets the business requirements and the needs of the end user. The client accepts the software only when all the features and functionalities work as expected. It is the last phase of testing, after which the software goes into production. This is also called User Acceptance Testing (UAT).

  3. Ad-hoc Testing

The name itself suggests that this testing is performed on an ad-hoc basis, i.e. with no reference to test cases and without any plan or documentation in place for this type of testing. The objective of this testing is to find defects and break the application by executing any flow of the application or any random functionality.

Ad-hoc testing is an informal way of finding defects and can be performed by anyone in the project. It is difficult to identify defects without a test case but sometimes it is possible that defects found during ad-hoc testing might not have been identified using existing test cases.

  4. Accessibility Testing

The aim of accessibility testing is to determine whether the software or application is accessible to people with disabilities, including users who are deaf, color blind, mentally disabled, blind or elderly, and other disabled groups. Various checks are performed, such as font size for the visually impaired, and color and contrast for color blindness.

  5. Beta Testing

Beta Testing is a formal type of software testing which is carried out by the customer. It is performed in Real Environment before releasing the product to the market for the actual end users. Beta testing is carried out to ensure that there are no major failures in the software or product and it satisfies the business requirements from an end-user perspective. Beta testing is successful when the customer accepts the software.

This testing is typically done by end-users or others. It is the final testing done before releasing an application for commercial purposes. Usually, the Beta version of the software or product released is limited to a certain number of users in a specific area, so the end users actually use the software and share their feedback with the company. The company then takes the necessary action before releasing the software worldwide.

  6. Back-end Testing

Whenever input or data is entered in the front-end application, it is stored in the database, and the testing of such a database is known as Database Testing or Back-end Testing. There are different databases, such as SQL Server, MySQL, and Oracle. Database testing involves testing of table structure, schema, stored procedures, data structure and so on.

In back-end testing the GUI is not involved; testers are connected directly to the database with proper access and can easily verify data by running a few queries on the database. Issues such as data loss, deadlock, and data corruption can be identified during back-end testing, and these issues are critical to fix before the system goes live in the production environment.

  7. Browser Compatibility Testing

It is a subtype of Compatibility Testing (which is explained below) and is performed by the testing team.

Browser Compatibility Testing is performed for web applications and ensures that the software can run with different combinations of browsers and operating systems. This type of testing also validates whether a web application runs on all versions of all browsers or not.

  8. Backward Compatibility Testing

This type of testing validates whether newly developed or updated software works well with older versions of the environment or not.

Backward Compatibility Testing checks whether the new version of the software works properly with file formats created by older versions of the software, and whether it works well with data tables, data files and data structures created by those older versions. If any software is updated, it should work well on top of the previous version of that software.

  9. Black Box Testing

Internal system design is not considered in this type of testing. Tests are based on the requirements and functionality.

Detailed information about the advantages, disadvantages, and types of Black box testing can be seen here.

  10. Boundary Value Testing

This type of testing checks the behavior of the application at the boundary level.

Boundary value Testing is performed for checking if defects exist at boundary values. Boundary value testing is used for testing a different range of numbers. There is an upper and lower boundary for each range and testing is performed on these boundary values.

If testing requires a test range of numbers from 1 to 500 then Boundary Value Testing is performed on values at 0, 1, 2, 499, 500 and 501.
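
As a rough sketch of this 1 to 500 example, the pytest-style tests below check the boundary values and their neighbours; accept_quantity is a hypothetical function under test that should accept values from 1 to 500 inclusive.

    import pytest

    def accept_quantity(value: int) -> bool:
        # Hypothetical function under test: valid quantities are 1-500 inclusive
        return 1 <= value <= 500

    @pytest.mark.parametrize("value, expected", [
        (0, False), (1, True), (2, True),        # lower boundary and its neighbours
        (499, True), (500, True), (501, False),  # upper boundary and its neighbours
    ])
    def test_boundary_values(value, expected):
        assert accept_quantity(value) is expected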

  11. Branch Testing

It is a type of white box testing and is carried out during unit testing. As the name suggests, the code is tested thoroughly by traversing every branch.

  12. Comparison Testing

Comparison of a product’s strength and weaknesses with its previous versions or other similar products is termed as Comparison Testing.

  13. Compatibility Testing

It is a testing type that validates how the software behaves and runs in different environments, on different web servers, hardware, and network configurations. Compatibility testing ensures that the software can run on different configurations, different databases, and different browsers and their versions. Compatibility testing is performed by the testing team.

  14. Component Testing

It is mostly performed by developers after the completion of unit testing. Component Testing involves testing of multiple functionalities as a single code and its objective is to identify if any defect exists after connecting those multiple functionalities with each other.

  15. End-to-End Testing

Similar to system testing, End-to-end testing involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

  16. Equivalence Partitioning

It is a testing technique and a type of Black Box Testing. In equivalence partitioning, the input values are divided into groups (partitions) and a few representative values are picked for testing, on the understanding that all values from the same group generate the same output. The aim of this testing is to remove redundant test cases within a specific group that generate the same output without revealing any additional defect.

Suppose an application accepts values between -10 and +10; using equivalence partitioning, the values picked for testing are zero, one positive value and one negative value. So the equivalence partitions for this testing are: -10 to -1, 0, and 1 to 10.
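
A minimal sketch of that -10 to +10 example is shown below: one representative value is tested per partition instead of every possible input. handle_input is a hypothetical function under test, and the partitions and expected outputs are illustrative.

    def handle_input(value: int) -> str:
        # Hypothetical function under test
        if -10 <= value <= -1:
            return "negative"
        if value == 0:
            return "zero"
        if 1 <= value <= 10:
            return "positive"
        return "invalid"

    # One representative per partition instead of all 21 valid values
    representatives = {-5: "negative", 0: "zero", 7: "positive", 42: "invalid"}
    for value, expected in representatives.items():
        assert handle_input(value) == expected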

  17. Example Testing

It means real-time testing. Example testing includes real-time scenarios and also involves scenarios based on the experience of the testers.

  18. Exploratory Testing

Exploratory Testing is informal testing performed by the testing team. The objective of this testing is to explore the application and look for defects that exist in it. Sometimes a major defect discovered during this testing can even cause system failure.

During exploratory testing, it is advisable to keep track of what flow you have tested and what activity you did before the start of the specific flow.

An exploratory testing technique is performed without documentation and test cases.

  19. Functional Testing

This type of testing ignores the internal parts and focuses only on the output to check if it is as per the requirement or not. It is a Black-box type testing geared to the functional requirements of an application. For detailed information about Functional Testing click here.

  20. Graphical User Interface (GUI) Testing

The objective of this GUI testing is to validate the GUI as per the business requirement. The expected GUI of the application is mentioned in the Detailed Design Document and GUI mockup screens.

The GUI testing includes the size of the buttons and input field present on the screen, alignment of all text, tables and content in the tables.

It also validates the menus of the application; after selecting different menus and menu items, it validates that the page does not fluctuate and that the alignment remains the same after hovering the mouse over the menu or sub-menu.

  21. Gorilla Testing

Gorilla Testing is a testing type performed by a tester, and sometimes by the developer as well. In Gorilla Testing, one module, or the functionality in that module, is tested thoroughly and heavily. The objective of this testing is to check the robustness of the application.

  22. Happy Path Testing

The objective of Happy Path Testing is to test an application successfully on a positive flow. It does not look for negative or error conditions. The focus is only on the valid and positive inputs through which application generates the expected output.

  23. Incremental Integration Testing

Incremental Integration Testing is a bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to be tested separately. This is done by programmers or by testers.

  24. Install/Uninstall Testing

Installation and uninstallation testing is done on full, partial, or upgrade install/uninstall processes on different operating systems under different hardware or software environment.

  25. Integration Testing

Testing of all integrated modules to verify the combined functionality after integration is termed as Integration Testing. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

  26. Load Testing

It is a type of non-functional testing, and the objective of Load Testing is to check how much load, or what maximum workload, a system can handle without any performance degradation.

Load testing helps to find the maximum capacity of the system under specific load and any issues that cause the software performance degradation. Load testing is performed using tools like JMeter, LoadRunner, WebLoad, Silk performer etc.
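
While dedicated tools such as JMeter or LoadRunner are the norm, the basic idea can be sketched in a few lines: fire many concurrent requests at an endpoint and record the response times. The URL, request count and concurrency below are illustrative placeholders.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://example.com/"   # placeholder endpoint for illustration

    def timed_request(_):
        start = time.perf_counter()
        with urlopen(URL) as response:
            response.read()
        return time.perf_counter() - start

    # 200 requests in total, with at most 50 in flight at any one time
    with ThreadPoolExecutor(max_workers=50) as pool:
        durations = list(pool.map(timed_request, range(200)))

    print(f"max response time: {max(durations):.2f}s, "
          f"average: {sum(durations) / len(durations):.2f}s")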

  27. Monkey Testing

Monkey testing is carried out by a tester who enters random inputs and values without any knowledge or understanding of the application, much as a monkey might. The objective of Monkey Testing is to check whether an application or system crashes when random input values/data are provided. Monkey Testing is performed randomly, no test cases are scripted, and it is not necessary to be aware of the full functionality of the system.

  28. Mutation Testing

Mutation Testing is a type of white box testing in which the source code of a program is changed and it is verified whether the existing test cases can identify these defects in the system. The change in the program source code is kept very small so that it does not impact the entire application; only the specific area is affected, and the related test cases should be able to identify those errors in the system.
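
The sketch below shows the idea in miniature: one operator in the source is flipped, and an existing test is expected to fail against (or "kill") the mutant. Real mutation-testing tools such as mutmut automate this over a whole codebase; the function and test here are illustrative.

    def apply_discount(price: float, percent: float) -> float:
        return price - price * percent / 100        # original implementation

    def apply_discount_mutant(price: float, percent: float) -> float:
        return price + price * percent / 100        # mutant: '-' flipped to '+'

    def existing_test(func) -> bool:
        # The existing unit test: a 10% discount on 200.0 should give 180.0
        return func(200.0, 10.0) == 180.0

    assert existing_test(apply_discount) is True          # passes on the original
    assert existing_test(apply_discount_mutant) is False  # the test kills the mutant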

  29. Negative Testing

Testers adopt an "attitude to break" mindset and use negative testing to validate whether the system or application breaks. The negative testing technique is performed using incorrect data, invalid data or invalid input. It validates that the system throws an error for invalid input and behaves as expected.

  30. Non-Functional Testing

It is a type of testing for which every organisation usually has a separate team, commonly called the Non-Functional Test (NFT) team or Performance team.

Non-functional testing involves testing of non-functional requirements such as Load Testing, Stress Testing, Security, Volume, Recovery Testing, etc. The objective of NFT testing is to ensure that the response time of the software or application is quick enough as per the business requirement.

It should not take much time to load any page or system, and the system should hold up during peak load.

  31. Performance Testing

This term is often used interchangeably with ‘stress’ and ‘load’ testing. Performance Testing is done to check whether the system meets the performance requirements. Different performance and load tools are used to do this testing.

  32. Recovery Testing

It is a type of testing which validates how well the application or system recovers from crashes or disasters.

Recovery testing determines whether the system is able to continue operating after a disaster. Assume that the application is receiving data through the network cable and suddenly that cable is unplugged. Some time later, the cable is plugged back in; the system should then start receiving data from where it lost the connection when the cable was unplugged.

  33. Regression Testing

Testing an application as a whole after a modification to any module or functionality is termed Regression Testing. It is difficult to cover the whole system in regression testing, so automation testing tools are typically used for these types of testing.

  34. Risk-Based Testing (RBT)

In Risk Based Testing, the functionalities or requirements are tested based on their priority. Risk-based testing includes testing of highly critical functionality, which has the highest impact on business and in which the probability of failure is very high. The priority decision is based on the business need, so once priority is set for all functionalities then high priority functionality or test cases are executed first followed by medium and then low priority functionalities.

The low priority functionality may or may not be tested, based on the available time. Risk-based testing is carried out when there is insufficient time available to test the entire software and the software needs to be delivered on time without any delay. This approach is followed only with the discussion and approval of the client and senior management of the organisation.

  35. Sanity Testing

Sanity Testing is done to determine whether a new software version is performing well enough to accept it for a major testing effort. If an application is crashing on initial use, then the system is not stable enough for further testing, and the build is sent back to be fixed.

  36. Security Testing

It is a type of testing performed by a specialised team of testers. A system can be penetrated through many different hacking techniques.

Security Testing is done to check how secure the software, application, or website is from internal and external threats. This testing includes how well the software is protected from malicious programs and viruses, and how secure and strong the authorisation and authentication processes are.

It also checks how the software behaves under hacker attacks and malicious programs, and how data security is maintained after such an attack.

  37. Smoke Testing

Whenever a new build is provided by the development team, the software testing team validates the build and ensures that no major issue exists. The testing team ensures that the build is stable, so that a detailed level of testing can be carried out further. Smoke Testing checks that no show-stopper defect exists in the build which would prevent the testing team from testing the application in detail.

If testers find that a major critical functionality is broken at this initial stage, the testing team can reject the build and inform the development team accordingly. Smoke Testing is carried out before any detailed functional or regression testing.

  38. Static Testing

Static Testing is a type of testing that is carried out without executing any code. It is performed on the documentation during the testing phase and involves reviews, walkthroughs, and inspection of the project deliverables. Static testing does not execute the code; instead, the code syntax and naming conventions are checked.

Static testing is also applicable to test cases, test plans and design documents. It is worthwhile for the testing team to perform static testing, as defects identified during this type of testing are cost-effective to fix from a project perspective.

  39. Stress Testing

This testing is done when a system is stressed beyond its specifications in order to check how and when it fails. It is performed under heavy load, for example by putting in data beyond storage capacity, running complex database queries, or providing continuous input to the system or database.

  40. System Testing

Under System Testing technique, the entire system is tested as per the requirements. It is a Black-box type testing that is based on overall requirement specifications and covers all the combined parts of a system.

  41. Unit Testing

Testing an individual software component or module is termed as Unit Testing. It is typically done by the programmer and not by testers, as it requires a detailed knowledge of the internal program design and code. It may also require developing test driver modules or test harnesses.
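
A minimal sketch of a unit test for a single function is shown below, using the standard unittest library; word_count is a hypothetical module-level function under test.

    import unittest

    def word_count(text: str) -> int:
        # Hypothetical unit under test: count whitespace-separated words
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_counts_words(self):
            self.assertEqual(word_count("testing one two three"), 4)

        def test_empty_string(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()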

  42. Usability Testing

Under Usability Testing, a user-friendliness check is performed. The application flow is tested to see whether a new user can understand the application easily, and whether proper help is documented in case a user gets stuck at any point. Basically, system navigation is checked in this testing.

  43. Vulnerability Testing

Testing that involves identifying weaknesses in the software, hardware and network is known as Vulnerability Testing. A hacker or malicious program can take control of the system if it is vulnerable to such attacks, viruses and worms.

So it is necessary to put systems through Vulnerability Testing before production. It may identify critical defects and security flaws.

  44. Volume Testing

Volume testing is a type of non-functional testing performed by the performance testing team.

The software or application is subjected to a huge amount of data, and Volume Testing checks the system behaviour and response time of the application when it comes across such a high volume of data. This high volume of data may impact the system's performance and the speed of processing.

  45. White Box Testing

White Box Testing is based on knowledge of the internal logic of an application's code.

It is also known as Glass Box Testing. The internal workings of the software and code should be known in order to perform this type of testing. Tests are based on coverage of code statements, branches, paths, conditions, etc.

Artificial Intelligence Capabilities

AI is one of the most talked-about technologies today. For business, it introduces capabilities that innovative business and technology leaders can utilise to add new dimensions and abilities to service and product design and delivery.

Unfortunately, a lot of the real business value is locked up behind the terminology hype, inflated expectations and alarmist warnings of machines taking control.

It is impossible to get value from something that is not understood. So let's cut through the hype and focus on understanding AI's objectives and the key capabilities that this exciting technology enables.

There are many definitions of AI as discussed in the blog post “What is Artificial Intelligence: Definitions“.

Keeping it simple: “AI is using computers to do things that normally would have required human intelligence.” With this definition in mind, there are basically three things that AI is aiming to achieve.

3 AI Objectives

  • Capturing Information
  • Determine what is happening
  • Understand why it is happening

Let's use an example to demonstrate this…

As humans we are constantly gathering data through our senses, which is converted by our brain into information that is interpreted for understanding and potential action. You can, for example, identify an object through sight, turn it into information and identify the object instantly as, say, a lion. In conjunction, additional data associated with the object at the present time – for example, the lion is running after a person yelling for help – enables us to identify danger and take immediate action…

For a machine, this process is very complex and requires large amounts of data, programming/training and processing power. Today, technology is so advanced that small computers like smartphones can capture a photo, identify a face and link it to a name. This is achieved not just through the power of the smartphone but through the capabilities of AI, made available through services like Facebook and supported by an IT platform including a fast internet connection, cloud computing power and storage.

To determine what is happening the machine might use Natural Language Understanding (NLU) to extract the words from a sound file and try to determine meaning or intent, hence working out that the person is running away from a lion and shouting for you to run away as well.

Why the lion is chasing and why the person is running away, is not known by the machine. Although the machine can capture information and determine what is happening, it does not understand why it is happening within full context – it is merely processing data. This reasoning ability, to bring understanding to a situation, is something that the human brain does very well.

Despite all the technological advancements, machines today can only achieve the first two of the three AI objectives. With this in mind, let's explore the eight AI capabilities relevant and ready for use today.

8 AI Capabilities

Figure: The 8 AI capabilities

  • Capturing Information
    • 1. Image Recognition
    • 2. Speech Recognition
    • 3. Data Search
    • 4. Data Patterns
  • Determine what is happening
    • 5. Language Understanding
    • 6. Thought/Decision Process
    • 7. Prediction
  • Understand why it is happening
    • 8. Understanding

1. Image Recognition

This is the capability for a machine to identify/recognise an image. It is based on Machine Learning and requires millions of images to train the machine, which demands a lot of storage and fast processing power.

2. Speech Recognition

The machine takes a sound file and encodes it into text.

3. Search

The machine identifies words or sentences which are matched with relevant content within a large amount of data. Once these word matches are found, they can trigger further AI capabilities.

4. Patterns

Machines can process and spot patterns in large amounts of data, which can be combinations of sound, image or text. This surpasses the capability of humans – literally seeing the wood for the trees.

5. Language Understanding

The AI capability to understand human language is called Natural Language Understanding or NLU.

6. Thought/Decision Processing

Knowledge Maps connect concepts (e.g. person, vehicle) with instances (e.g. John, BMW) and relationships (e.g. favourite vehicle). Varying the relationships by weight and/or probability of likelihood can fine-tune the system to make recommendations when interacted with. Knowledge Maps are not decision trees, as the entry point of interaction can be at any point within the knowledge map as long as a clear goal has been defined (e.g. What is John's favourite vehicle?).
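
A minimal sketch of this idea is shown below: the knowledge map is held as weighted relationship triples and queried for the goal "What is John's favourite vehicle?". The data and the simple highest-weight lookup are illustrative only.

    # (instance, relationship, instance, weight/likelihood) - illustrative data
    relationships = [
        ("John", "favourite vehicle", "BMW", 0.8),
        ("John", "favourite vehicle", "Ducati", 0.2),
        ("John", "is a", "person", 1.0),
        ("BMW", "is a", "vehicle", 1.0),
    ]

    def answer(subject: str, relation: str) -> str:
        # Collect candidate objects for the goal and pick the most likely one
        candidates = [(weight, obj) for s, r, obj, weight in relationships
                      if s == subject and r == relation]
        return max(candidates)[1]

    print(answer("John", "favourite vehicle"))   # -> BMW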

7. Prediction

Predictive analytics is not a new concept, and the AI prediction capability basically takes a view of historic data patterns and matches them with a new piece of data to predict a similar outcome based on the past.
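
As a rough illustration, the sketch below fits a simple trend to a handful of historic data points and projects it onto the next period; the figures are invented and numpy's polyfit stands in for a real predictive model.

    import numpy as np

    months = np.array([1, 2, 3, 4, 5, 6])
    sales = np.array([100, 120, 138, 160, 181, 199])   # illustrative historic pattern

    # Fit a straight line to the historic data (highest-degree coefficient first)
    slope, intercept = np.polyfit(months, sales, deg=1)

    next_month = 7
    predicted = slope * next_month + intercept
    print(f"predicted sales for month {next_month}: {predicted:.0f}")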

8. Understanding

Falling under the third objective of AI – understanding why it is happening – this capability is not currently commercially available.

To Conclude

By understanding the capabilities of AI, you can look beyond the hype, be realistic, and identify which AI capabilities are right for enhancing your business.

In a future blog post, we'll examine some real-life examples of how these AI capabilities can be used to bring business value.

Release Management as a Competitive Advantage

“Delivery focussed”, “Getting the job done”, “Results driven”, “The proof is in the pudding” – we are all familiar with these phrases and in Information Technology it means getting the solutions into operations through effective Release Management, quickly.

In an increasingly competitive market, where digital is enabling rapid change, time to market is king. Translated into IT terms: you must get your solution into production before the competition does, through an effective ability to do frequent releases. Frequent releases benefit teams because features can be validated earlier and bugs detected and resolved rapidly. The smaller iteration cycles provide flexibility, making adjustments to unforeseen scope changes easier and reducing the overall risk of change while rapidly enhancing stability and reliability in the production environment.

IT teams with well-governed, agile and robust release management practices have a significant competitive advantage. This advantage materialises through self-managed teams of highly skilled technologists who work collaboratively according to a team-defined release management process, enabled by continuous integration and continuous delivery (CI/CD), and continuously improved through constructive feedback loops and corrective actions.

The process of implementing such agile practices can be challenging, as building software becomes increasingly complex due to factors such as technical debt, growing legacy code, resource movements, globally distributed development teams, and the increasing number of platforms to be supported.

To realise this advantage, an organisation must first optimise its release management process and identify the most appropriate platform and release management tools.

Here are three well known trends that every technology team can use to optimise delivery:

1. Agile delivery practices – with automation at the core

So, you have adopted an agile delivery methodology and you’re having daily scrum meetings – but you know that is not enough. Sprint planning as well as review and retrospection are all essential elements for a successful release, but in order to gain substantial and meaningful deliverables within the time constraints of agile iterations, you need to invest in automation.

Automation brings measurable benefits to the delivery team: it reduces pressure on people by minimising human error, and it increases overall productivity and the quality of delivery into your production environment, which shows in key metrics like team velocity. Another benefit automation introduces is a consistent and repeatable process, enabling teams to scale easily while reducing errors and release times. Agile delivery practices (see "Executive Summary of 4 commonly used Agile Methodologies") all embrace and promote the use of automation across the delivery lifecycle, especially in build, test and deployment automation. Proper automation supports delivery teams in reducing the overhead of time-consuming repetitive tasks in configuration and testing, so they can focus on the core of customer-centric product/service development with quality built in. Also read "How to Innovate to stay Relevant" and "Agile Software Development – What Business Executives need to know" for further insight into Agile methodologies…

Example:

Code Repository (version Control) –> Automated Integration –> Automated Deployment of changes to Test Environments –> Platform & Environment Changes automated build into Testbed –> Automated Build Acceptance Tests –> Automated Release

When a software developer commits changes to version control, these changes automatically get integrated with the rest of the modules. Integrated assemblies are then automatically deployed to a test environment; changes to the platform or the environment get automatically built and deployed on the test bed. Next, build acceptance tests are automatically kicked off, which would include capacity, performance, and reliability tests. Developers and/or leads are notified only when something fails, so the focus remains on core development rather than on overhead activities. Of course, there will be some manual checkpoints that the release management team will have to pass in order to trigger the next phase, but each activity within this deployment pipeline can be more or less automated. As your software passes all quality checkpoints, product version releases are automatically pushed to the release repository, from which new versions can be pulled automatically by systems or downloaded by customers.
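
As a minimal sketch of that flow, the script below chains the pipeline stages and inserts one manual checkpoint before release. The stage names and the make commands are illustrative assumptions; in practice a CI server such as Jenkins or Bamboo would orchestrate these steps.

    import subprocess

    # Illustrative pipeline stages and commands; real pipelines live in the CI server
    PIPELINE = [
        ("integrate", "make build"),
        ("deploy-to-test", "make deploy ENV=test"),
        ("acceptance-tests", "make acceptance-tests"),   # capacity, performance, reliability
        ("release", "make release"),
    ]

    def run_stage(name: str, command: str) -> None:
        print(f"--- {name} ---")
        # check=True fails fast, so people are only involved when something breaks
        subprocess.run(command, shell=True, check=True)

    for name, command in PIPELINE:
        if name == "release":
            # Manual checkpoint before triggering the final phase
            if input("Promote build to the release repository? [y/N] ").lower() != "y":
                break
        run_stage(name, command)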

Example Technologies:

  • Build Automation:  Ant, Maven, Make
  • Continuous Integration: Jenkins, Cruise Control, Bamboo
  • Test Automation: Silk Test, EggPlant, Test Complete, Coded UI, Selenium, Postman
  • Continuous Deployment: Jenkins, Bamboo, Prism, Microsoft DevOps

2. Cloud platforms and Virtualisation as development and test environments

Today, most software products are built to support multiple platforms, be it operating systems, application servers, databases, or Internet browsers. Software development teams need to test their products in all of these environments in-house prior to releasing them to the market.

This presents the challenge of creating all of these environments as well as maintaining them. These challenges increase in complexity as development and test teams become more geographically distributed. In these circumstances, the use of cloud platforms and virtualisation helps, especially as these platforms have recently been widely adopted in all industries.

Automation on cloud and virtualised platforms enables delivery teams to rapidly spin environments up and down, optimising infrastructure utilisation in line with demand while also maintaining the version history of all supported platforms, just as we maintain code and configuration version history for our products. This flexibility optimises infrastructure utilisation and the delivery footprint as demand changes, bringing savings across the overall delivery life-cycle.

Example:

When a build and release engineer changes configurations for the target platform – the operating system, database, or application server settings – the whole platform can be built and a snapshot of it created and deployed to the relevant target platforms.

Virtualisation: The virtual machine (VM) is automatically provisioned from the snapshot of base operating system VM, appropriate configurations are deployed and the rest of the platform and application components are automatically deployed.

Cloud: Using a provider like Azure or AWS to deliver Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS), new configurations can be instantiated as a fresh environment instance and configured for development, testing, staging or production hosting. This is crucial for flexibility and productivity, as it takes minutes instead of weeks to adapt to configuration changes. With automation, the process becomes repeatable and quick, and it streamlines communication across the different teams within the Tech-hub.
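As a hedged illustration of the cloud case, the Python sketch below uses the AWS SDK (boto3) to launch a tagged environment instance from a pre-built platform image. The AMI ID, region, instance type and tag values are assumptions for illustration only; a real setup would more likely drive this through templated infrastructure-as-code:

```python
import boto3

# Assumed values for illustration only - replace with your own image and sizing.
BASE_IMAGE_AMI = "ami-0123456789abcdef0"   # snapshot/image of the base platform build
INSTANCE_TYPE = "t3.medium"

ec2 = boto3.client("ec2", region_name="eu-west-1")

def provision_environment(purpose: str) -> str:
    """Launch one instance from the base platform image and tag it by purpose."""
    response = ec2.run_instances(
        ImageId=BASE_IMAGE_AMI,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "environment", "Value": purpose}],
        }],
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Provisioned {purpose} environment: {instance_id}")
    return instance_id

if __name__ == "__main__":
    provision_environment("testing")
```

The same pattern applies in reverse (for example via terminate_instances) when an environment is no longer needed, which is what keeps infrastructure utilisation aligned with demand.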

3. Distributed version control systems

Distributed version control systems (DVCS), for example Git, Perforce or Mercurial, introduce flexibility for teams to collaborate at the code level. The fundamental design principle behind a DVCS is that each user keeps a self-contained repository with the complete version history on their local computer. There is no need for a privileged master repository, although most teams designate one as a best practice. DVCS allow developers to work offline and commit changes locally.

As developers complete their changes for an assigned story or feature set, they push their changes to the central repository as a release candidate. DVCS offers a fundamentally new way to collaborate, as developers can commit their changes frequently without disrupting the main codebase or trunk. This is useful when teams are exploring new ideas or experimenting, and it enables rapid team scaling with reduced disruption.

DVCS is a powerful enabler for teams that utilise an agile, feature-based branching strategy. This encourages development teams to continue to work on their features (branches) as they get ready, having fully tested their changes locally, to load them into the next release cycle. In this scenario, developers are able to work on and merge their feature branches into a local copy of the repository. Only after standard reviews and quality checks are the changes then merged into the main repository, as sketched below.
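The sketch below shows the handful of Git commands behind that workflow, wrapped in a small Python helper. The branch name, the “main” branch and the “origin” remote are illustrative assumptions, not a prescribed convention:

```python
import subprocess

def git(*args: str) -> None:
    """Run a git command and fail loudly if it does not succeed."""
    subprocess.run(["git", *args], check=True)

def deliver_feature(branch: str = "feature/checkout-redesign") -> None:
    # Work happens on a local feature branch - fully offline if needed.
    git("checkout", "-b", branch)
    git("add", "-A")
    git("commit", "-m", f"Implement {branch}")

    # Verify the merge against a local copy of the main line first...
    git("checkout", "main")
    git("merge", "--no-ff", branch)

    # ...then push the feature branch as a release candidate; the merge into
    # the shared main repository happens only after review and quality checks.
    git("push", "origin", branch)

if __name__ == "__main__":
    deliver_feature()
```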

To conclude

Adopting these three major trends in the delivery life-cycle enables an organisation to embed proper release management as a strategic competitive advantage. Implementing these best practices will obviously require strategic planning and an investment of time in the early phases of your project or team maturity journey, but it reduces the organisational and change management effort needed to get to market quicker.

Modular Operating Model for Strategy Agility

One of life’s real pleasures is riding a motorcycle. The sense of freedom when it is just you, the machine and the open road is something only a fellow enthusiast would truly understand. Inspired, I recently completed a hobby project building Lego set 42063. The building blocks of this Technic model construct the BMW R1200GS Adventure motorcycle, arguably the best all-rounder, adapted to handle all road conditions. The same building blocks can also be used to build a futuristic flying scooter, or shall I call it a speedster in true Star Wars style… While building the model I marvelled at the ingeniousness of the design and how the different components come together in a final product – fit for purpose today but easily adapted to be fit for the future.


This made me think about business agility – how can this modular approach be used within business? We know that SOA (Service Oriented Architecture) takes a modular approach to building adaptable software applications, and in the talk “Structure Technology for Success – using SOA” I explained a modular approach to designing a Service Orientated Organisation (SOO) that directly contributes to business success.

Recently I’ve also written about how to construct a business Operating Model that delivers. Such an operating model aligns the business operations with the needs of its customers, while providing the agility to continuously adapt to changes in the fast-changing technological ecosystem we live in. An Operating Model that delivers is fit for purpose today but easily adaptable to be fit for the future – in other words, a Modular Operating Model.

As the environment around a company changes rapidly, static operating models lack the agility to respond. Successful companies are customer centric and embrace continuous innovation to enhance the organisation’s ability to re-design its operations. This requires an Operating Model that incorporates the agility to be responsive to changes in business strategy and customer needs. A modular operating model enables agility in business operations with a design that can respond to change by defining standard building blocks and how to dynamically combine them. Modular blocks (with their specific operational complexity contained) simplify managing complexity. This reduces the time to produce a new operational outcome, irrespective of whether this is a new service, a product or just an efficiency improvement within an existing value chain.

An example of applying modular thinking to an operational delivery methodology is covered in the blog post “How to Innovate to stay Relevant”. By combining the core principles and benefits of three different delivery methodologies – Design Thinking, Lean Startup and Agile Scrum – as modular building blocks, a delivery methodology is constructed that ensures rapid delivery of innovation into customer-centric revenue channels while optimising the chances of success through continuous alignment with customer and market demand.

A modular operating model embeds operational agility through the ability to use, re-use, and plug and play different capabilities, processes and resources (building blocks) to easily produce new business outcomes without having to deal with the complexities already defined within the individual building blocks – just like a Lego set using the same set of standardised, pre-defined blocks to build completely different things. The focus is on re-using the blocks, not on the design of the blocks themselves. Of course a lot of thinking has gone into the design of the different building blocks, but by re-using the same block designs, the model design time is focussed on a new or different outcome and not on a component of an outcome.

Designing modular capabilities, processes and resources that are used to design operating models has benefits not just in efficiencies and savings through economies of scale, but also in a reduced time to market. These benefits are easier to accomplish in larger multi-divisional organisations with multiple operating models, or in organisations with complex operating models bringing together multiple organisations and locations, where the re-use of modular operating model blocks brings demonstrable efficiencies – but they are also achievable for smaller organisations and start-ups.

If you want an Operating Model that Delivers and is agile enough to adapt to the challenges introduced by new technologies and digital business models, ensure the Target Operating Model (TOM) design methodology focusses on modular thinking from the outset and throughout the design process.

renierbotha Ltd has a demonstrable track record of compiling and delivering visionary Target Operating Models.

Talk to us – we can help you with the Digital Transformation to align your business operations and business model to the modern customer’s expectations.

 

Also read…

An Operating Model that Delivers

In every organisation that I have worked with around the world – whether in London, Johannesburg, Sydney, Singapore, Dallas, Kuala Lumpur, Las Vegas, Nairobi or New York – there was always reference to a Target Operating Model (TOM) when business leaders spoke about business strategy and performance. Yes, the TOM – the ever-elusive state of euphoria when all business operations work together in harmony to deliver the business vision… sometime in the foreseeable future.

Most business transformation programmes are focussed on delivering a target operating model – transforming the business by introducing a new way of working that better aligns the business offering with its customers’ changing expectations. Millions in business change budgets have been invested in TOM design projects and thousands of people have worked on these TOM projects, of which some have delivered against the promise.

With the TOM as the defined deliverable – the targeted operational state and the outcome of the business transformation programme – it is very important that the designed TOM is actually fit for purpose. The TOM also has to lend itself to being easily adjustable in order to contribute to the agility of an organisation. The way the business operates must be able to adapt to an ever-changing, technology-driven world and the associated workforce. The quickly evolving digital world is probably the main catalyst for transformation in organisations today – read “The Digital Transformation Necessity” for further insights…

Operating Model (OM)

The Operating Model uses key inputs from the Business Model and Strategy.

The Business Model focuses on the business’ customers, the associated product and service offerings – how the organisation creates value for its clientele – and the commercial proposition. Within the business model, the business’ revenue streams and how those contribute to the value chain to generate profits are described. In other words, the Business Model envisages the What within the organisation.

Within the Business Strategy, the plan to achieve specific goals is defined, as well as the metrics required to measure how successfully these are achieved. The business goals are achieved through the daily actions defined within the Operating Model.

Typically an Operating Model takes the What from the Business Model, in conjunction with the business strategy, and defines the Why, What, How, Who and With. It is the way in which the business model and strategy are executed through the day-to-day business operations. Execution is key: no business can be successful by just having a business strategy; the execution of the operating model that delivers the business strategy is the operative ingredient of success.

In order to document and describe how an organisation functions, the Operating Model usually includes the business capabilities and associated processes, the products and/or services being delivered, the roles and responsibilities of people within the business and how these are organised and governed, the metrics defined to manage, monitor and control the performance of the organisation, and the underpinning Technology, Information Systems and Tools the business uses in delivering its services and/or products.

Analogy: A good analogy for the Operating Model is the engine of an F1 car. In 2016, Lewis Hamilton (arguably the fastest driver) in the Mercedes Silver Arrow (arguably the fastest car) did not win because of engine and reliability problems. Instead the World Championship was won by Nico Rosberg, who had a better performing engine over the whole season. Nico benefited from a better operating model – he had the processes, data, systems and the people (including himself) to win. The mechanical failures that Lewis suffered, mostly through no fault of his own, were the result of failures somewhere within his operating model.

Target Operating Model (TOM)

The Target Operating Model (TOM) is a future-state version of the Operating Model. To derive the TOM, the existing Operating Model is compared with the desired future state, keeping the key aspects of an operating model in mind: Why, What, How, Who and With. The TOM also covers two additional key aspects – the Where & When – defined within the transformation programme to evolve from the current to the future state.

The difference between the “as is” Operating Model and the “to be” Target Operating Model indicates the gap that the business must bridge in the execution of its Transformation Model/Strategy – the Where and When. Achieving the Target Operating Model usually requires a large transformation effort, executed as change and transformation programmes and projects.

ToBe (TOM) – AsIs (OM) = Transformation Model (TM)

Why >> Business Vision & Mission

What >> Business Model (Revenue channels through Products and Services – the Value Chain)

How >> Business Values & Processes & Metrics

Who >> Roles & Responsibilities (RACI)

With >> Tools, Technology and Information

Where & When >> Transformation Model/Strategy

Defining the TOM

A methodology to compile the Target Operating Model (TOM) is summarised by the three steps shown in the diagram below:

[Diagram: TOM Methodology]
Inputs to the methodology:

  • Business Model
  • Business Strategy
  • Current Operating Model
  • Formally documented information, processes, resource models, strategies, statistics, metrics…
  • Information gathered through interviews, meetings, workshops…

The methodology produces the TOM outputs:

  • Business capabilities and associated processes
  • Clearly defined and monetised catalogue of the products and/or services being delivered
  • Organisation structure indicating roles and responsibilities of people within the business and how these are organised and governed
  • Metrics specifically defined to manage, monitor and control the performance of the organisation
  • Underpinning Technology, Information Systems and Tools the business uses in delivering its services and/or products

The outputs from this methodology cover each key aspect needed for a TOM that will deliver on the desired business outcomes. Understanding these desired outcomes, and the associated goals and milestones to achieve them, is therefore a fundamental prerequisite in compiling a TOM.

To Conclude

An achievable Target Operating Model that delivers is dependent on the execution of an overall business transformation strategy that aligns the business’ vision, mission and strategy with a future desired state in which the business should function.

Part of the TOM is this Business Transformation Model that outlines the transformation programme plan, which functionally syncs the current with the future operating states. It also outlines the execution phases required to deliver the desired outcomes, in the right place at the right time, while having the agility to continuously adapt to changes.

Only if an organisation has a strategically aligned and agile Target Operating Model in place that can achieve this, is the business in a position to successfully navigate its journey to the benefits and value growth it desires.

renierbotha Ltd has a demonstrable track record of compiling and delivering visionary Target Operating Models.

If you know that your business has to transform to stay relevant – Get in touch!

 

Originally written by Renier Botha in 2016 when, as Managing Director, he was pivotal in delivering the TOM for Systems Powering Healthcare Ltd.

How to Innovate to stay Relevant

Staying relevant! The biggest challenge we all face – staying relevant within our market. Relevance to your customers is what keeps you in business.

With the world changing as rapidly as it does today, mainly due to the profound influence of technology on our lives, the expectations of the consumer are changing at pace. Consumers have access to an increasing array of choice, not just in how they spend their money but also in how they communicate and interact – change fuelled by a digital revolution. The last thing that anyone can afford, in this fast-paced race, is losing relevance – that will cost us customers, or worse…

Is what you are selling today adaptable to continuously changing ecosystems? Does your strategy reflect that agility? How can you ensure that your business stays relevant in the digital age? We have all heard about digital transformation as a necessity, but even then, how can you ensure that you are evolving as fast as your customers and staying relevant within your market?

A business with a culture of continuous evolution, aligning its products and services with the digitally driven customer, is the business that stays relevant. This is the kind of business that does not require a digital transformation to realign with customer demand to secure its future. A customer-centric focus and a culture of continuous evolution throughout the business value chain is what assures relevance. Looking at these businesses, their ability and agility to get innovation into production, rapidly, is a core success criterion.

Not having a strategy to stay relevant is a very high and real risk to business. Traditionally we deal with risk by asking “Why?”. For continuous improvement/evolution and agility, we should instead be asking “Why not?” and by that, introduce opportunities for pilots, prototypes, experimentation and proof of concepts. Use your people as an incubator for innovation.

Sure, you have an R&D team and you are continuously finding new ways to deliver your value proposition – but getting your innovative ideas into production is cumbersome, only to discover that the result is already aged and possibly obsolete in a year or two. R&D is expensive and time consuming, and there are no guarantees that your effort will result in a working product or desired service. Just because you have the ability to build something does not mean that you have to build it. Focusing scarce and expensive resources on the right initiatives makes sense, right? This is why many firms are shifting from a project-minded (short-term) approach to a longer-term, product-minded investment and management approach.

So, how do you remain customer centric, use your staff as incubators of innovation, select the ideas that will improve your market relevance and then rapidly develop those ideas into revenue earners while shifting to a product-minded investment approach?

You could combine Design Thinking with Lean Startup and Agile Delivery…

In 2016, I was attending the Gartner Symposium where Gartner brought these concepts together very well in this illustration:

[Illustration: Gartner – Design-Lean-Agile]

Instead of selecting and religiously following one specific delivery methodology, use the best of multiple worlds to get the optimum output through the innovation lifecycle.

[Diagram: Design-Lean-Agile 1]

Using Design Thinking (Empathise >> Define >> Ideate >> Prototype) puts the customer at the core of customer-centric innovation and product/service development. It starts by empathising with the customers and defining their most pressing issues and problems, before coming up with a variety of ideas that could potentially solve them. Each idea is considered before developing a prototype. This dramatically reduces the risk of innovation initiatives, by engaging with what people (the customers) really need and want before actually investing further in development.

Lean Startup focuses on achieving product–market fit by moving a Prototype or MVP (minimum viable product) through a cycle of Build >> Measure >> Learn. This ensures that thorough knowledge of the user of the product/service is gained through active and measurable engagement with the customer. Customer experience and feedback are captured and used to learn and adapt, resulting in an improved MVP, better aligned to the target market, after every cycle.

Finally, Agile Scrum, continuing the customer-centric theme, involves multiple stakeholders, especially users (customers), in every step of maturing the MVP into a product they will be happy to use. This engagement enhances transparency, which in turn grows the trust between the business (the Development Team) and the customers (users) who are vested in the product’s/service’s success. Through an iterative approach, new features and changes can be delivered quickly, on an accurate and predictable timeline, and according to stakeholders’ priorities. This continuous product/service evolution, with full stakeholder engagement, builds brand loyalty and ensures market relevance.

Looking at a typical innovation lifecycle, you can identify three distinct stages: Idea, Prototype/MVP (Minimum Viable Product) and Product. Each of these innovation stages is complemented by key value gained from one of the three delivery methodologies:

[Diagram: Design-Lean-Agile 2]

All of these methodologies engage the stakeholders (especially the customer and user) in continuous feedback loops, measuring progress and capturing feedback to adapt and continuously improve, so that maximum value creation is achieved.

No one wants to spend a lot of resource and time delivering something that adds little value and creates no impact. Using this innovation methodology and the associated tools, you will be building better products and services in the eyes of the user – and that is what matters. You will be actively building and unlocking the potential of your A-team, involving them in creating impact and value while cultivating a culture of continuous improvement.

The same methodology works very well for digital transformation programmes.

At the very least, you should be experimenting with these delivery approaches to find the sweet-spot methodology for you.

Experiment to stay relevant!

Let’s Talk – renierbotha.com – Are you looking to develop an innovation strategy to be more agile and stay relevant? Do you want to achieve your goals faster? Create better business value? Build strategies to improve growth?

We can help – make contact!

Read similar articles for further insight in our Blog.