The Epiphany Moment of Euphoria in a Data Estate Development Project

In our technology-driven world, engineers pave the path forward, and there are moments of clarity and triumph that rival humanity’s greatest achievements. Learning from these achievements at a young age shapes our way of thinking and can be a source of inspiration that enhances the way we solve problems in our daily lives. For me, one of these profound inspirations stems from an engineering marvel: the Paul Sauer Bridge over the Storms River in Tsitsikamma, South Africa – which I first visited in 1981. This arch bridge, completed in 1956, represents more than just a physical structure. It embodies a visionary approach to problem-solving, where ingenuity, precision, and execution converge seamlessly.

The Paul Sauer Bridge across the Storms River Gorge in South Africa.

The bridge’s construction involved a bold method: engineers built two halves of the arch on opposite sides of the gorge. Each section was erected vertically and then carefully pivoted downward to meet perfectly in the middle, completing the 100m span, 120m above the river. This remarkable feat of engineering required foresight, meticulous planning, and flawless execution – a true epiphany moment of euphoria when the pieces fit perfectly.

Now, imagine applying this same philosophy to building data estate solutions. Like the bridge, these solutions must connect disparate sources, align complex processes, and culminate in a seamless result where data meets business insights.

This blog explores how to achieve this epiphany moment in data projects by drawing inspiration from this engineering triumph.

The Parallel Approach: Top-Down and Bottom-Up

Building a successful data estate solution, I believe, requires a dual approach, much like the simultaneous construction of both sides of the Storms River Bridge:

  1. Top-Down Approach:
    • Start by understanding the end goal: the reports, dashboards, and insights that your organization needs.
    • Focus on business requirements such as wireframe designs, data visualization strategies, and the decisions these insights will drive.
    • Use these goals to inform the types of data needed and the transformations required to derive meaningful insights.
  2. Bottom-Up Approach:
    • Begin at the source: identify and ingest the right raw data from various systems.
    • Ensure data quality through cleaning, validation, and enrichment.
    • Transform raw data into structured and aggregated datasets that are ready to be consumed by reports and dashboards.

These two streams work in parallel. The Top-Down approach ensures clarity of purpose, while the Bottom-Up approach ensures robust engineering. The magic happens when these two streams meet in the middle – where the transformed data aligns perfectly with reporting requirements, delivering actionable insights. This convergence is the epiphany moment of euphoria for every data team, validating the effort invested in discovery, planning, and execution.
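The meeting-in-the-middle test can be sketched in a few lines of Python. This is a minimal, hypothetical example (the field names are invented) that checks whether the Bottom-Up stream already supplies every field the Top-Down stream demands:

```python
def convergence_gaps(report_fields, dataset_columns):
    """Return the reporting fields the transformed dataset cannot yet supply."""
    return set(report_fields) - set(dataset_columns)

# Top-Down: fields the reports and dashboards need (hypothetical names).
required = {"customer_id", "region", "monthly_revenue", "churn_flag"}
# Bottom-Up: columns the pipeline currently produces.
produced = {"customer_id", "region", "monthly_revenue"}

gaps = convergence_gaps(required, produced)
if gaps:
    print(f"Streams have not met yet; missing: {sorted(gaps)}")
else:
    print("Epiphany moment: the two streams meet in the middle.")
```

An empty gap set is the convergence moment; a non-empty one is the early warning signal discussed in the next section.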

When the Epiphany Moment Isn’t Euphoric

While the convergence of Top-Down and Bottom-Up approaches can lead to an epiphany moment of euphoria, there are times when this anticipated triumph falls flat. One of the most common reasons is discovering that the business requirements cannot be met because the source data is insufficient, incomplete, or altogether unavailable. These moments can feel like a jarring reality check, but they also offer valuable lessons for navigating data challenges.

Why This Happens

  1. Incomplete Understanding of Data Requirements:
    • The Top-Down approach may not have fully accounted for the granular details of the data needed to fulfill reporting needs.
    • Assumptions about the availability or structure of the data might not align with reality.
  2. Data Silos and Accessibility Issues:
    • Critical data might reside in silos across different systems, inaccessible due to technical or organizational barriers.
    • Ownership disputes or lack of governance policies can delay access.
  3. Poor Data Quality:
    • Data from source systems may be incomplete, outdated, or inconsistent, requiring significant remediation before use.
    • Legacy systems might not produce data in a usable format.
  4. Shifting Requirements:
    • Business users may change their reporting needs mid-project, rendering the original data pipeline insufficient.

The Emotional and Practical Fallout

Discovering such issues mid-development can be disheartening:

  • Teams may feel a sense of frustration, as their hard work in data ingestion, transformation, and modeling seems wasted.
  • Deadlines may slip, and stakeholders may grow impatient, putting additional pressure on the team.
  • The alignment between business and technical teams might fracture as miscommunications come to light.

Turning Challenges into Opportunities

These moments, though disappointing, are an opportunity to re-evaluate and recalibrate your approach. Here are some strategies to address this scenario:

1. Acknowledge the Problem Early

  • Accept that this is part of the iterative process of data projects.
  • Communicate transparently with stakeholders, explaining the issue and proposing solutions.

2. Conduct a Gap Analysis

  • Assess the specific gaps between reporting requirements and available data.
  • Determine whether the gaps can be addressed through technical means (e.g., additional ETL work) or require changes to reporting expectations.

3. Explore Alternative Data Sources

  • Investigate whether other systems or third-party data sources can supplement the missing data.
  • Consider enriching the dataset with external or public data.

4. Refine the Requirements

  • Work with stakeholders to revisit the original reporting requirements.
  • Adjust expectations to align with available data while still delivering value.

5. Enhance Data Governance

  • Develop clear ownership, governance, and documentation practices for source data.
  • Regularly audit data quality and accessibility to prevent future bottlenecks.

6. Build for Scalability

  • Future-proof your data estate by designing modular pipelines that can easily integrate new sources.
  • Implement dynamic models that can adapt to changing business needs.

7. Learn and Document the Experience

  • Treat this as a learning opportunity. Document what went wrong and how it was resolved.
  • Use these insights to improve future project planning and execution.

The New Epiphany: A Pivot to Success

While these moments may not bring the euphoria of perfect alignment, they represent an alternative kind of epiphany: the realisation that challenges are a natural part of innovation. Overcoming these obstacles often leads to a more robust and adaptable solution, and the lessons learned can significantly enhance your team’s capabilities.

In the end, the goal isn’t perfection – it’s progress. By navigating the difficulties of misalignment, incomplete or unavailable data with resilience and creativity, you’ll lay the groundwork for future successes and, ultimately, more euphoric epiphanies to come.

Steps to Ensure Success in Data Projects

To reach this transformative moment, teams must adopt structured practices and adhere to principles that drive success. Here are the key steps:

1. Define Clear Objectives

  • Identify the core business problems you aim to solve with your data estate.
  • Engage stakeholders to define reporting and dashboard requirements.
  • Develop a roadmap that aligns with organisational goals.

2. Build a Strong Foundation

  • Invest in the right infrastructure for data ingestion, storage, and processing (e.g., cloud platforms, data lakes, or warehouses).
  • Ensure scalability and flexibility to accommodate future data needs.

3. Prioritize Data Governance

  • Implement data policies to maintain security, quality, and compliance.
  • Define roles and responsibilities for data stewardship.
  • Create a single source of truth to avoid duplication and errors.

4. Embrace Parallel Development

  • Top-Down: Start designing wireframes for reports and dashboards while defining the key metrics and KPIs.
  • Bottom-Up: Simultaneously ingest and clean data, applying transformations to prepare it for analysis.
  • Use agile methodologies to iterate and refine both streams in sync.

5. Leverage Automation

  • Automate data pipelines for faster, more reliable ingestion and transformation.
  • Use tools like ETL frameworks, metadata management platforms, and workflow orchestrators.
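As an illustration of what such tools automate, the ingest–clean–transform flow can be sketched as plain Python functions chained together. All stage logic here is hypothetical stub code, not a real ETL framework:

```python
def ingest(source):
    """Pull raw records from a source system (stubbed: source is a list)."""
    return list(source)

def clean(records):
    """Drop records that fail basic validation (here: a missing 'id')."""
    return [r for r in records if r.get("id") is not None]

def transform(records):
    """Shape cleaned records for reporting (here: keep id and amount only)."""
    return [{"id": r["id"], "amount": r.get("amount", 0)} for r in records]

def run_pipeline(source):
    """Run the stages in order; each stage's output feeds the next."""
    return transform(clean(ingest(source)))

raw = [{"id": 1, "amount": 100}, {"id": None, "amount": 50}, {"id": 2}]
print(run_pipeline(raw))  # only records with an id survive
```

A workflow orchestrator essentially formalises this chaining, adding scheduling, retries, and monitoring around each stage.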

6. Foster Collaboration

  • Establish a culture of collaboration between business users, analysts, and engineers.
  • Encourage open communication to resolve misalignments early in the development cycle.

7. Test Early and Often

  • Validate data accuracy, completeness, and consistency before consumption.
  • Conduct user acceptance testing (UAT) to ensure the final reports meet business expectations.
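Validation checks of this kind can run as simple assertions in the pipeline before data reaches any dashboard. The rules below are hypothetical examples, not a complete quality framework:

```python
def check_completeness(rows, required_fields):
    """Return rows missing a non-null value for any required field."""
    return [r for r in rows if any(r.get(f) is None for f in required_fields)]

def check_consistency(rows):
    """Example business rule: an amount, when present, must not be negative."""
    return [r for r in rows if r.get("amount") is not None and r["amount"] < 0]

rows = [
    {"id": 1, "amount": 120.0},
    {"id": 2, "amount": -5.0},   # inconsistent
    {"id": 3, "amount": None},   # incomplete
]
incomplete = check_completeness(rows, required_fields=["id", "amount"])
inconsistent = check_consistency(rows)
print(f"{len(incomplete)} incomplete, {len(inconsistent)} inconsistent rows")
# → 1 incomplete, 1 inconsistent rows
```

Running such checks on every load surfaces data problems while they are still cheap to fix, rather than during UAT.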

8. Monitor and Optimize

  • After deployment, monitor the performance of your data estate.
  • Optimize processes for faster querying, better visualization, and improved user experience.

Most Importantly – do not forget that the true driving force behind technological progress lies not just in innovation but in the people who bring it to life. Investing in the right individuals and cultivating a strong, capable team is paramount. A team of skilled, passionate, and collaborative professionals forms the backbone of any successful venture, ensuring that ideas are transformed into impactful solutions. By fostering an environment where talent can thrive – through mentorship, continuous learning, and shared vision – organisations empower their teams to tackle complex challenges with confidence and creativity. After all, even the most groundbreaking technologies are only as powerful as the minds and hands that create and refine them.

Conclusion: Turning Vision into Reality

The Storms River Bridge stands as a symbol of human achievement, blending design foresight with engineering excellence. It teaches us that innovation requires foresight, collaboration, and meticulous execution. Similarly, building a successful data estate solution is not just about connecting systems or transforming data – it’s about creating a seamless convergence where insights meet business needs. By adopting a Top-Down and Bottom-Up approach, teams can navigate the complexities of data projects, aligning technical execution with business needs.

When the two streams meet – when your transformed data delivers perfectly to your reporting requirements – you’ll experience your own epiphany moment of euphoria. It’s a testament to the power of collaboration, innovation, and relentless dedication to excellence.

In both engineering and technology, the most inspiring achievements stem from the ability to transform vision into reality.

The journey isn’t always smooth. Challenges like incomplete data, shifting requirements, or unforeseen obstacles can test our resilience. However, these moments are an opportunity to grow, recalibrate, and innovate further. By adopting structured practices, fostering collaboration, and investing in the right people, organizations can navigate these challenges effectively.

Ultimately, the epiphany moment in data estate development is not just about achieving alignment, it’s about the collective people effort, learning, and perseverance that make it possible. With a clear vision, a strong foundation, and a committed team, you can create solutions that drive success and innovation, ensuring that every challenge becomes a stepping stone toward greater triumphs.

C4 Architecture Model – Detailed Explanation

The C4 model, developed by Simon Brown, is a framework for visualizing software architecture at various levels of detail. It emphasizes the use of hierarchical diagrams to represent different aspects and views of a system, providing a comprehensive understanding for various stakeholders. The model’s name, C4, stands for Context, Containers, Components, and Code, each representing a different level of architectural abstraction.

Levels of the C4 Model

1. Context (Level 1)

Purpose: To provide a high-level overview of the system and its environment.

  • The System Context diagram is a high-level view of your software system.
  • It shows your software system as the central part, and any external systems and users that your system interacts with.
  • It should be technology-agnostic, focusing on people and software systems rather than low-level details.
  • The intended audience for the System Context Diagram is everybody. If you can show it to non-technical people and they are able to understand it, then you know you’re on the right track.

Key Elements:

  • System: The primary system under consideration.
  • External Systems: Other systems that the primary system interacts with.
  • Users: Human actors or roles that interact with the system.

Diagram Features:

  • Scope: Shows the scope and boundaries of the system within its environment.
  • Relationships: Illustrates relationships between the system, external systems, and users.
  • Simplification: Focuses on high-level interactions, ignoring internal details.

Example: An online banking system context diagram might show:

  • The banking system itself.
  • External systems like payment gateways, credit scoring agencies, and notification services.
  • Users such as customers, bank employees, and administrators.

More Extensive Detail:

  • Primary System: Represents the main application or service being documented.
  • Boundaries: Defines the limits of what the system covers.
  • Purpose: Describes the main functionality and goals of the system.
  • External Systems: Systems outside the primary system that interact with it.
  • Dependencies: Systems that the primary system relies on for specific functionalities (e.g., third-party APIs, external databases).
  • Interdependencies: Systems that rely on the primary system (e.g., partner applications).
  • Users: Different types of users who interact with the system.
  • Roles: Specific roles that users may have, such as Admin, Customer, Support Agent.
  • Interactions: The nature of interactions users have with the system (e.g., login, data entry, report generation).
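To make the Context level concrete, here is a hedged "diagram as code" sketch: plain Python dataclasses standing in for a real diagramming tool, populated with the banking example above. All names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ContextDiagram:
    """A Level 1 (System Context) diagram reduced to plain data."""
    system: str
    users: list = field(default_factory=list)
    external_systems: list = field(default_factory=list)
    relationships: list = field(default_factory=list)  # (source, description, target)

    def summary(self) -> str:
        """Render a text outline of the diagram."""
        lines = [f"System: {self.system}"]
        lines += [f"  User: {u}" for u in self.users]
        lines += [f"  External: {e}" for e in self.external_systems]
        lines += [f"  {s} --{d}--> {t}" for s, d, t in self.relationships]
        return "\n".join(lines)

banking = ContextDiagram(
    system="Online Banking System",
    users=["Customer", "Bank Employee", "Administrator"],
    external_systems=["Payment Gateway", "Credit Scoring Agency", "Notification Service"],
    relationships=[
        ("Customer", "views accounts via", "Online Banking System"),
        ("Online Banking System", "processes payments via", "Payment Gateway"),
    ],
)
print(banking.summary())
```

Keeping the diagram as data like this means it can be versioned alongside the code it describes, which is the core idea behind diagrams-as-code tooling.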

2. Containers (Level 2)

When you zoom into one software system, you get to the Container diagram.

Purpose: To break down the system into its major containers, showing their interactions.

  • Your software system comprises multiple running parts – containers.
  • A container can be a:
    • Web application
    • Single-page application
    • Database
    • File system
    • Object store
    • Message broker
  • You can look at a container as a deployment unit that executes code or stores data.
  • The Container diagram shows the high-level view of the software architecture and the major technology choices.
  • The Container diagram is intended for technical people inside and outside of the software development team:
    • Operations/support staff
    • Software architects
    • Developers

Key Elements:

  • Containers: Executable units or deployable artifacts (e.g., web applications, databases, microservices).
  • Interactions: Communication and data flow between containers and external systems.

Diagram Features:

  • Runtime Environment: Depicts the containers and their runtime environments.
  • Technology Choices: Shows the technology stacks and platforms used by each container.
  • Responsibilities: Describes the responsibilities of each container within the system.

Example: For the online banking system:

  • Containers could include a web application, a mobile application, a backend API, and a database.
  • The web application might interact with the backend API for business logic and the database for data storage.
  • The mobile application might use a different API optimized for mobile clients.

More Extensive Detail:

  • Web Application:
    • Technology Stack: Frontend framework (e.g., Angular, React), backend language (e.g., Node.js, Java).
    • Responsibilities: User interface, handling user requests, client-side validation.
  • Mobile Application:
    • Technology Stack: Native (e.g., Swift for iOS, Kotlin for Android) or cross-platform (e.g., React Native, Flutter).
    • Responsibilities: User interface, handling user interactions, offline capabilities.
  • Backend API:
    • Technology Stack: Server-side framework (e.g., Spring Boot, Express.js), programming language (e.g., Java, Node.js).
    • Responsibilities: Business logic, data processing, integrating with external services.
  • Database:
    • Technology Stack: Type of database (e.g., SQL, NoSQL), specific technology (e.g., PostgreSQL, MongoDB).
    • Responsibilities: Data storage, data retrieval, ensuring data consistency and integrity.

3. Components (Level 3)

Next you can zoom into an individual container to decompose it into its building blocks.

Purpose: To further decompose each container into its key components and their interactions.

  • The Component diagram shows the individual components that make up a container:
    • What each component is
    • The technology and implementation details
  • The Component diagram is intended for software architects and developers.

Key Elements:

  • Components: Logical units within a container, such as services, modules, libraries, or APIs.
  • Interactions: How these components interact within the container.

Diagram Features:

  • Internal Structure: Shows the internal structure and organization of each container.
  • Detailed Responsibilities: Describes the roles and responsibilities of each component.
  • Interaction Details: Illustrates the detailed interaction between components.

Example: For the backend API container of the online banking system:

  • Components might include an authentication service, an account management module, a transaction processing service, and a notification handler.
  • The authentication service handles user login and security.
  • The account management module deals with account-related operations.
  • The transaction processing service manages financial transactions.
  • The notification handler sends alerts and notifications to users.

More Extensive Detail:

  • Authentication Service:
    • Responsibilities: User authentication, token generation, session management.
    • Interactions: Interfaces with the user interface components, interacts with the database for user data.
  • Account Management Module:
    • Responsibilities: Managing user accounts, updating account information, retrieving account details.
    • Interactions: Interfaces with the authentication service for user validation, interacts with the transaction processing service.
  • Transaction Processing Service:
    • Responsibilities: Handling financial transactions, validating transactions, updating account balances.
    • Interactions: Interfaces with the account management module, interacts with external payment gateways.
  • Notification Handler:
    • Responsibilities: Sending notifications (e.g., emails, SMS) to users, managing notification templates.
    • Interactions: Interfaces with the transaction processing service to send transaction alerts, interacts with external notification services.

4. Code (Level 4)

Finally, you can zoom into each component to show how it is implemented with code, typically using a UML class diagram or an ER diagram.

Purpose: To provide detailed views of the codebase, focusing on specific components or classes.

  • This level is rarely used as it goes into too much technical detail for most use cases. However, there are supplementary diagrams that can be useful to fill in missing information by showcasing:
    • Sequence of events
    • Deployment information
    • How systems interact at a higher level
  • It’s only recommended for the most important or complex components.
  • Of course, the target audience are software architects and developers.

Key Elements:

  • Classes: Individual classes, methods, or functions within a component.
  • Relationships: Detailed relationships like inheritance, composition, method calls, or data flows.

Diagram Features:

  • Detailed Code Analysis: Offers a deep dive into the code structure and logic.
  • Code-Level Relationships: Illustrates how classes and methods interact at a code level.
  • Implementation Details: Shows specific implementation details and design patterns used.

Example: For the transaction processing service in the backend API container:

  • Classes might include Transaction, TransactionProcessor, Account, and NotificationService.
  • The TransactionProcessor class might have methods for initiating, validating, and completing transactions.
  • Relationships such as TransactionProcessor calling methods on the Account class to debit or credit funds.

More Extensive Detail:

  • Transaction Class:
    • Attributes: transactionId, amount, timestamp, status.
    • Methods: validate(), execute(), rollback().
    • Responsibilities: Representing a financial transaction, ensuring data integrity.
  • TransactionProcessor Class:
    • Attributes: transactionQueue, auditLog.
    • Methods: processTransaction(transaction), validateTransaction(transaction), completeTransaction(transaction).
    • Responsibilities: Processing transactions, managing transaction flow, logging transactions.
  • Account Class:
    • Attributes: accountId, balance, accountHolder.
    • Methods: debit(amount), credit(amount), getBalance().
    • Responsibilities: Managing account data, updating balances, providing account information.
  • NotificationService Class:
    • Attributes: notificationQueue, emailTemplate, smsTemplate.
    • Methods: sendEmailNotification(recipient, message), sendSMSNotification(recipient, message).
    • Responsibilities: Sending notifications to users, managing notification templates, handling notification queues.
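The four Level 4 classes described above can be sketched in Python. Class and method names follow the text; the signatures, statuses, and transfer logic are illustrative assumptions, not the definitive implementation:

```python
class Account:
    """Manages account data and balance updates."""
    def __init__(self, account_id, balance=0.0, account_holder=""):
        self.account_id = account_id
        self.balance = balance
        self.account_holder = account_holder

    def debit(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

    def credit(self, amount):
        self.balance += amount

    def get_balance(self):
        return self.balance

class Transaction:
    """Represents one transfer between two accounts."""
    def __init__(self, transaction_id, source, target, amount):
        self.transaction_id = transaction_id
        self.source = source
        self.target = target
        self.amount = amount
        self.status = "pending"

    def validate(self):
        return self.amount > 0 and self.source.get_balance() >= self.amount

class NotificationService:
    """Collects outbound alerts (stubbed: no real email/SMS here)."""
    def __init__(self):
        self.sent = []

    def send_email_notification(self, recipient, message):
        self.sent.append((recipient, message))

class TransactionProcessor:
    """Validates and executes transactions, then raises alerts."""
    def __init__(self, notifier):
        self.notifier = notifier
        self.audit_log = []

    def process_transaction(self, tx):
        if not tx.validate():
            tx.status = "rejected"
        else:
            tx.source.debit(tx.amount)   # TransactionProcessor calls Account
            tx.target.credit(tx.amount)
            tx.status = "completed"
        self.audit_log.append((tx.transaction_id, tx.status))
        self.notifier.send_email_notification(
            tx.source.account_holder,
            f"Transaction {tx.transaction_id}: {tx.status}")
        return tx.status
```

Note how the code mirrors the relationships in the diagram: `TransactionProcessor` calls methods on `Account` to debit and credit funds, and calls `NotificationService` to send transaction alerts.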

Benefits of the C4 Model

  • Clarity and Focus:
    • Provides a clear separation of concerns by breaking down the system into different levels of abstraction.
    • Each diagram focuses on a specific aspect, avoiding information overload.
  • Consistency and Standardization:
    • Offers a standardized approach to documenting architecture, making it easier to maintain consistency across diagrams.
    • Facilitates comparison and review of different systems using the same visual language.
  • Enhanced Communication:
    • Improves communication within development teams and with external stakeholders by providing clear, concise, and visually appealing diagrams.
    • Helps in onboarding new team members by offering an easy-to-understand representation of the system.
  • Comprehensive Documentation:
    • Ensures comprehensive documentation of the system architecture, covering different levels of detail.
    • Supports various documentation needs, from high-level overviews to detailed technical specifications.

Practical Usage of the C4 Model

  • Starting with Context:
    • Begin with a high-level context diagram to understand the system’s scope, external interactions, and primary users.
    • Use this diagram to set the stage for more detailed diagrams.
  • Defining Containers:
    • Break down the system into its major containers, showing how they interact and are deployed.
    • Highlight the technology choices and responsibilities of each container.
  • Detailing Components:
    • For each container, create a component diagram to illustrate the internal structure and interactions.
    • Focus on how functionality is divided among components and how they collaborate.
  • Exploring Code:
    • If needed, delve into the code level for specific components to provide detailed documentation and analysis.
    • Use class or sequence diagrams to show detailed code-level relationships and logic.

Example Scenario: Online Banking System

Context Diagram:

  • System: Online Banking System
  • External Systems: Payment Gateway, Credit Scoring Agency, Notification Service
  • Users: Customers, Bank Employees, Administrators
  • Description: Shows how customers interact with the banking system, which in turn interacts with external systems for payment processing, credit scoring, and notifications.

Containers Diagram:

  • Containers: Web Application, Mobile Application, Backend API, Database
  • Interactions: The web application and mobile application interact with the backend API. The backend API communicates with the database and external systems.
  • Technology Stack: The web application might be built with Angular, the mobile application with React Native, the backend API with Spring Boot, and the database with PostgreSQL.

Components Diagram:

  • Web Application Components: Authentication Service, User Dashboard, Transaction Module
  • Backend API Components: Authentication Service, Account Management Module, Transaction Processing Service, Notification Handler
  • Interactions: The Authentication Service in both the web application and backend API handles user authentication and security. The Transaction Module in the web application interacts with the Transaction Processing Service in the backend API.

Code Diagram:

  • Classes: Transaction, TransactionProcessor, Account, NotificationService
  • Methods: The TransactionProcessor class has methods for initiating, validating, and completing transactions. The NotificationService class has methods for sending notifications.
  • Relationships: The TransactionProcessor calls methods on the Account class to debit or credit funds. It also calls the NotificationService to send transaction alerts.

Conclusion

The C4 model is a powerful tool for visualising and documenting software architecture. By providing multiple levels of abstraction, it ensures that stakeholders at different levels of the organisation can understand the system. From high-level overviews to detailed code analysis, the C4 model facilitates clear communication, consistent documentation, and comprehensive understanding of complex software systems.

Solution Design & Architecture (SD&A) – Consider this…

When it comes to the design and architecture of enterprise level software solutions, what comes to mind?

What is Solution Design & Architecture:

Solution Design and Architecture (SD&A) is an in-depth IT scoping and review process that bridges the gap between your current IT environments and technologies and the customer and business needs, in order to deliver maximum return on investment. A proper design and architecture document also captures the approach, methodology, and steps required to deliver the solution.

SD&A actually spans two distinct disciplines. Solution Architects, with a balanced mix of technical and business skills, write up the technical design of an environment and work out how to achieve a solution from a technical perspective. Solution Designers put the solution together and price it up with assistance from the architect.

A solutions architect needs significant people and process skills. They are often in front of management, trying to explain a complex problem in layman’s terms. They have to find ways to say the same thing using different words for different audiences, and they also need to really understand the business’ processes in order to create a cohesive vision of a usable product.

A Solution Architect focuses on:

  • market opportunity
  • technology and requirements
  • business goals
  • budget
  • project timeline
  • resourcing
  • ROI
  • how technology can be used to solve a given business problem 
  • which framework, platform, or tech-stack can be used to create a solution 
  • how the application will look, what the modules will be, and how they interact with each other 
  • how things will scale for the future and how they will be maintained 
  • figuring out the risk in third-party frameworks/platforms 
  • finding a solution to a business problem

Ultimately, the Solution Architect is responsible for the vision that underlies the solution and the execution of that vision into the solution.

Here are some of the main responsibilities of a solutions architect:

  • Creates and leads the process of integrating IT systems for them to meet an organization’s requirements.
  • Conducts a system architecture evaluation and collaborates with project management and IT development teams to improve the architecture.
  • Evaluates project constraints to find alternatives, alleviate risks, and perform process re-engineering if required.
  • Updates stakeholders on the status of product development processes and budgets.
  • Notifies stakeholders about any issues connected to the architecture.
  • Fixes technical issues as they arise.
  • Analyses the business impact that certain technical choices may have on a client’s business processes.
  • Supervises and guides development teams.
  • Continuously researches emerging technologies and proposes changes to the existing architecture.

Solution Architecture Document:

The Solution Architecture provides an architectural description of a software solution and application. It describes the system and its features based on the technical aspects, business goals, and integration points. It is intended to address a solution to the business needs and provides the foundation/map of the solution requirements driving the software build scope.

High level Benefits of Solution Architecture:

  • Builds a comprehensive delivery approach
  • Stakeholder alignment
  • Ensures a longer solution lifespan in the market
  • Ensures business ROI
  • Optimises the delivery scope and associated effectiveness
  • Easier and more organised implementation
  • Provides a good understanding of the overall development environment
  • Problems and associated solutions can be foreseen

Some aspects to consider:

When doing an enterprise-level solution architecture, build, and deployment, a few key aspects come to mind that should be built into the solution by design and not as an afterthought…

  • Solution Architecture should a continuous part of the overall innovation delivery methodology – Solution Architecture is not a once-off exercise but is imbedded in the revolving SDLC. Cyclically evolve and deliver the solution with agility that can quickly adapt to business change with solution architecture forming the foundation (map and sanity check) before the next evolution cycle. Combine the best of several delivery methodologies to ensure optimum results in bringing the best innovation to revenue channels in the shortest possible timeframe. Read more on this subject here.
  • People – Ensure the right people with the appropriate knowledge, skills and abilities within the delivery team. Do not forget that people (users and customers) will use the system – not technologists.
  • Risk – as the solution architecture evolves, it will introduce technology and business risks that must be added to the project risk register and addressed to mitigation in accordance with the business risk appetite.
  • Choose the right software development tech stack that is well established and easily supported while scalable and powerful enough to deliver a feature rich solution that can be integrated into complex operational estates. Most tech-stacks has Solution Frameworks that outline key design options and decision when doing solution architecture. Choosing the right tech-stack is one of the most fundamental ways to future-proof the technology solution. You can read more on choosing the right tech stack here.
  • Modular approach – Use a service-oriented architecture (SOA) model to ensure the solution can be functionally scaled up and down to align with the features required, by using independently functioning modules of macro- and micro-services. Each service must be clearly defined with input, process and output parameters that align with the integration standard established for the platform. This SOA also assists with overall information security enhancements and with fault-finding in case something goes wrong. It also makes the developed platform better able to adapt to continuous business-environment and market changes with less overall impact and fewer system changes.
  • Customer data at the heart of a solution – Be clear on master vs. slave customer and data records, and ensure the needed integration between master and slave data within inter-connecting systems and platforms, with the necessary security applied to ensure privacy and data integrity. Establish a Single Customer and Data View (single version of the truth) from the design outset. Ensure personally identifiable data is handled within the solution according to the regulations outlined in the Data Protection Act and the recently introduced GDPR, together with data anonymisation and retention policy guidelines.
  • Platform Hosting & Infrastructure – What is the intended hosting framework? Will it be private or public cloud, running in AWS or Azure? These are all important decisions that can drastically impact the solution architecture.
  • Scalability – Who is the intended audience for the different modules and associated macro-services within the solution – how many concurrent users, transactions, customer sessions, reports, dashboards, data imports & processing, data transfers, etc.? As required, ensure the solution architecture accommodates the capability for the system to monitor usage and automatically scale horizontally (more processing/data (hardware) nodes running in parallel without dropping user sessions) and vertically (adding more power to a hardware node).
  • Information and Cyber Security – A tiered architecture ensures physical separation between the user- and customer-facing interfaces, the system logic and processing algorithms, and the storage components of a solution. Various security precautions, guidelines and best practices should be embedded within the software development by design. This should be articulated within the solution architecture, infrastructure and service software code. Penetration testing and the associated platform-hardening requirements should feed back into solution architecture enhancements as required.
  • Identity Management – Single Sign-On (SSO) user management and application roles to assign access to different modules, features and functionality for user groups and individuals.
  • Integration – Data exchange, multi-channel user interfaces, and the compute and storage components of the platform: how the different components inter-connect through secure connections with each other, with other applications and systems (API and gateway) within the business operations estate, and with external systems.
  • Customer Centric & Business Readiness – From a customer and end-user perspective, what is needed to ensure easy adoption (familiarity) and business ramp-up to establish a competent level of efficiency before the solution is deployed and goes live? UX, UI, UAT, automated regression testing, training material, FAQs, communication, etc.
  • Enterprise deployment – Involvement of all IT and business disciplines, i.e. business readiness (covered above), network, compute, cyber security, DevOps. Make sure non-functional DevOps-related requirements are covered in the same manner as the functional requirements.
  • Application Support – Involve the support team during the product build to ensure they have input into, and understanding of, the solution, so they can provide SLA-driven support to business and IT operations when the solution goes live.
  • Business Continuity – what is required from an IT infrastructure and platform/solution capability perspective to ensure the system is always available (online) to enable continuous business operations?
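To make the modular, service-oriented point above more concrete, here is a minimal sketch of what a clearly defined service contract could look like. The service name, fields and in-memory data are purely illustrative, not part of any specific platform:

```python
from dataclasses import dataclass

# Hypothetical contract for one independently deployable micro-service:
# the service declares its input, process and output explicitly, so it can
# be scaled, secured and fault-found in isolation from the rest of the estate.

@dataclass(frozen=True)
class CustomerLookupRequest:      # input parameters
    customer_id: str
    include_history: bool = False

@dataclass(frozen=True)
class CustomerLookupResponse:     # output parameters
    customer_id: str
    display_name: str
    status: str                   # e.g. "active" / "not-found"

def customer_lookup_service(req: CustomerLookupRequest) -> CustomerLookupResponse:
    """Process step: resolve a customer record (stubbed with in-memory data)."""
    records = {"C-001": "Ada Lovelace"}   # stand-in for the master data store
    name = records.get(req.customer_id)
    if name is None:
        return CustomerLookupResponse(req.customer_id, "", "not-found")
    return CustomerLookupResponse(req.customer_id, name, "active")

resp = customer_lookup_service(CustomerLookupRequest("C-001"))
print(resp.status, resp.display_name)
```

Because the request and response types are the whole interface, the implementation behind them can be replaced or scaled without touching its consumers.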
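The scalability point above – monitor usage and scale horizontally as load changes – can be sketched as a simple sizing rule. The thresholds, node capacity and bounds here are assumptions for illustration only:

```python
# Illustrative horizontal-scaling decision: given monitored usage, decide how
# many parallel nodes to run. All numbers below are assumed example values.

NODE_CAPACITY = 500           # concurrent sessions one node handles comfortably
MIN_NODES, MAX_NODES = 2, 16  # a floor for redundancy, a ceiling for cost

def target_nodes(concurrent_sessions: int) -> int:
    """Scale out/in to match the monitored load, within the configured bounds."""
    needed = -(-concurrent_sessions // NODE_CAPACITY)   # ceiling division
    return max(MIN_NODES, min(MAX_NODES, needed))

print(target_nodes(120))    # light load still keeps the redundancy floor
print(target_nodes(2600))   # 2600 sessions round up to 6 nodes
```

In practice this decision would live in the platform's autoscaler (e.g. a cloud provider's scaling policy), but the same capacity arithmetic drives it.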
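And for the identity-management point, a minimal sketch of how application roles can gate access to modules once SSO has authenticated the user. The role and module names are hypothetical:

```python
# Hypothetical role-to-module access map for an SSO-backed platform: the
# identity provider authenticates the user once, and application roles then
# decide which modules, features and functions that user may reach.

ROLE_ACCESS = {
    "analyst":  {"dashboards", "reports"},
    "engineer": {"dashboards", "reports", "data-imports"},
    "admin":    {"dashboards", "reports", "data-imports", "user-management"},
}

def can_access(roles: set[str], module: str) -> bool:
    """Grant access if any of the user's roles includes the module."""
    return any(module in ROLE_ACCESS.get(role, set()) for role in roles)

print(can_access({"analyst"}, "user-management"))   # analysts cannot manage users
print(can_access({"admin"}, "user-management"))
```

Keeping this mapping central (rather than scattered through each module) is what lets access be assigned to user groups and individuals in one place.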

Speak to Renier about your solution architecture requirements. With more than 20 years of enterprise technology product development experience, we can support your team toward delivery excellence.

Also Read: