opensight

Our approach to digitalization

At opensight, our approach and philosophy towards digitalization are anchored in one paradigm: every company is a software company.

Philippe Braxmeier

"Every company is a software company"

Companies are realizing that to compete and grow in a digital world, they must look, think, and act like software companies themselves. Most enterprises already collect and process data, which requires them to use, and often build, a comprehensive software stack.

Our approach to digitalization is anchored in the following three core principles:

Lean, Versatile and Reliable Tech Stack

Build a modular, scalable, portable cloud native tech stack that ensures stability while enabling quick pivots to new technologies or business needs.

Fast Innovation and Culture

DevOps principles: foster collaboration, automation, and continuous delivery to accelerate development cycles and improve product quality.

Data-Driven Decision Making

Harness observability technologies, data analytics and AI to drive informed decisions, optimize processes, and anticipate market shifts, ensuring adaptability and competitiveness.

Lean, Versatile and Reliable Tech Stack


At the core of our approach is the development of a lean, versatile and reliable tech stack.

  • Leanness and Clarity
    Each component — whether it's a software tool, framework, or cloud service — should serve a clearly defined purpose and integrate seamlessly into the overall system. Avoid "tool sprawl", a common problem in many companies that drives up costs and complexity.
  • Modularity
    Monolithic legacy applications, which cram many purposes into a single application, are not modular, are hard to integrate, and therefore pose a concentration risk for your business. Modularity allows individual components to be swapped or upgraded without jeopardizing the entire system. This creates not only flexibility but also cost-efficiency, enabling businesses to quickly adapt to new technologies or market demands without investing in a costly system overhaul.
  • Versatility, Scalability, Reliability
    A versatile tech stack adapts easily to new technologies and business needs. It should integrate with existing systems and infrastructure and support new features and functionality. The cloud is a great enabler here, allowing growing businesses to scale resources up and down on demand.
We believe that a modular, scalable, and portable cloud-native tech stack is essential for ensuring stability while enabling quick pivots to new technologies or business needs.

  • Configuration Management Database (CMDB)
    We maintain a comprehensive Configuration Management Database (CMDB) that serves as a centralized, dynamic repository for critical metadata. The CMDB catalogs IT assets, configurations, and relationships, providing real-time visibility and control over the digital ecosystem. This enables streamlined operations, faster troubleshooting, and informed decision-making, ensuring our architecture remains both resilient and agile in the face of change.

Configuration Management Database (CMDB)

In today's complex digital environments, organizations need complete visibility into their IT infrastructure to ensure reliable operations and support digital transformation initiatives. A Configuration Management Database (CMDB) serves as the single source of truth for all configuration items, their relationships, and dependencies.

Philippe Braxmeier

"A well-maintained CMDB is the foundation for effective IT service management, incident response, and change management. It's not just about tracking assets—it's about understanding how everything connects and impacts your business."

As organizations embrace cloud-native architectures, microservices, and distributed systems, the need for comprehensive configuration management becomes even more critical. A modern CMDB should provide:

  • Real-time Visibility: Track servers, applications, networks, and their interdependencies
  • Change Management: Understand the impact of changes across your infrastructure
  • Incident Response: Quickly identify affected systems and their relationships
  • Compliance & Governance: Maintain audit trails and ensure regulatory compliance
  • Automation Support: Enable infrastructure as code and automated deployments
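The relationship tracking described above can be sketched as a small dependency graph. The configuration items, field names, and `impacted_by` helper below are purely illustrative (a real CMDB product offers far richer data models and APIs); the point is that change-impact and incident questions reduce to graph traversal:

```python
from collections import deque

# Hypothetical, minimal CMDB sketch: configuration items (CIs) and their
# "depends on" relationships stored as an in-memory graph.
cis = {
    "web-frontend": {"type": "application"},
    "api-service":  {"type": "application"},
    "postgres-db":  {"type": "database"},
    "vm-prod-01":   {"type": "server"},
}

# edges: A depends on B
depends_on = {
    "web-frontend": ["api-service"],
    "api-service":  ["postgres-db", "vm-prod-01"],
    "postgres-db":  ["vm-prod-01"],
}

def impacted_by(ci: str) -> set:
    """Return all CIs that (transitively) depend on `ci` —
    i.e. what is affected if `ci` fails or is changed."""
    # invert the dependency graph
    dependents = {}
    for a, targets in depends_on.items():
        for b in targets:
            dependents.setdefault(b, []).append(a)
    seen, queue = set(), deque([ci])
    while queue:
        node = queue.popleft()
        for d in dependents.get(node, []):
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return seen

print(impacted_by("vm-prod-01"))  # everything running on that server
```

The same traversal answers both change-management ("what does this upgrade touch?") and incident-response ("which services are affected?") questions.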

Fast Innovation and Culture


A cornerstone of our approach to digitalization is the capability to deliver working software fast. We believe that the constant delivery of working software is the primary measure of progress. We adopt the three principles of DevOps:

  • 1. Constant Flow of Work
    Modern Application Architecture - Fast Release Cycles, Fast Time To Market
  • 2. Feedback Loops
    Site Reliability Engineering - Measuring Performance. Providing Stable and Reliable Software
  • 3. Constant Improvement
    Experimentation - Improvement, Innovation, Interaction

Our approach to digitalization is greatly influenced by Gene Kim's acclaimed book "The Phoenix Project," which we highly recommend.
The Phoenix Project - Gene Kim

1. Constant Flow of Work

Roman Hüsler

"The constant delivery of working software is the primary measure of progress. Modern application architectures facilitate swift release cycles and rapid time to market for your new product features."

How does your business transform through digitalization, and what can we learn from software companies? We've had the privilege of witnessing significant technological shifts over the years.

For software companies, the ability to swiftly respond to evolving customer needs and create innovative offerings is crucial. Ultimately, this necessitates achieving a delicate balance between rapid release cycles on one hand and maintaining a stable and reliable software operation on the other. Delivering ten new software releases per day while ensuring a consistent and dependable operational environment was unattainable with traditional software architectures. We need a software architecture that enables a constant "flow of work", essentially a pipeline for effortless and continuous integration of new features from development into production. Let's examine the transformations that have occurred in the past two decades concerning software architectures.

Era          | Development & Culture | Architecture  | Deployment            | Infrastructure
------------ | --------------------- | ------------- | --------------------- | --------------
Cloud Native | DevOps                | Microservices | Containers            | Hyperscaler
Cloud Ready  | Agile                 | N-Tier        | Server Virtualisation | Hosted IaaS
Legacy       | Waterfall             | Monolithic    | Physical Server       | Data Center

Data-Driven Decision Making

Systems Monitoring

Systems monitoring provides more than feedback on whether a system works: it can also deliver valuable business insights and reveal long-term trends. It is also a crucial part of every DevOps pipeline.

Roman Hüsler

"If you're blind to what's happening, you can't be reliable. Your monitoring system should address two questions: what's broken, and why?"

To monitor the health of distributed systems, we focus especially on the "four golden signals" described in the Google SRE handbook:

Latency

Latency measures the time it takes to service a request. This includes both the time to process the request and the time to return the response. Monitoring latency helps identify performance bottlenecks, user experience issues, and capacity problems. Key metrics include p50, p95, and p99 percentiles to understand both typical and worst-case performance scenarios.
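As a minimal illustration of the percentile metrics mentioned above, here is a nearest-rank percentile computation over a hypothetical list of request latencies. Production systems typically work from pre-aggregated histograms rather than raw samples:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value below which roughly p% of samples fall."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# hypothetical request latencies in milliseconds
latencies = [12, 15, 11, 14, 250, 13, 16, 12, 900, 14]

p50 = percentile(latencies, 50)   # typical request
p95 = percentile(latencies, 95)   # slow tail
p99 = percentile(latencies, 99)   # worst case

print(f"p50={p50}ms  p95={p95}ms  p99={p99}ms")
```

Note how the median (p50) hides the two slow outliers entirely, which is exactly why tail percentiles matter for user experience.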

Traffic

Traffic represents the demand placed on your system, typically measured in requests per second (RPS), queries per second (QPS), or concurrent users. Understanding traffic patterns helps with capacity planning, identifying usage trends, and detecting anomalies that might indicate issues or attacks. Traffic monitoring is crucial for autoscaling decisions and resource allocation.

Errors

Errors track the rate of requests that fail, including HTTP 5xx errors, timeouts, and other failure conditions. Monitoring error rates helps identify system failures, bugs, and infrastructure issues. It's important to distinguish between different types of errors (client vs server) and track error rates relative to total traffic to understand the impact on user experience.

Saturation

Saturation measures how "full" your service is, indicating resource utilization levels. This includes CPU usage, memory consumption, disk I/O, network bandwidth, and other resource constraints. Saturation monitoring helps predict when systems will become overloaded and enables proactive scaling. It's crucial for maintaining system performance and preventing cascading failures.
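The remaining three signals can be derived from a simple request log. The record layout, the capacity figure, and the window size below are assumptions made purely for illustration:

```python
# Toy sketch: deriving traffic, errors, and saturation from a request log.
requests = [
    # (timestamp_s, http_status)
    (0.1, 200), (0.4, 200), (0.9, 500),
    (1.2, 200), (1.5, 404), (1.8, 200),
    (2.3, 503), (2.7, 200),
]

window_s = 3.0
traffic_rps = len(requests) / window_s        # demand placed on the system

# server-side failures (5xx), measured relative to total traffic
server_errors = sum(1 for _, status in requests if status >= 500)
error_rate = server_errors / len(requests)

capacity_rps = 10.0                           # assumed service capacity
saturation = traffic_rps / capacity_rps       # 1.0 means "full"

print(f"traffic={traffic_rps:.2f} rps, errors={error_rate:.0%}, "
      f"saturation={saturation:.0%}")
```

Note that the 404 is deliberately excluded from the error rate: client errors and server errors are tracked separately, as the section above recommends.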

The Three Pillars of Observability

While monitoring tells us when something is wrong, observability helps us understand why it's happening. It's the ability to understand the internal state of a system by examining its outputs. Observability is built on three fundamental pillars that work together to provide comprehensive system insights.

Philippe Braxmeier

"Observability isn't just about collecting data—it's about asking the right questions. When you can trace a user's journey from frontend to database and back, you unlock insights that monitoring alone can never provide."

Modern distributed systems require more than just monitoring. Observability provides the depth and context needed to understand complex interactions, debug issues efficiently, and make data-driven decisions. Explore the three pillars below:

Metrics

Metrics are numerical measurements collected over time. They provide quantitative data about system performance, resource utilization, and business KPIs. Key metric types include counters (requests, errors), gauges (memory usage, active connections), and histograms (response times, request sizes). Metrics enable trend analysis, alerting, and capacity planning by providing aggregated views of system behavior.
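The three metric types named above can be sketched in a few lines. These toy classes are illustrative only; in practice you would use a metrics client library rather than rolling your own:

```python
class Counter:
    """Monotonically increasing value, e.g. total requests or errors."""
    def __init__(self):
        self.value = 0
    def inc(self, n=1):
        self.value += n

class Gauge:
    """Value that can go up and down, e.g. memory usage or active connections."""
    def __init__(self):
        self.value = 0
    def set(self, v):
        self.value = v

class Histogram:
    """Counts observations into buckets, e.g. response times."""
    def __init__(self, buckets):
        self.buckets = sorted(buckets)
        self.counts = [0] * (len(self.buckets) + 1)  # last slot = overflow
    def observe(self, v):
        for i, upper in enumerate(self.buckets):
            if v <= upper:
                self.counts[i] += 1
                return
        self.counts[-1] += 1

requests_total = Counter()
active_conns = Gauge()
latency_ms = Histogram(buckets=[10, 50, 100])

requests_total.inc()
active_conns.set(42)
for sample in (5, 30, 250):
    latency_ms.observe(sample)
```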

Logs

Logs are timestamped records of discrete events that occurred in your system. They provide detailed context about what happened, when it happened, and often why it happened. Structured logging with consistent formats enables powerful querying and analysis. Logs are essential for debugging, audit trails, and understanding user behavior patterns. They should include relevant context like user IDs, request IDs, and error details.
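A minimal sketch of structured JSON logging with Python's standard library, where the `user_id` and `request_id` fields stand in for the request context mentioned above:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as a single JSON object for easy querying."""
    def format(self, record):
        entry = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
        }
        # include structured context passed via the `extra=` argument
        for key in ("user_id", "request_id"):
            if hasattr(record, key):
                entry[key] = getattr(record, key)
        return json.dumps(entry)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("checkout failed", extra={"user_id": "u-123", "request_id": "r-789"})
```

Because every line is valid JSON with consistent field names, a log backend can filter by `request_id` to reconstruct everything that happened during one request.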

Traces

Traces show the path of requests as they flow through your distributed system. They connect related events across multiple services, databases, and external APIs. Distributed tracing reveals performance bottlenecks, dependency relationships, and failure points in complex architectures. Traces help answer questions like "Why is this request slow?" by showing exactly where time is spent across the entire request lifecycle.
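To make the idea concrete, here is a toy tracing sketch in which nested spans share a trace id and record their parent span and duration. Real systems would use a standard such as OpenTelemetry; all names below are illustrative:

```python
import contextvars
import time
import uuid
from contextlib import contextmanager

_current_span = contextvars.ContextVar("current_span", default=None)
finished_spans = []

@contextmanager
def span(name):
    """Open a span; nested spans inherit the trace id and link to their parent."""
    parent = _current_span.get()
    s = {
        "name": name,
        "trace_id": parent["trace_id"] if parent else uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex,
        "parent_id": parent["span_id"] if parent else None,
        "start": time.monotonic(),
    }
    token = _current_span.set(s)
    try:
        yield s
    finally:
        s["duration_s"] = time.monotonic() - s["start"]
        _current_span.reset(token)
        finished_spans.append(s)

# one request flowing through two "services"
with span("handle_request"):
    with span("query_database"):
        time.sleep(0.01)  # simulated database work
```

Walking the `parent_id` links of the finished spans reconstructs the request path, and the per-span durations show exactly where the time went — which is the question traces exist to answer.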

Observability in Practice

Implementing observability requires more than just collecting data—it requires thoughtful instrumentation, correlation between different data sources, and tools that enable exploration and analysis. The goal is to reduce mean time to detection (MTTD) and mean time to resolution (MTTR) by providing the context needed to quickly understand and resolve issues.