Multiple Data Sources for the Same Data
It is quite common for different teams or sub-orgs to use different tools. For example, one team might manage its tasks in Jira, another in Asana, and a third in GitLab. Another example is a company with multiple instances of the same tool.
Normalization is very simple in these cases. The Faros AI connectors normalize the data upon ingestion, automatically mapping corresponding data types to the right place in our canonical schema.
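The idea of mapping source-specific records into one canonical shape can be sketched as follows. This is an illustrative Python sketch, not Faros AI's actual connector code or schema; the field names and canonical keys are assumptions.

```python
# Sketch: normalizing task records from two different trackers into one
# canonical shape so they can be queried uniformly. The canonical keys
# (uid, title, status, source) are hypothetical, not the real schema.

def normalize_jira(issue: dict) -> dict:
    return {
        "uid": issue["key"],
        "title": issue["summary"],
        "status": issue["status"],
        "source": "Jira",
    }

def normalize_asana(task: dict) -> dict:
    return {
        "uid": str(task["gid"]),
        "title": task["name"],
        # Asana tracks completion as a boolean rather than a status string.
        "status": "Done" if task["completed"] else "In Progress",
        "source": "Asana",
    }

tasks = [
    normalize_jira({"key": "ENG-42", "summary": "Fix login bug",
                    "status": "In Progress"}),
    normalize_asana({"gid": 1101, "name": "Draft launch plan",
                     "completed": True}),
]
# Both records now share the same keys, regardless of origin.
```

Once every connector emits the same shape, downstream charts and queries never need to know which tool a record came from.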
Same Tool, Different Workflows
Another common scenario is projects within a single tool, like Jira, that use different workflows, as expressed through their statuses. Faros AI automatically handles status transitions and provides the desired breakdowns based on the level of analysis:
- Each team's particular workflow will be represented in its metrics, so team members can understand their bottlenecks, learn, and affect change where needed.
- At the leadership level, where we’re zoomed out to team-of-teams or much larger groups, metrics will be abstracted to the common statuses of To Do, In Progress, and Done. This is sufficient to see the bottom-line metrics leaders care about, like task cycle time and amount of work in progress.
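The two levels of breakdown above can be sketched as a roll-up from team-specific statuses to the common categories. The per-team status mappings here are hypothetical examples, not real configuration.

```python
# Sketch: rolling team-specific workflow statuses up into the common
# To Do / In Progress / Done categories. Team names and statuses are
# made-up examples.

STATUS_CATEGORY = {
    "team-a": {"Backlog": "To Do", "Coding": "In Progress",
               "Review": "In Progress", "Shipped": "Done"},
    "team-b": {"New": "To Do", "Doing": "In Progress",
               "QA": "In Progress", "Closed": "Done"},
}

def category(team: str, status: str) -> str:
    """Map a team's own status to the common category leaders see."""
    return STATUS_CATEGORY[team][status]

# Team-level views keep the raw statuses (Backlog, Coding, Review, ...);
# leadership views aggregate on category(...) instead.
```

A team dashboard can still break cycle time down by "Coding" vs. "Review", while the org-level view only distinguishes the three common categories.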
Same Tool, Different Usage
Every team in your organization might be using Jira, but they’re using it very differently. Normalization is required to report effectively across this variance in tool usage.
The Faros AI approach is to be compatible with how people work today, especially at the very beginning of your program. To that end, the data normalization can be handled in a couple of ways:
- By building conditions into the chart queries. For example, let’s imagine you want to look at all high-priority unassigned issues. One team may use P0 and P1, while another uses Critical and High. A custom query can bake these different definitions into the chart.
- By using the platform’s data transform capabilities. For example, one group uses epics to track initiatives, while another group uses tags on tasks, and a third group uses a custom issue type. Faros AI transforms the data to the initiative portion of our schema, so you can then query all the initiatives in a single way.
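The two tactics above can be illustrated with a short sketch. The priority labels, field names, and helper functions here are illustrative assumptions, not a real Faros AI query or transform definition.

```python
# Sketch of the two normalization tactics described above.
# All field names and labels are hypothetical.

# (1) Bake differing priority definitions into one query condition:
# one team uses P0/P1, another uses Critical/High.
HIGH_PRIORITY = {"P0", "P1", "Critical", "High"}

def high_priority_unassigned(issues):
    return [i for i in issues
            if i["priority"] in HIGH_PRIORITY and i.get("assignee") is None]

# (2) Transform different initiative conventions into a single field:
# epics, tags on tasks, and a custom issue type all map to "initiative".
def initiative_of(issue):
    if issue.get("type") == "Epic":        # group 1: epics as initiatives
        return issue["title"]
    if issue.get("initiative_tag"):        # group 2: tags on tasks
        return issue["initiative_tag"]
    if issue.get("type") == "Initiative":  # group 3: custom issue type
        return issue["title"]
    return None
```

After the transform, "all initiatives" is a single question to ask of the data, no matter which convention each group follows in the tool.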
At some point, if the maintenance of the queries or transforms becomes too complex and error-prone, Faros AI recommends introducing a few standard options. Rather than forcing everyone to comply with a single behavior, you ask each team to select one of a handful of approved ways of doing things. This should cover the majority of team preferences while keeping the in-tool configurations manageable.
Different Goals for Different Teams
Let’s face it: Good and bad are relative.
Consider one product under active development and another product that is in maintenance mode. While you may want to measure the same things for these teams — for example, throughput — they will have different definitions of good. “Good” is also relative to a baseline, and their starting points may be wildly different.
The Faros AI approach is to make it easy for every role to understand how teams are performing relative to contextual goals.
- Teams can customize their thresholds for great, good, medium, and bad. These custom thresholds are then used in their personalized dashboards featuring team-level metrics and insights.
- Leaders will get a bird’s-eye view at the organizational level that takes all the personalized thresholds into account and visually identifies hotspots. It will also call out areas of improvement or decline.
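Per-team thresholds for the same metric can be sketched like this. The teams, metric, and numbers below are entirely made up; each team would tune its own bands relative to its baseline.

```python
# Sketch: the same metric (throughput, tasks completed per week) rated
# against per-team thresholds. All numbers are hypothetical.

THRESHOLDS = {
    # team: (great, good, medium) lower bounds; below "medium" is "bad"
    "active-dev":  (30, 20, 10),
    "maintenance": (8, 5, 2),
}

def rating(team: str, throughput: float) -> str:
    great, good, medium = THRESHOLDS[team]
    if throughput >= great:
        return "great"
    if throughput >= good:
        return "good"
    if throughput >= medium:
        return "medium"
    return "bad"

# The same throughput of 6 tasks/week is "good" for the maintenance
# team but "bad" for the active-development team.
```

A leadership roll-up would then count ratings rather than raw values, so hotspots surface without penalizing teams with different baselines.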
Note: Popular frameworks like DORA publish annual benchmarks, but the way the metric is defined might not be applicable to how you work. For example, Deployment Frequency measures how often you deploy code changes to production. If your organization has a major product release four times a year, strict adherence to that definition won’t give you the insight you seek. In this example, Faros AI recommends measuring deployment frequency to your pre-prod environments, with the caveat noted and internally understood.
Basic Data Validation
As Faros AI begins to ingest and normalize data, it will identify gaps in data collection and mapping. Through troubleshooting and cleanup, you can resolve these errors. In addition, charts can be tweaked, for example, to use median instead of average.
During this phase, you may also discover places where processes are not being followed internally and need to address the issue with the relevant teams.
- Faros AI produces a report that shows data gaps, including boards without an owner and teams without data.
- Faros AI highlights anomalies and outliers, primarily found in human-curated data, like tasks that have been open for hundreds of days or tasks that moved directly from To Do to Done.
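The two anomaly checks in the bullets above can be expressed as a simple scan. This is a hypothetical sketch over an assumed record shape, not the actual Faros AI validation logic.

```python
# Sketch: flagging the two anomaly types mentioned above over a list of
# task records. Field names (uid, status, opened, transitions) and the
# 200-day cutoff are illustrative assumptions.

from datetime import date

def anomalies(tasks, today, max_open_days=200):
    flagged = []
    for t in tasks:
        # Tasks that have been open far longer than expected.
        if t["status"] != "Done" and (today - t["opened"]).days > max_open_days:
            flagged.append((t["uid"], "open too long"))
        # Tasks that jumped straight from To Do to Done.
        if t.get("transitions") == ["To Do", "Done"]:
            flagged.append((t["uid"], "skipped In Progress"))
    return flagged
```

A report built from such flags gives teams a concrete cleanup list before the data is trusted for analysis.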
Once your data has been validated, you’ve paved the way to the next stage of analyzing it.