Value Proposition Cards

A common problem in software development is bringing Technical Debt to the forefront so that organizations can clearly see why they need to move toward Agile and DevOps. The issue is how to get everyone in the organization on board. Even when you explain to the business side that research has shown organizations of all types and sizes spend 70% to 80% of their entire software development budget simply maintaining existing code, they don’t believe this includes them, or they simply don’t fully understand the issue. The problem is that they see the requests they make to Software Development moving through the development process and believe this is all new work being created for their customers.

On the other hand, Software Development generally suspects that they are taking on Technical Debt, but due to lack of time, resources or simply lack of will, they ignore it. Even when some Software Development groups are introduced to tools such as SonarQube and its Technical Debt plugin, they often choose to ignore it or are scared of what it will show. Truth be told, you really can’t blame Software Development for wishing this subject would stay buried, because they are often overwhelmed with their current workload and the last thing they want is perceived additional work.

I think a major reason Technical Debt doesn’t get the focus it should is a lack of understanding of the most important ROI measure for a software product: its life span. Why is this, you ask? When we develop new software we initially have costs for development resources, infrastructure, marketing and so on, and then continuing costs for maintenance, additional features, infrastructure and the like. The only way to recoup these costs and earn an acceptable ROI is for the product to have a long enough life span. We then must consider one of the biggest factors in the life span of a product: its Technical Debt.
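To make this concrete, here is a minimal sketch, with purely invented numbers, of how life span drives cumulative ROI: the up-front costs are only recovered if the product earns revenue long enough, and maintenance costs that grow with Technical Debt steadily shrink and eventually reverse the return.

    # Invented numbers, for illustration only: life span vs. cumulative ROI.
    initial_cost = 400_000        # development, infrastructure, marketing
    annual_revenue = 300_000      # steady revenue per year
    base_maintenance = 100_000    # first-year maintenance cost
    debt_growth = 1.20            # maintenance grows 20% per year as debt piles up

    for years in range(1, 9):
        maintenance = sum(base_maintenance * debt_growth ** y for y in range(years))
        profit = annual_revenue * years - initial_cost - maintenance
        roi = profit / (initial_cost + maintenance)
        print(f"year {years}: ROI = {roi:+.0%}")

With these invented numbers the ROI turns positive around year three, peaks around year five, and then declines as debt-driven maintenance eats the revenue, which is exactly why Technical Debt is a life span problem.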

So how can we go about bringing Technical Debt to the forefront so that an organization can properly prioritize it and adopt remediation processes (Agile and DevOps) to deal with it? One possible way is through the use of “Value Proposition Cards”. To explain Value Proposition Cards, we are going to use Kanban in our example. When we create new cards for our Kanban board, we now mark each card as one of two types: a “Value Proposition card” or a “Failure Proposition card”. So, what constitutes each type? A Value Proposition card is any work we can clearly identify as adding value: say a new feature (not a re-do of an existing feature), a new application, or framework changes that are not being done to fix Technical Debt or limiting factors of the existing framework. Failure Proposition cards include obvious things like bug fixes, feature changes due to not meeting the customer’s needs, Technical Debt or framework limitations. On top of marking the cards as Failure Proposition cards, we also do some root cause analysis so we can categorize why each card is marked as a failure, in order to justify why it is considered one.

I wouldn’t recommend going too deep into the root cause analysis; we just want to be able to explain why a card is marked as a failure card and have justification ready when the organization objects. If we were also to, say, make the Value Proposition cards blue and the Failure Proposition cards red and lay them out on a Kanban board, what do you think your organization’s board would look like?
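As a minimal sketch of the bookkeeping involved (the card fields, categories and root causes here are invented, not a prescribed schema), here is one way to tag cards and see what share of the board is failure work:

    # Invented sketch: tag Kanban cards and tally the failure share of the board.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Card:
        title: str
        kind: str             # "value" or "failure"
        root_cause: str = ""  # lightweight root cause, failure cards only

    board = [
        Card("New reporting feature", "value"),
        Card("Fix checkout defect", "failure", "bug"),
        Card("Rework search results page", "failure", "missed customer need"),
        Card("Replace brittle build scripts", "failure", "technical debt"),
    ]

    failures = [c for c in board if c.kind == "failure"]
    print(f"failure work: {len(failures)} of {len(board)} cards")
    print(Counter(c.root_cause for c in failures))

Even this rough tally gives you the red-versus-blue picture before you ever color a physical board.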

Posted in agile, best practices, DevOps, kanban, SAFe, scrum, XP

Explaining Adjustment Costs in DevOps

Coaching a DevOps implementation is a challenging proposition for a number of reasons. One of these is not only getting the business side to buy in, but keeping them on board throughout the entire journey. The truth is that many organizations, vendors and coaches/consultants don’t fully appreciate just how steep the mountain is that must be climbed to bring about the culture change required to successfully implement DevOps.

The implementation of DevOps will need to account for the ‘adjustment’ issues that almost always occur when making any organizational change. By adjustment issues we mean the loss in performance that could be incurred before DevOps is fully implemented and functioning as intended.

But estimating the nature and extent of any implementation issues is notoriously difficult, particularly because the difficulty of large-scale (cultural) change is more often than not greatly underestimated. When discussing DevOps-related change, we can borrow Bruce Chew of the Massachusetts Institute of Technology’s ‘Murphy curve’ argument that adjustment ‘costs’ stem from unforeseen mismatches between the new technology’s capabilities and needs and the existing operation. In DevOps, this would be unforeseen mismatches between DevOps cultural needs and the organization’s existing culture.

DevOps changes, processes and tools rarely behave as planned, and as changes are made their impact ripples throughout the organization. Below is an example of what Chew calls a ‘Murphy curve’ applied to DevOps. It shows a typical pattern of performance reduction as DevOps is introduced. It is recognized that implementation may take some time, so allowances are made for the length and cost of a ‘ramp-up’ period.

However, as the operation prepares for the implementation, the distraction causes performance to actually deteriorate. Even after the start of the implementation this downward trend continues and it is only weeks, indeed maybe months, later that the old performance level is reached. The area of the dip indicates the magnitude of the adjustment costs, and therefore the level of vulnerability faced by the organization.

[Figure: Murphy curve applied to a DevOps implementation]
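A minimal sketch of the idea (the curve shape and all numbers are invented for illustration): model performance before, during and after the change, and treat the area of the dip below the old baseline as the adjustment cost.

    # Invented Murphy-curve model: performance dips during the change,
    # then recovers past the old baseline.
    baseline = 100.0

    def performance(week: int) -> float:
        if week < 4:     # preparation: the distraction starts the decline
            return baseline - 2 * week
        if week < 8:     # implementation: the dip deepens
            return baseline - 8 - 1.5 * (week - 4)
        if week < 12:    # slow climb back toward the old level
            return baseline - 14 + 4 * (week - 8)
        return min(baseline + 10, baseline + 2 * (week - 12))  # ramp-up past baseline

    # Adjustment cost = area of the dip below the old performance level.
    adjustment_cost = sum(max(0.0, baseline - performance(w)) for w in range(30))
    print(f"adjustment cost (performance-weeks lost): {adjustment_cost:.0f}")

The single printed number is the size of the dip, which is exactly the vulnerability the organization signs up for during the transition.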

Posted in agile, DevOps, Uncategorized

Hidden Constraints/Process Improvements in Value Stream Mapping and Theory of Constraints

One of the starting points for any organization beginning the DevOps journey, or simply looking for process improvement, is to create a current and future state Value Stream Map or apply the Theory of Constraints. The concept of Value Stream Mapping has been around for many years (possibly as early as 1918) and is usually associated with Lean. Value Stream Mapping can be used in conjunction with the Theory of Constraints, or each may be applied separately. Value Stream Mapping shows how both materials and information flow as a product or service moves through the process value stream, helping teams visualize where improvements might be made in both flows. The Theory of Constraints concentrates on reducing throughput time: by optimally exploiting the bottlenecks or constraints, the efficiency of the process as a whole is improved.

There are a couple of issues practitioners should be aware of when applying Value Stream Mapping and the Theory of Constraints to software development. First, both have traditionally been applied to manufacturing, where outputs are tangible and easier to measure. Second, in manufacturing we would look out for a “Hidden Constraint”, a constraint in the business process that turns out to be based on wrong assumptions. An example of such a ‘hidden’ bottleneck could be introducing replenish-to-consume everywhere in the supply chain.

When creating a Value Stream Map in software development or applying the Theory of Constraints, we may have a Hidden Constraint/process improvement that we are not fully seeing. Using the Theory of Constraints, we will probably identify “testing” in our process and flag it as one of our constraints, and with Value Stream Mapping we will identify testing as a process improvement. We may identify other areas of the process as bigger constraints/areas for process improvement than testing, but this may be because testing is like an iceberg: it may be a much bigger constraint/improvement area than you are actually seeing. Take, for example, a traditional company that does manual testing and has a traditional test phase. If we take a very close look at this test phase and are honest, we will probably find that the testers are not doing nearly the depth of testing that is really required. What do I mean by this? Check whether they are doing consistency testing across all the browser/OS combinations you support, security penetration testing, front-end performance testing, web/mobile functional testing across all supported browser/OS combinations, or any testing beyond simple happy-path testing. (We won’t even get into unit testing.)

This is not the fault of the testers; it’s just the reality of attempting to cover testing in a manual way with limited resources and time. So, when you do your Value Stream Mapping or attempt to apply the Theory of Constraints, dig under the covers a little and keep an eye out for hidden constraints/process improvements.

Posted in agile, best practices, DevOps, kanban, SAFe, scrum, testing, XP

QA, DevOps is an Opportunity not a Threat

If you look around most QA message boards you still see the topic of automated testing versus manual testing being debated over and over. I am not going to get into this debate, because I believe the overwhelming evidence, the industry’s direction and the true nature of what quality assurance is supposed to be all dictate that we should be moving to automated testing.

What I am going to discuss is my belief that DevOps actually presents a great opportunity to those in QA who can see the underlying foundation of what DevOps is. DevOps borrows from the quality principles of Deming, TQM and Lean, with the emphasis being on preventing issues instead of detecting them. In other words, doing quality assurance instead of quality control.

So how is this an opportunity for QA? Because research is showing that the majority of successful DevOps implementations have started from “within”, not as top-down directives. This means there is an opportunity for QA members to get involved in introducing DevOps to the organization from the beginning.

So how could someone in QA go about helping introduce DevOps to the organization? Here are a few ideas. First, we need to remember that a key to DevOps is breaking down the traditional silos (QA, Dev, Ops, etc.) and moving to cross-functional teams. This means it’s OK to look for like-minded individuals in Dev, QA, Ops and among BAs who are also interested in introducing the benefits of DevOps.

Trying to introduce methods such as Specification by Example (BDD/ATDD) to your agile team is one place you can start. Work with your team to see if you can get them to try using it for the acceptance criteria; remember, we don’t necessarily need to introduce a BDD/ATDD tool at first to do this. Once your team gets this in place and starts to see the benefits, try introducing it to other agile teams and evangelizing the benefits your team has discovered using this method.
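As a minimal sketch (the feature, names and business rule are all invented), an acceptance criterion written as Specification by Example can start life as a plain Given/When/Then test, no BDD tool required:

    # A Given/When/Then acceptance criterion as a plain test.
    # The discount rule here is invented purely for illustration.
    def apply_discount(order_total: float, loyalty_years: int) -> float:
        """Loyal customers (3+ years) get 10% off orders over $100."""
        if loyalty_years >= 3 and order_total > 100:
            return round(order_total * 0.90, 2)
        return order_total

    def test_loyal_customer_gets_discount():
        # Given a customer with 5 years of loyalty
        # When they place a $200 order
        total = apply_discount(200.00, loyalty_years=5)
        # Then they pay $180
        assert total == 180.00

    def test_new_customer_pays_full_price():
        # Given a brand-new customer, When they order $200, Then no discount
        assert apply_discount(200.00, loyalty_years=0) == 200.00

If the team later adopts a BDD tool such as Cucumber or behave, those Given/When/Then comments map naturally onto executable Gherkin steps.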

Another area where you can get involved is automated testing. Here is where it can come in handy to reach out to like-minded developers or operations folks. Even if you are not technically comfortable with automated testing yet, these new friends can probably help you get set up and started. If you do some simple research you will find that a number of folks in the open source automated testing community have put together tool sets that can be downloaded and allow you to get started automating right away. Remember a few things here: you don’t need to automate everything right away, and the tool set you choose isn’t necessarily the tool set you will put into place when you try to scale DevOps. A great place to show the organization the benefits of automated testing is creating a simple set of smoke tests.
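For instance, here is a minimal smoke-test sketch using only the Python standard library (the URLs are placeholders); it simply checks that the application’s key pages respond at all:

    # Minimal smoke test: verify key pages respond. URLs are placeholders.
    import urllib.request

    PAGES = [
        "https://example.com/",
        "https://example.com/login",
        "https://example.com/search",
    ]

    def smoke_test(urls):
        for url in urls:
            with urllib.request.urlopen(url, timeout=10) as response:
                assert response.status == 200, f"{url} returned {response.status}"
            print(f"OK  {url}")

    if __name__ == "__main__":
        smoke_test(PAGES)

Wire a script like this into a scheduled job or the build and you have your first automated safety net.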

The last opportunity I will discuss is educating others. Since DevOps is based on the principles, processes and tools of Deming, TQM and Lean, and Quality Assurance is based on these same underlying principles, it is natural for QA to help educate others. The truth is that the vast majority of organizations that attempt to move to DevOps are going to fail because they believe it is simply about adopting tools. Be one of the innovators who can educate the organization on the cultural and process changes that are truly required to adopt DevOps successfully.

Posted in agile, DevOps, testing, Uncategorized

Make Sure Your DevOps Teams are Focused on Double Loop Learning Versus Single Loop

Understand that improvement is learning

It should not be a surprise that DevOps improvement implies some kind of intervention or change to the process, and that the change will be evaluated in terms of whatever improvement occurs. The evaluation adds to our knowledge of how the process really works, which in turn increases the chances that future interventions will also result in improvement. What is critical to remember is that this is a learning process, and improvement should be arranged so that it encourages, facilitates and exploits the learning that occurs along the way. As a result, we must recognize the distinction between single- and double-loop learning.

Single and double loop learning

Single-loop learning occurs when there is a repetitive and predictable link between cause and effect. Quality Assurance, for example, measures output characteristics from the development process, such as defects and adherence to requirements/acceptance criteria. These can then be used to alter input conditions, such as user story/acceptance criteria quality, standards compliance or developer skill, with the intention of improving the output. Every time a development error or problem is detected, it is corrected or solved, and more is learned about the process. However, this happens without questioning or altering the underlying values and objectives of the process, which may, over time, create an unquestioning inertia that prevents the process from adapting to a changing environment.

Double-loop learning, on the other hand, questions the bottom-line objectives, the service, or even the underlying culture of the process. This type of learning implies an ability to challenge existing process expectations in a fundamental way. It seeks to re-frame competitive assumptions and remain open to any changes in the competitive environment. But being receptive to new opportunities sometimes requires abandoning existing process routines, which may be difficult to achieve in practice, especially as many processes reward experience and past achievement, rather than potential, at both the individual and the group level.

[Figure: single- and double-loop learning]
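As a minimal sketch of the distinction (the metrics and thresholds are invented): single-loop learning tunes inputs against a fixed objective, while double-loop learning also questions whether the objective itself is still the right one.

    # Invented example: defect-rate control. Single-loop adjusts the inputs;
    # double-loop also re-examines the objective itself.
    target_defect_rate = 0.05

    def single_loop(defect_rate: float, review_hours: float) -> float:
        """Adjust an input (code-review effort) to chase the fixed target."""
        if defect_rate > target_defect_rate:
            review_hours *= 1.2        # more review effort, same objective
        return review_hours

    def double_loop(defect_rate: float, customer_reports: int) -> str:
        """Also question the objective: is defect rate even the right goal?"""
        if customer_reports > 10 and defect_rate <= target_defect_rate:
            # We hit the target yet customers still suffer: the objective,
            # not the inputs, needs re-framing.
            return "re-frame goal: measure customer-impacting failures instead"
        return "objective still fits; keep tuning inputs"

    print(single_loop(0.08, review_hours=4))        # 4.8 -- more of the same input
    print(double_loop(0.04, customer_reports=25))   # challenges the objective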

Posted in best practices, DevOps, Uncategorized

Get a Good ‘Work-Out’ in DevOps

The idea of including everyone in the process of improvement is one of the key principles of TQM, Lean and now DevOps. Numerous ways to involve everyone and create a continuous improvement culture are being suggested in DevOps articles, blogs and books. I suggest that instead of trying to re-invent the wheel, many of these organizations should consider tried and tested approaches, such as the ‘Work-Out’ approach that originated at General Electric. This approach was reportedly developed by Jack Welch, then CEO of GE, out of the recognition that employees are an important source of new and creative ideas, and as an instrument for creating an environment that pushes towards a relentless, endless, companywide search for better ways to do everything.

The Work-Out program was seen as a way to reduce the bureaucracy typically associated with improvement and give every employee, from managers to factory workers, an opportunity to influence and improve GE’s day-to-day operations. According to Jack Welch, Work-Out was meant to help people stop wrestling with the boundaries and idiocies that grow in large organizations. Most of us are familiar with those idiocies: too many approvals, duplication, shadow responsibilities, politics and waste.

Work-Out is credited with turning GE upside down, so that the workers told the bosses what to do, and it has forever changed the way people behave at the company. Work-Out is also designed to reduce, and ultimately eliminate, all of the wasted hours and energy that organizations like GE typically expend in performing day-to-day operations.

Work-Out typically implies a broad series of activities:

● Staff, key stakeholders and the responsible manager hold an off-site meeting away from the operation.
● During the meeting, the manager gives the group the responsibility to solve a problem or set of problems shared by the group but which are ultimately the manager’s responsibility.
● After the manager leaves, the group spends time (possibly multiple days) working on developing solutions to the problems, sometimes using outside facilitators.
● At the end of the meeting, the responsible manager (and sometimes the manager’s boss) rejoins the group to be presented with the recommendations.
● The manager can respond in three ways to each recommendation: ‘yes’, ‘no’ or ‘I have to consider it more’. If it is the last response, the manager must clarify what further issues must be considered and how and when the decision will be made.

Work-Out programs can add expenses: outside facilitators, off-site facilities and the payroll costs of a sizeable group of people meeting away from work, even before considering the potential disruption to everyday activities. These expenses need to be weighed against the most important implication of adopting Work-Out: cultural change (similar benefits can be seen in approaches such as Target’s Dojos). In its purest form, Work-Out reinforces an underlying culture of fast problem-solving. It also relies on full and near-universal employee involvement and empowerment, together with direct dialogue between managers and their subordinates.

What distinguishes the Work-Out approach from the many other types of group-based problem-solving is fast decision-making and the idea that managers must respond immediately and decisively to team suggestions. An additional way that approaches such as Work-Out influence culture is the acknowledgment at GE that resistance to the process or outcome is not tolerated, and that obstructing the efforts of the Work-Out process is a career-limiting move.

Posted in agile, best practices, DevOps, Uncategorized

Management’s role in DevOps

Few of the DevOps improvement initiatives which organizations attempt to scale, often with high expectations, will go on to fulfill their potential of having a major impact on performance. Truth be told, most of these attempts will fail, and the companies implementing them will become disillusioned with the results. Yet, although there are many examples of DevOps efforts that have failed, there are also examples of successful DevOps implementations. So why do some of these DevOps improvement efforts disappoint? Common reasons include, for example, an organizational culture that discourages any change. But there are also some tangible management causes of DevOps initiative failures.

Top-management support
The importance of top-management support goes far beyond the allocation of resources to scale the DevOps initiative; it sets the priorities for the whole organization. If the organization’s senior managers do not understand and show commitment to the DevOps initiative, it is only understandable that others will ask why they should do so. Usually this is taken to mean that top management must:

1. Understand and believe in the benefits of the DevOps improvement approach
2. Communicate the importance of these principles and techniques
3. Participate in the improvement process
4. Maintain a clear, long-term strategy.

This last point is particularly important. Without thinking through the overall purpose and long-term goals it is difficult for any organization to know where it is going. A strategy is necessary to provide the goals and guidelines which help to keep the DevOps efforts in line with overall strategic aims. Specifically, the strategy should have something to say about the competitive priorities of the organization, the roles and responsibilities of all parts of the organization, the resources available, and the overall philosophy (a Continuous Improvement culture).

Senior managers may not fully understand the DevOps improvement approach
It is not difficult to find past examples where senior management have used one or more improvement approaches without fully understanding them. The details of Six Sigma, Lean, ITIL or Agile, for example, are not simply technology issues; they are fundamental to how appropriate the approach may be in different contexts. Not every approach fits every set of circumstances, so understanding in detail what each approach means must be the first step in deciding whether it is appropriate. Today many top managers are enthralled with the idea of incorporating some DevOps practices in their organization, but the reality is that DevOps is not a menu where you can pick and choose what you like and don’t like.

Help the DevOps initiative keep its eye on the end goal
DevOps has, to some extent, become a fashion show, with new ideas, tools, processes and concepts continually being introduced as offering a cutting-edge way to implement DevOps. There is nothing inherently wrong with this; it tends to stimulate and refresh, and without new ideas things would stagnate. The problem lies not with new DevOps ideas, tools, processes or concepts, but with some teams becoming victims of them, where each new idea entirely displaces whatever went before. Most new ideas have something to say, but jumping from one idea or tool to another will not only generate a backlash against any new idea, but also destroy the ability to accumulate the experience that comes from experimenting with each one (fail fast and learn from it). Avoiding becoming a fashion-show victim is not easy. It requires that those directing the strategy process take responsibility for a number of issues.

1. They must take responsibility for DevOps as an ongoing activity, rather than becoming champions for only one specific DevOps idea, tool, process or concept.

2. They must take responsibility for understanding the underlying ideas behind DevOps. DevOps is not a recipe or a painting-by-numbers exercise. Unless one understands why DevOps improvement ideas are supposed to work, it is difficult to understand how they can be made to work properly.

3. They must take responsibility for understanding the ancestry of a new DevOps idea, tool, process or concept, because it helps them understand it better and judge how appropriate it may be.

4. They must be prepared to adapt new ideas so that they make sense within the context of their own DevOps initiative. One size rarely fits all.

5. They must take responsibility for the (often significant) education and learning effort that will be needed if DevOps is to be successfully implemented and exploited.

6. They should avoid the over-exaggeration and hype that many new ideas attract. Although it is sometimes tempting to exploit the motivational pull of new ideas through evangelism, carefully thought-out plans will always be superior in the long run, and will help avoid the inevitable backlash that follows over-hyping a single approach.

Posted in DevOps

Incorporating Andon Cord in DevOps

Andon Cord is a Lean manufacturing principle and tool used to notify management, maintenance, and other workers of a quality or process problem. The concept revolves around a device incorporating signal lights to indicate which assembly line workstation has a problem. Normally alerts are activated manually by a worker using a pull cord (Andon cord) or button, or may be activated automatically by the production equipment itself. The system may include a means to stop production so the issue can be corrected.

The Andon system was developed as one of the principal elements of the Jidoka quality method pioneered by Toyota as part of the Toyota Production System (TPS), and it has become part of the lean manufacturing approach. The Andon cord gives the worker the ability, and moreover the empowerment, to stop production when a defect is found and immediately call for assistance.

There are stories about individuals who have toured Toyota’s facilities as part of the Kaizen Institute tours and are shocked when they are introduced to Toyota’s Andon Cord concept. They are usually in shock that someone on the factory floor can pull a cord and possibly shut down the entire factory. The questions are always: how much does this cost, and why would you allow an individual to have this kind of power? The answer from Toyota is always the same: it costs about $1 million to shut down the entire assembly line, but if they allowed the line to continue, it could cost them many millions of dollars down the line.

What is really important in understanding the Andon Cord at Toyota is the underlying reason they created it. In Japanese culture, harmony is critically important. Because of the overwhelming need for harmony, people often won’t naturally speak up; they might be more willing to cover up a problem than to really fix it. So the Andon Cord is a mechanism that makes it easier for people to speak up. Because speaking up “doesn’t come naturally,” Toyota needed a system to make it possible.

So how does this apply directly to DevOps and software development? The reality is that if we look closely at most software development projects, we have all probably seen the following: during development, individuals openly talk about all of the bugs in the application, that it isn’t what the customer really wants, that it’s being rushed and that the technical debt is piling up. Yet even though everyone involved is openly talking about the issues, the project just keeps moving forward without dealing with any of them.

This is where DevOps comes in and can incorporate the Andon Cord concept. There are two specific areas that come to mind. First, if an organization has truly flipped the testing pyramid and put full automated testing in place in conjunction with Continuous Integration, Specification by Example and a tool such as SonarQube, this can be the first place the Andon Cord concept is employed. By forcing all new code to run the gauntlet of full automated testing (driven by Specification by Example) and SonarQube’s quality gates, you make sure it meets the expected behaviors specified by the customer and that the code quality is in line with expected standards.
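A minimal sketch of the idea (the check names and the coverage threshold are invented; a real setup would consult SonarQube’s own quality gates): every commit must pass all gates, and any failure “pulls the cord” and stops the pipeline rather than letting the defect flow downstream.

    # Invented pipeline-gate sketch: any failed check stops the line,
    # like pulling the Andon cord on the factory floor.
    import sys

    def run_checks(build) -> list[str]:
        failures = []
        if not build["unit_tests_passed"]:
            failures.append("unit tests failed")
        if not build["acceptance_tests_passed"]:
            failures.append("Specification-by-Example acceptance tests failed")
        if build["new_code_coverage"] < 0.80:   # invented quality-gate threshold
            failures.append("coverage below quality gate")
        return failures

    build = {"unit_tests_passed": True,
             "acceptance_tests_passed": True,
             "new_code_coverage": 0.72}

    failures = run_checks(build)
    if failures:
        print("ANDON: stopping the line ->", "; ".join(failures))
        sys.exit(1)   # halt the pipeline; fix before anything moves downstream
    print("all gates passed; promote the build")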

The second area where DevOps can function as a sort of Andon Cord is A/B testing. When an organization has put into place a fully automated delivery pipeline, it is able to quickly get code out to subsets of its customers (often the same day the code was developed) to create feedback loops that ensure it is building what its clients want.
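As a minimal sketch (the hashing scheme and the 5% rollout figure are invented), the routing behind such a feedback loop can be as simple as deterministically sending a small slice of users to the new code:

    # Invented sketch: deterministically route a small slice of users to the
    # new variant so feedback arrives the same day the code ships.
    import hashlib

    ROLLOUT_PERCENT = 5   # invented: 5% of users see the new code

    def variant(user_id: str) -> str:
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return "B (new code)" if bucket < ROLLOUT_PERCENT else "A (current)"

    for uid in ["alice", "bob", "carol", "dave"]:
        print(uid, "->", variant(uid))

Because the bucketing is deterministic per user, each customer keeps seeing the same variant while the feedback accumulates.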

There are other ways that the Andon Cord is used with DevOps, and companies such as Amazon and Netflix have truly incorporated the Andon Cord into their culture. For a great read about this, covering the Andon Cord in more depth, please take a look at John Willis’s blog post “The Andon Cord”.

Posted in agile, DevOps

DevOps and the Theory of Constraints

As the DevOps movement has started to take hold in the software development industry, one of its great benefits is that individuals are becoming aware of tried and true Lean Manufacturing and TQM practices. Having received an undergrad degree in Operations and having worked for the first three years of my career in a Lean Manufacturing shop, I have always been somewhat puzzled that software development never seemed to have any industry-wide, concrete foundational practices that companies followed in order to improve quality and reduce cycle time.

A while back I read a DevOps article where someone was talking about adopting Lean Manufacturing concepts for software development, and they specifically mentioned one of the foundational concepts all Lean Manufacturers focus on in one form or another: the “Theory of Constraints” introduced by Eliyahu M. Goldratt. The Theory of Constraints is basically a methodology for identifying the most important limiting factor (i.e. constraint) that stands in the way of achieving a goal and then systematically improving that constraint until it is no longer the limiting factor. In manufacturing, the constraint is often referred to as a bottleneck.

Below is a simple example that illustrates how the Theory of Constraints works.

[Figure: Theory of Constraints example – patient flow in which surgery, at 15 patients per day, has the smallest capacity]

The constraint, or bottleneck, in the system is determined by the step that has the smallest capacity, in this case surgery. The total number of patients processed through the entire system cannot exceed 15 per day, the maximum number of patients that can be treated in surgery.

So how does the Theory of Constraints apply to DevOps and specifically the software industry? Using the example above, I encourage Agile practitioners to understand that simply adopting Agile practices and expanding capacity upstream will, in most cases, not increase the overall capacity of the system unless you identify and increase capacity in the bottlenecks identified through the Theory of Constraints and its corresponding tools. Just a side note: the Theory of Constraints uses a number of tools that also appear in DevOps, such as Value Stream Mapping, Gemba, Kaizen and Kanban.
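A minimal sketch of the arithmetic (surgery at 15 per day echoes the figure; the other stage names and capacities are invented): system throughput is the minimum stage capacity, so adding capacity anywhere but the bottleneck changes nothing.

    # Throughput is governed by the smallest stage capacity (the bottleneck).
    # Surgery at 15/day matches the example; the other numbers are invented.
    stages = {"admission": 40, "diagnosis": 25, "surgery": 15, "recovery": 30}

    def throughput(stages: dict) -> tuple[str, int]:
        bottleneck = min(stages, key=stages.get)
        return bottleneck, stages[bottleneck]

    print(throughput(stages))      # ('surgery', 15)

    stages["admission"] *= 2       # double an upstream stage...
    print(throughput(stages))      # ...still ('surgery', 15): nothing gained

    stages["surgery"] = 22         # elevate the constraint itself
    print(throughput(stages))      # ('surgery', 22): throughput finally rises

The same min() logic explains the software case: adding Agile capacity upstream of testing or Operations leaves delivery unchanged until the true constraint is elevated.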

Posted in agile, best practices, continuous integration, DevOps, kanban, Uncategorized

DevOps, getting that old Deja vu feeling

While recently putting together an Intro to DevOps presentation for an upcoming meetup, I couldn’t shake a feeling of déjà vu. As I reviewed the presentation over and over I kept trying to figure out where this feeling was coming from. Finally, while looking at a section I had included showing a recent study of DevOps adoption in the US and worldwide, I realized its source. The study stated that 79% of companies in the US and 66% worldwide were going to start “adopting DevOps practices” in 2015, and that phrase, “adopting DevOps practices”, was the piece causing these feelings. To explain, we need to digress all the way back to the Waterfall methodology.

[Figure: the Waterfall methodology]

Looking at the Waterfall model above, a critical problem stands out. The next image shows the underlying problem with the Waterfall methodology.

[Figure: departmental silos in Waterfall]

When we look at Waterfall we see clear and distinct boundaries between departments that have their own silos. Inside these silos we see departments that own their own tools and processes. The creation of silos results in a lack of collaboration, nonstandard tools and disjointed processes. Agile has attempted to bridge these boundaries by breaking down some of the silos as shown below.

[Figure: Agile breaking down some of the silos]

If done correctly, Agile can be very successful in breaking down these silos and making organizations more productive and nimble, but the truth is that very few organizations have done Agile correctly. If we are honest, the image below shows what has actually happened at the majority of organizations who have adopted Agile.

[Figure: Agile as actually adopted, with teams still encased in silos]

The truth is that most organizations that have adopted the Agile methodology have, quote, “adopted agile practices”; in truth they have cherry-picked the pieces that are fairly easy to implement and ignored the difficult changes. The reality is that the agile teams that have been created are still encased in their silos. If you look closely at these agile teams, they still have the same reporting structure in place, use their own tools and rely on their traditional processes. Another issue also rears its head as the Agile process moves forward: the new code being developed by these teams is not getting released to production any quicker. This has unfortunately led many parts of the organization, and management, to look at Operations as a bottleneck, which in turn has led to the attempts to fix the bottleneck shown below.

[Figure: common attempts to fix the perceived Operations bottleneck]

A couple of common fixes I have seen agile practitioners and organizations attempt for the perceived bottleneck at Operations are to either include an Operations member in the Agile teams or implement Kanban in Operations. These attempts are doomed to fail because they ignore underlying issues such as Operations and everyone on the Development side having misaligned incentives, as shown below.

[Figure: misaligned incentives between Development and Operations]

Getting back to the reason for these feelings of déjà vu: I am hearing the same comments from organizations attempting to implement DevOps that I heard years ago from organizations attempting to implement Agile. They all have the same spiel: we are going to start adopting “practices”. In other words, we are going to incorporate the pieces of Agile or DevOps that don’t cause any disruption, and skip the ones that are difficult, such as making cultural changes.

I equate it to taking management to a buffet restaurant: they get their plates and quickly head to the roast beef carving station, then maybe get some mac and cheese or mashed potatoes and gravy, and they definitely have to get some dessert, but when they get to the vegetable/fruit station they turn up their noses and walk on by. Well, we all know what happens when we don’t eat any vegetables or fruit: we become sluggish and unhealthy in the long run, just like most of these organizations trying to cherry-pick Agile and DevOps practices.

Posted in agile, best practices, continuous integration, testing
