Refining Federal Technology Spending Figures

Posted by Kyra Fussell on November 11, 2015

Let’s say you’re looking to size the federal market for information security products and services. In particular, you’d like to pinpoint how much agencies have spent in recent years. Perhaps you have a specific federal customer in mind, or a particular type of security solution. Repositories like the Federal Procurement Data System (FPDS) offer broad access to the publicly available portion of federal spending. Over time, the quantity and quality of the released data is expected to improve with the standardization resulting from implementation of the Digital Accountability and Transparency Act (DATA Act). Within this sea of spending information, there are several elements that can be used to parse the data, including Product or Service Codes (PSCs), North American Industry Classification System (NAICS) codes, and contract descriptions. Each element offers a different sort of utility and limitation.

As we’ve discussed previously, analyzing data along a single dimension might be the shortest path to a market-size figure, but it does little to home in on targeted markets or align reported figures with most definitions of technology verticals. Sure, you could just analyze spending by PSC, but the results would not account for buying that spans multiple categories. It would rely on a single, primary requirement per transaction or risk double counting of investments. It would also provide little granularity and might not line up with your organization’s market taxonomy. If you incorporate more than one variable to sift out the relevant portions, the elements will need to hang together to provide a balanced approach. For information technology spending, this can be a particularly difficult line to walk: technology spending is embedded, to varying degrees, within numerous product and service categories.
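To make the balancing act concrete, here is a minimal sketch of a multi-variable filter. The field names, PSC prefixes, NAICS codes, and keywords are illustrative assumptions, not an actual GovWin methodology; the point is requiring agreement between signals rather than relying on any single code.

```python
# Illustrative sketch: flag an FPDS transaction as in-scope only when
# multiple elements (PSC, NAICS, description keywords) agree.
# All codes and field names below are hypothetical examples.
IT_PSC_PREFIXES = ("D3", "70")          # illustrative IT-related PSC prefixes
IT_NAICS = {"541512", "541519"}         # illustrative IT-services NAICS codes
KEYWORDS = ("information assurance", "cybersecurity")

def is_relevant(txn):
    """Require at least two of three signals before counting a transaction."""
    psc_hit = txn["psc"].startswith(IT_PSC_PREFIXES)
    naics_hit = txn["naics"] in IT_NAICS
    desc = txn["description"].lower()
    keyword_hit = any(k in desc for k in KEYWORDS)
    # A two-signal threshold trades off over-counting (one broad code
    # sweeping in adjacent spending) against under-counting.
    return sum((psc_hit, naics_hit, keyword_hit)) >= 2

transactions = [
    {"psc": "D310", "naics": "541512",
     "description": "Cybersecurity support services"},
    {"psc": "R425", "naics": "561612",
     "description": "Armed guard services"},
]
relevant = [t for t in transactions if is_relevant(t)]
```

Where the threshold sits (one signal, two, or all three) is exactly the kind of judgment call discussed below, and should be tuned by reviewing what each setting sweeps in or leaves out.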

For recent analysis of reported IT spending, GovWin’s Federal Industry Analysis team completed a search of over 2.2 million FPDS transactions for the period from FY 2012 to FY 2014 across eight product and service categories. These eight categories were selected due to notable embedded technology spending: Aerospace and Defense, Electric/Electronic Components and Parts, Information Technology, Operations and Maintenance, Other Products and Services, Purchase/Lease of Facilities and Equipment, Professional Services, and Research and Development. It’s remarkable that information technology spending can be traced to so many different categories, a fact that speaks to the challenge of isolating technology dollars. If the varying level of technology spending across and within each category is not taken into account, a disproportionate amount of adjacent and unrelated funds can skew the assessment of technology buying. For information security, this can be especially tricky within the Department of Defense, where requirements often reflect elements of the national security mission. Given how contracting is reported, including the cost of entire platforms will yield inflated spending figures when the information security portion cannot be separated out. For instance, information technology spending makes up a portion of defense investments in unmanned systems, and some part of that IT piece is likely directed toward security. Does it make sense to count the cost of the whole system – nose to tail and all the circuits, sensors, nuts and bolts in between – in the tally for information security spending? It’s unlikely that it does; following that logic to its conclusion would lead to counting all DoD IT spending as information security spending. So how do you focus on the transactions with a significant portion of market-relevant spending?

There is a lot of room to navigate in determining what degree of funding must be relevant to a target market. In most cases, even a “clean” dataset is likely to contain some adjacent spending or a small number of transactions not wholly related to a given market. Even if PSCs are taken in combination with NAICS codes and selected keywords in contract descriptions, the total sum per transaction does not specify what portion is directed toward different components or technologies. Thus, there will be many judgment calls in setting parameters and refining collected data. This is further complicated by the wide range of contexts for terms used in contract descriptions, as well as the limitations introduced by relying on the terms included in transaction entries. For example, using keywords like “security” or “FISMA” across contract transactions within the information technology PSCs will also capture physical security, guard services, and IT systems that support personnel security. Careful selection of relevant keywords and review of the other contexts in which those terms appear is an important step that should happen early in the sizing process (and again as sizing exercises are repeated). Scrubbing the data to exclude erroneous or misleading transactions can be an extensive part of the process.
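The keyword-scrubbing step described above can be sketched as a simple include/exclude pass. The term lists below are hypothetical and would need to be built up iteratively by reviewing what the broad terms actually capture:

```python
# Hedged sketch of keyword scrubbing: match broad target terms, then
# drop descriptions whose context is a known false positive
# (e.g., "security" matching guard services). Lists are illustrative.
INCLUDE_TERMS = ("security", "fisma")
EXCLUDE_CONTEXTS = ("physical security", "guard service", "personnel security")

def keyword_match(description):
    """Return True only for in-scope information security mentions."""
    desc = description.lower()
    if not any(term in desc for term in INCLUDE_TERMS):
        return False
    # Scrub the adjacent contexts that a broad term like "security"
    # also captures within IT PSCs.
    return not any(ctx in desc for ctx in EXCLUDE_CONTEXTS)

descriptions = [
    "FISMA compliance assessment for agency networks",
    "Physical security upgrades at field offices",
    "Security guard service for headquarters building",
]
matches = [d for d in descriptions if keyword_match(d)]
```

In practice, each pass over the data tends to surface new exclusion contexts, which is why this review belongs early in the process and gets repeated as the exercise matures.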

Ensuring that appropriate criteria have been set facilitates a clearer picture of spending while framing expectations for what the data can indicate about buying patterns. Considering the complexity of the process and the number of elements in play, it makes sense to approach figures with skepticism if little information is provided about methodology. Without that insight, it’s difficult to understand how the figures were obtained and what their limitations are, which in turn limits their utility in decision-making.

Curious about historic spending for IT markets like cybersecurity, big data, mobility, and cloud computing? The Federal Industry Analysis team recently published new reports that include this analysis as well as forecasts of contractor addressable spending.

Originally published for Federal Industry Analysis: Analysts Perspectives Blog. Stay ahead of the competition by discovering more about GovWinIQ. Follow me on Twitter @FIAGovWin.