New cyber security threats and European legislation give ITAM a great opportunity to revisit its links with the security team and strengthen the importance of ITAM within the company.
The WannaCry outbreak hit 150 countries in a single weekend, and NotPetya followed only weeks later.
GDPR is on the horizon for European companies (or any global company with a European presence).
Accurate inventory, trustworthy data and complete asset records have never been more important.
I asked two experts in the field – What advice would you give to IT Asset Managers and Software Asset Managers to empower their security colleagues in dealing with these issues?
GDPR on the horizon:
More rigorous data protection legislation is due in 2018. How can asset managers help?
“With changes to the data protection regulations scheduled to go into effect May 25, 2018, companies need to start assessing how this will impact their environments.
An effective IT asset management program will also focus on hardware lifecycle management, not just software license entitlement. If a device contains data that is governed by GDPR, the asset can be tagged and managed in accordance with the new internal processes that support GDPR; a server or mobile device can be tagged in the same way. The asset's history can also record when the device is backed up and when it is marked for decommissioning. When the device reaches the end of its lifecycle, all of the appropriate processes can be followed to ensure that there is no violation of GDPR requirements.
Having an effective employee off-boarding process is going to be critical to ensure that all of the assets covered under GDPR are recovered from employees when they leave or are terminated. This will require tighter tracking of assets assigned to employees, and it will necessitate a more rigorous recovery process for remote employees than there might previously have been. Furthermore, this is an opportune time to review all of the ITAM processes to ensure that they are still relevant and reflect the current state of the environment.”
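The off-boarding check described above can be sketched as a simple inventory query. This is a minimal illustration only: the field names, lifecycle states and sample data below are assumptions for the example, not a standard ITAM schema.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """Minimal asset record; fields are illustrative, not a standard schema."""
    asset_id: str
    assigned_to: str = ""
    gdpr_tagged: bool = False          # device holds GDPR-governed data
    lifecycle_state: str = "in_use"    # e.g. in_use, decommissioning, retired

def offboarding_gaps(assets, departing_user):
    """Return assets still assigned to a departing employee,
    with GDPR-tagged devices listed first so recovery is prioritised."""
    outstanding = [a for a in assets
                   if a.assigned_to == departing_user
                   and a.lifecycle_state != "retired"]
    return sorted(outstanding, key=lambda a: not a.gdpr_tagged)

inventory = [
    Asset("LT-001", assigned_to="jsmith", gdpr_tagged=True),
    Asset("PH-002", assigned_to="jsmith"),
    Asset("LT-003", assigned_to="mbrown"),
]
for a in offboarding_gaps(inventory, "jsmith"):
    print(a.asset_id, "GDPR" if a.gdpr_tagged else "")
```

A real process would feed this list into the termination checklist so no GDPR-tagged device leaves with the employee.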
Dave Bowser, Raynet:
“Solid, accurate inventory data (machine data) can provide early warnings of unauthorized logon activity, such as logons for users on vacation or unexpected VPN attempts. Once possible breaches have been identified via inventory and dependency data, measures can be quickly established to proactively secure the components that may have been breached.”
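As a rough illustration of the kind of early-warning check Dave describes, the sketch below cross-references logon events against vacation records. All record shapes, names and dates are hypothetical; in practice the data would come from inventory and HR systems.

```python
from datetime import date

# Hypothetical vacation windows keyed by user, sourced from an HR system.
vacations = {"asmith": (date(2017, 7, 10), date(2017, 7, 21))}

# Hypothetical logon events collected by inventory / monitoring tooling.
logon_events = [
    {"user": "asmith", "when": date(2017, 7, 14), "source": "VPN"},
    {"user": "bjones", "when": date(2017, 7, 14), "source": "LAN"},
]

def suspicious_logons(events, vacations):
    """Flag logons that occur while the account owner is recorded as on vacation."""
    flagged = []
    for e in events:
        window = vacations.get(e["user"])
        if window and window[0] <= e["when"] <= window[1]:
            flagged.append(e)
    return flagged

for e in suspicious_logons(logon_events, vacations):
    print("review:", e["user"], e["source"], e["when"])
```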
WannaCry / NotPetya:
Worldwide outbreaks hit unpatched systems; how can we support our friends in Security with this risk?
Dave Bowser: “Such cyber-attacks are inevitable and can only be addressed for known infrastructure components and their dependencies. Without a reliable, automated, as-close-to-real-time-as-possible solution in place, you will only be chasing problems and hoping you have secured your assets.”
Patricia Adams: “WannaCry proved to be a wake-up call for organizations that had legacy applications that were not being properly patched or maintained. The need to build a custom solution caused companies to freeze their environment at a specific point in time, and this freezing leads to the concept of “technical debt.” Technical debt is often used in the context of custom-developed applications, built in response to a need an organization has when there isn't a software product available on the market that will solve that problem. However, technical debt can also be accrued on the operational side. The ubiquity of technical debt came to light with the WannaCry ransomware, which was estimated to have infected over 200,000 PCs worldwide. It has been speculated that the worm used one that was discovered by the NSA, and news reports mentioned that when the NSA was hacked in early 2017, this information was made available on the dark web and hackers conveniently exploited the vulnerabilities. Even though support had ended years ago, to stem the damage and proliferation Microsoft released two emergency patches to protect devices running Windows XP, Windows 7, Windows 8.1 and so on: operating systems that had long been end of life and were no longer supported. Due to the expense, and in some cases the complexity, of migrating, some organizations chose to stay on the old OS. Also, because some business-critical applications could only run on those older operating systems, organizations kept them installed and running longer than Microsoft would have liked.
Not all technical debt is bad. Organizations will in some cases have to keep older hardware and software because government regulations require it: they must be able to access and recover data within specified timeframes in order to meet data requests. I wouldn't really consider this to be technical debt, because it meets regulatory requirements. Most organizations recognize and willingly accept the risk of having unpatched systems running, so they will attempt to tackle this by segmenting their network architecture and subnets to isolate the limited number of devices they need to protect. Turning to third-party companies that will continue to patch those end-of-life operating systems is another way to reduce risk, but this decision often leads to an unbudgeted expense. When a new technology purchase occurs, planning for end of life needs to be factored into the cost, though it rarely is.”
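One way ITAM data can surface this operational technical debt is to compare the device inventory against vendor end-of-support dates. The sketch below is a minimal illustration: the lifecycle table is hard-coded for the example, and real dates should be sourced from vendor lifecycle documentation.

```python
from datetime import date

# Illustrative end-of-support dates (OS name -> vendor end-of-support date).
END_OF_SUPPORT = {
    "Windows XP": date(2014, 4, 8),
    "Windows Server 2003": date(2015, 7, 14),
    "Windows 8": date(2016, 1, 12),
}

def unsupported_devices(devices, today):
    """Return device records whose OS was already past end of support on `today`.
    An OS missing from the table is treated as still supported."""
    return [d for d in devices
            if END_OF_SUPPORT.get(d["os"], date.max) < today]

devices = [
    {"host": "lab-pc-01", "os": "Windows XP"},
    {"host": "web-01", "os": "Windows Server 2016"},
]
for d in unsupported_devices(devices, date(2017, 5, 12)):
    print("unsupported:", d["host"], d["os"])
```

A report like this, run regularly, turns the vague risk of "old systems somewhere" into a concrete list that can be segmented, patched via a third party, or budgeted for replacement.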
Raising the profile of accurate inventory
How do we raise the profile of accurate inventory and ensure our records are accurate?
Patricia Adams: “While organizations will often have multiple discovery and inventory tools, they aren't all going to be fit for purpose for IT asset management. In a 2016 survey conducted by Dennis Drogseth, a vice-president with Enterprise Management Associates, 49% of the participating organizations had 11 or more discovery tools. I'm aware of over 64 vendors (some of them small, niche players) that do some form of discovery and inventory, so it's no surprise that an organization might have that many solutions with overlapping functionality, acquired by the different IT domains. When there are so many data sources, you want to ensure that the source of truth is a reliable one and is collecting the data elements needed to be effective. Normalizing and reconciling the data sources across at least three, but preferably more, data elements can ensure that the information is being correlated accurately. Making decisions on bad data, or data that is only 80% accurate, can be costly. Security teams often have their own discovery tools, but those tools aren't scanning for the same data that ITAM/SAM tools are looking for. Collaborating and sharing information about potential risk areas with Security can be a major step towards minimizing them.”
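The "at least three data elements" reconciliation rule Patricia describes can be sketched as a simple matching function. The chosen fields, the normalization (trim and lower-case) and the sample records are illustrative assumptions; real reconciliation engines use much richer rules.

```python
def records_match(a, b, keys=("serial", "hostname", "mac"), threshold=3):
    """Treat two discovery records as the same device when at least
    `threshold` of the compared data elements agree after normalization."""
    agreeing = sum(
        1 for k in keys
        if a.get(k) and b.get(k)
        and str(a[k]).strip().lower() == str(b[k]).strip().lower()
    )
    return agreeing >= threshold

# Hypothetical records for one laptop, as seen by two discovery tools.
tool_a = {"serial": "ABC123", "hostname": "FIN-LT-07", "mac": "aa:bb:cc:dd:ee:ff"}
tool_b = {"serial": "abc123", "hostname": "fin-lt-07", "mac": "AA:BB:CC:DD:EE:FF"}
print(records_match(tool_a, tool_b))
```

Requiring several elements to agree guards against false merges when a single field (a recycled hostname, a cloned disk image's serial) happens to collide.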
Dave Bowser: “Needless to say, worse than no data is false or outdated data, so without reliability and accuracy, early warnings can be missed or falsely interpreted. Inventory data can only be of the best quality when all possible sources are being tapped and consolidated to allow for cross-checking. Without appropriate checks and balances, how can accuracy really be measured? That is why it is absolutely imperative to have complete and accurate discovery and inventory (D&I).”
Complete Asset Records to support the business
Finally, how do we ensure we have complete asset records to support IT Security?
Patricia Adams: “A carefully constructed view of an organization's IT assets should begin when the asset is first listed in the asset catalog, with the asset then tracked, monitored and managed over its lifecycle. The usable life might be anywhere from 18 months to 10 years, depending upon the asset type, so it will require diligence for the complete timeframe it is in use. The asset record, regardless of whether it is hardware or software, should include financial data and contract information, and this data should be compared against what is in use and deployed, or what is available for deployment. There are approximately 10 foundational data attributes that will be similar across all asset types, and these should be the primary fields in the asset record that identify the asset: vendor, product, bar code, serial number, SKU, purchase order number, user, location, contract, etc. With this information, an organization can store all of the data attributes it needs to have a holistic view of that asset. As the asset undergoes change or has problems associated with it, all of this can be traced back to the contract to understand whether the vendor or the asset is meeting its contractual obligations against warranty or service levels.
A complete asset record that is integrated with ITSM makes it possible to do effective vendor and supplier management. Being able to associate incidents with vendors and the underlying contracts means issues or challenges can be reported before they become major ones. Understanding how the vendor and supply chain are performing provides the visibility needed to do portfolio management, and can support efforts to manage technical debt. Put this all together and an organization can build IT asset management into a shared service that supports more than just audits, and can reduce the governance, risk and compliance (GRC) problems that get senior executives' attention.”
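As an illustration of the foundational attributes Patricia lists, the sketch below models an asset record with those primary fields. The schema is an assumption for the example: the field names follow her list, and `asset_type` is an assumed tenth field standing in for the "etc."

```python
from dataclasses import dataclass, asdict

@dataclass
class AssetRecord:
    """Illustrative asset record built from the foundational identifying
    fields named in the text; any organization would extend this schema."""
    vendor: str
    product: str
    bar_code: str
    serial_number: str
    sku: str
    purchase_order: str
    user: str
    location: str
    contract: str
    asset_type: str  # assumed field rounding out the ~10 attributes

rec = AssetRecord(
    vendor="ExampleCorp", product="Laptop X1", bar_code="BC-0001",
    serial_number="SN123", sku="LX1-16GB", purchase_order="PO-4711",
    user="jsmith", location="London", contract="CTR-2017-09",
    asset_type="hardware",
)
print(len(asdict(rec)))  # 10 foundational attributes
```

Keeping the contract reference on the record itself is what lets incidents and warranty claims be traced back to the vendor, as described above.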
About Martin Thompson
Martin is the author of the book "Practical ITAM - The essential guide for IT Asset Managers", which describes how to get started and make a difference in the field of IT Asset Management.
On a voluntary basis Martin is a contributor to ISO WG21 which develops the ITAM International Standard ISO/IEC 19770.