Security Professionals Need Enablement

After reading Corey Nachreiner’s article on Dark Reading titled “The Perfect InfoSec Mindset: Paranoia + Skepticism”, I agree with him that strong security professionals need some elements of paranoia and skepticism. Where I found myself disagreeing with Corey Nachreiner is with his claim that those two elements alone make the “perfect” security professional mindset. What is missing, in my opinion, is a large dose of “enablement”.

His article’s subtitle, below, provides a succinct explanation of his claims:

“A little skeptical paranoia will ensure that you have the impulse to react quickly to new threats while retaining the logic to separate fact from fiction.”

The impulse to react quickly is indeed a critical component of a successful security pro today, when the time from zero-day threat to exploit to compromise is nearly real-time. The lackadaisical mindset of “oh, I just got an alert that 10 machines have been infected with malware … I’ll look into that next week” is not acceptable. Where would Target be if their security team had reacted quickly to the alert that they had a possible malware infection?

Being skeptical of the deluge of misinformation spewing from vendors, media and security pundits is, indeed, another critical element of a successful security professional. Without separating fact from fiction, a security pro’s effectiveness and credibility will suffer greatly. Running around telling everyone to throw away their mobile phones, to encrypt everything on a Linux PC air-gapped from the Internet, or that nothing belongs in the cloud because “it isn’t secure” is not going to help a security pro’s career prospects.

But stopping there, as Corey Nachreiner does in his supposition below, in my opinion, falls short of rounding out a “perfect” security pro mindset:

“My latest superfluous and random supposition is that a dash of paranoia paired with a side of skepticism makes for the perfect security pro mindset.”

The additional, critical element of “enablement” is what completes the “perfect” security pro mindset, in my opinion.

“Enablement” means not stopping at the skeptical paranoia of “don’t put anything in the cloud it isn’t secure”. It means understanding the “need” or the “ask” behind the “put it in the cloud”, educating on the risks and threats without sounding hysterical (keeping paranoia and skepticism in check) and suggesting solutions that address the need. Changing the conversation from “cloud” to “what is the problem you are trying to solve that has you suggesting ‘cloud’” changes the perception from “security always says no” to “hey, security is trying to partner with me to solve my problems”. Enablement, executed effectively, returns dividends to the security pro in the form of greater collaboration. “The department of no” is typically only engaged when an IT governance process mandates a security sign-off or when the solution being tossed around is radical enough that some “CYA” is needed to avoid political fallout if something security-ish goes wrong later. Enablement sets the stage for “Hey, this might be a possible solution. Let’s engage our security pro to see how he/she can help us figure out how to secure it.” Enablement acts as an effective governor that keeps the paranoia and skepticism in check so business gets done, workers can work and customers are provided products and services.

Enablement drives collaboration which drives more effective security for the enterprise.

So, in closing, it is not that Corey Nachreiner is explicitly wrong; he just stops short, in my opinion, of completely rounding out the “perfect” security pro mindset by not adding the element of “enablement” to the skepticism and paranoia.


I recently had the unique opportunity to chat one on one with a highly experienced CIO with a history of managing large IT shops. It was a fascinating conversation and eventually it wandered to the evolution of corporate IT in larger companies. At one point in the discussion, he started drawing on a whiteboard as he was describing his perspective. The whiteboard picture was a great way to visualize the IT transformations over the decades. In a break from my series on helping engineers get their great ideas to resonate with senior management, I’ve captured that discussion here, including the visuals, and added some additional perspectives, as that conversation has stuck with me over the subsequent weeks.

Through the ’80s = Specialization

This graphic reflects various company business units with their IT function and staffing reporting directly into the business unit itself. As an example, the Human Resources business unit has its own “HR IT” department that is a completely self-sufficient group of individuals with all of the tools and ability to support all the needs of the HR business unit. From basic helpdesk/PC support through server/application support and new application development needs, “HR IT” can and does do it all.

’90s to ’00s = Centralization

As depicted in this graphic, this decade of corporate computing reflects a great centralization effort across the industry. Company leadership noticed they were paying people in “HR IT” to, say, fix PCs, as well as additional people with the same skill-set in “Finance IT” to fix PCs. Also, it was becoming more obvious that fixing PCs wasn’t a unique skill-set; rather, it was a common skill-set that could be sourced from a central team. Pooling the skill-set enabled an aggregate reduction in head count. If all department IT PC fixers totaled, say, 10, then a central PC fixing team could deliver the same level of service with, say, 7 people. Thus, cost savings became a big driver for this large-scale organizational change over this decade for pretty much all IT functions.
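
The head count math behind this centralization driver is simple enough to sketch. Below is a minimal illustration using the 10-to-7 figures from the example above; the fully loaded cost per head is a made-up assumption for illustration, not a benchmark.

```python
# Illustrative back-of-the-envelope math for the pooling example above.
# The fully loaded cost per head is a hypothetical assumption, not a benchmark.
distributed_headcount = 10     # PC fixers spread across HR IT, Finance IT, etc.
centralized_headcount = 7      # same service level from one pooled central team
loaded_cost_per_head = 85_000  # assumed fully loaded annual cost per person

annual_savings = (distributed_headcount - centralized_headcount) * loaded_cost_per_head
print(f"Annual savings from pooling: ${annual_savings:,}")  # -> $255,000
```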

The change wasn’t without pain. Business unit individuals were used to having an IT need and literally walking down the hall to “Bob the IT guy” to get immediate service. Once Bob became part of a central team, one could no longer just reach out to Bob and get that immediate high touch service. One had to call a “helpdesk” and report the need. Then the “helpdesk” assigned a request “ticket” number to the need. Next, the requester was informed of the “SLA” (service level agreement) to which their need would be serviced. Instead of bringing your broken mouse to Bob to have him pull a replacement from a cabinet and hand it to you immediately, you were told a new mouse replacement was on its way and should arrive within 24 hours or something clearly not as immediate.

This began a huge cultural shift in the business/IT relationship. In times prior, Bob was viewed as a trusted member of the business unit who knew everyone, knew the business, knew the priorities, and even knew the applications sometimes more thoroughly than the business users themselves. Bob knew that Sally got really frustrated with technology and needed extra time and attention. Bob knew that Jim had zero patience and would flip out if his needs weren’t met immediately. As IT became ever more centralized throughout the decade, all of Bob’s high touch, high value service was getting replaced with narrowly defined IT “services”. Business users began experiencing “high call volume” delays and “We no longer provide that service” and “We can provide that service. Can we get your manager to sign off on the $X charge-back?” IT became increasingly viewed as just a service provider.

I am going to date myself here: I began my IT career seeing first-hand this service provider transformation. I was “Bob”, providing high touch, high quality service while getting constantly re-organized into more and more centralized IT functions. I saw the business users’ frustration grow and the quality of service plummet. I also appreciated that it was a necessity. The cost to provide high touch service wouldn’t scale to meet the business demand for IT in any sustainable manner from a balance sheet operating cost perspective.

This rise in IT being viewed as a “commodity” service provider that was increasingly disconnected from the business gave way to the other IT trend in the ’90s: outsourcing. Once IT was a commodity, the next logical management thought process was: “Why should all these expensive IT people be employees? Why not shift them to contractors as well as shift a bunch of fixed cost to variable cost on the balance sheet?” IT already had plenty of contractors to supplement permanent staff with specialized skills. Shifting IT management to consulting firms also removed the complexities of aligning staff skill sets to rapidly expanding technology innovations and reduced the IT HR overhead as well. Thus the ’90s saw significant, large-scale IT outsourcing activities.

I personally went through such an outsourcing experience early in my career. I started as an actual department IT employee and then got caught up in the centralization activities followed by a massive outsourcing effort. Over nine years I probably had a new boss and a new reporting structure every six months. The company went from many thousands of IT employees worldwide to literally 127 over the course of four to five years. I remember that specific number because it was such a dramatic transition (which is worthy of its own blog article). I also had five employers over those nine years as each wave of outsourcing involved larger and larger IT outsourcing firms.

’00s to ’10s = Repair the damage

This decade was just as disruptive for IT as the prior one. The widespread use of the Internet created whole new marketplaces and opportunities for companies to get revenue from new products and services, which continued to drive up demand for IT. The dot-com crash brought a much needed fiscal correction to the fundamentally flawed business models of some Internet-birthed companies. Though unfathomable at the time, in hindsight, a company has to make revenue at some point or the finite capital funding will shift to those that can demonstrate profit-making ability. Yet, not everything in IT was Internet related. There was still a need to operationally support the non-Internet activities of a company. Business folks were growing ever more dissatisfied with IT services. IT, considered a commodity, had established a reputation for being big, slow, expensive, unable to change and unable to deliver on time. CIOs and IT leadership knew they had to fix this great divide of what was seemingly the business versus IT and IT versus the business.

As I’ve depicted in the graphic above, IT, still centralized, begins to repair the “relationship” with the business by adding new roles that bring IT closer to the business. Business executives complained to IT executives that they didn’t know how to engage IT for services, didn’t know how to escalate issues for effective resolution, and had no mechanism to ensure IT was working on their most important needs. Thus new IT roles like “business relationship managers/executives” are created. These roles involve highly polished communicators who know how to talk business to business people and know enough IT to quickly navigate the monstrous IT organization and bridge the gaps. The business versus IT tension begins to ease. Relationship roles expand to provide more people to address different levels and expectations within the business. Project management expands into program and portfolio management. Relationship managers now present a “book of business” or a “portfolio dashboard” to keep their business partners informed of what IT is working on and delivering for them.

You’ll also notice the size of the business units is shrinking. As IT continues to automate human activities, the business needs fewer people. IT continues to grow to meet the demands of the business for automation. The actual total company employee count, in aggregate, starts to show a consistent and continuing decline.

For more on this people versus automation theme, I strongly encourage you to read the books and articles from Andrew McAfee at MIT’s Sloan School of Management as he has some phenomenal research and perspectives on this topic.

Towards the end of the decade and prior to the US financial collapse, the typical maturing IT organizational structure had delivery teams aligned to business units as well as teams grouped around common services. Developers might work for the HR aligned delivery team within IT but interact with database and platform engineers that were in a single pool reporting into a separate technology aligned management structure. I’ve tried to depict this with the almost overlapping circles between the business units and IT. IT begins the perception shift from a commodity to a trusted business partner. At the end of the decade, the US financial crisis put a screeching halt to the good progress IT and the business were making, as massive staffing cuts were occurring at all levels of companies for survival.

The outsourcing waves of the ’90s became less massive as those companies that did extreme outsourcing started pulling IT employees back in to fix the bruised relationship problems. A variety of pure commodity support functions (first level helpdesk, PC, server and network support, for example) were still ripe for outsourcing. Another area that was ripe for outsourcing was software development and support. Low cost labor in places like India became attractive to IT management. IT had failed to produce any hourly rate to quality model in the application support and development areas. Thus, from a finance perspective, it was pretty much impossible to say with numbers why a $50 an hour application resource was that much better or worse than a $75 or $100 an hour resource. Thus, when an IT firm said they could provide those resources, offshore, for $20 an hour, CFOs put pressure on CIOs to take advantage of those drastic cost cutting opportunities. Thus business aligned delivery teams found themselves trying to manage employees, specialty contractors, on-shore and off-shore resources and their “book of business” with their business partners. These roles demanded strong talent to address all of these factors impacting success.
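
To illustrate the finance logic that made those offshore rates so hard to resist, here is a rough sketch of the annual cost math at the hourly rates mentioned above. The billable-hours figure is an assumption, and quality is intentionally absent from the calculation, which is precisely the gap left by the missing rate-to-quality model.

```python
# Rough sketch of the hourly-rate argument described above. Billable hours
# per year is an assumption; quality is deliberately absent from the math,
# which is exactly the gap the missing rate-to-quality model left open.
billable_hours_per_year = 1_880
hourly_rates = {"onshore @ $50": 50, "onshore @ $75": 75,
                "onshore @ $100": 100, "offshore @ $20": 20}

for label, rate in hourly_rates.items():
    print(f"{label:>15}: ${rate * billable_hours_per_year:,} per year")

paper_savings = (50 - 20) * billable_hours_per_year
print(f"Paper savings per resource vs. $50/hr onshore: ${paper_savings:,}")
```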

’10s to present = More disruption within IT

Although the current decade isn’t even at the halfway point, corporate IT coming out of the US financial crisis has been forced to evolve even more rapidly. IT industry buzz-worthy trends such as “consumerization of IT/BYOD (bring your own device)”, “big data”, “cloud” and “cyber-everything/data breaches” have forced businesses and IT to further align and hold each other more accountable for the company’s overall success. IT itself, as I’ve tried to depict in the above graphic, is beginning to split very clearly into the portion aligned to the business and the remaining commodity portion. The business aligned portion of IT is essentially brokering the commodity portion of IT to deliver services to the business. At times, those commodity IT services are being brokered outside of the IT department itself to “cloud” providers. And, in situations where the business and IT delivery alignment isn’t crisply delivering needed value, the business is going around IT and brokering their own “cloud” provided IT services.

Within IT itself new roles are appearing to support the growing chasm between commodity IT and delivery IT. “Integration engineers/architects” represent a function that takes the application services the delivery teams need to construct and aligns the various internal and external commodity technology components to host those application services. “Application/solution architects” expand to assemble plans that leverage existing IT assets as well as buy or build new assets. It is rare today for a corporate IT shop to embark on a completely green field software development project. So many IT solution options have been built up within companies, as well as made available via software companies, that the majority of software development has morphed into targeted development to support the integration of existing and newly purchased software components. “Enterprise architects” look across a wide range of project activities and try to identify common patterns and leverage-able components so projects are less likely to build redundant technology for different business units. “Cloud integration architects/specialists” augment the knowledge of what company IT capabilities exist compared to cloud “x-as-a-service” capabilities. IT solutions now need to consider not just buying or building technology but also sourcing the technology “as a service” from a “cloud” provider as the best way to meet the business needs.

Another aspect of the business versus IT dynamic is becoming obvious. In prior decades, the business knew the business and just needed to give orders to IT to automate work flows or enhance existing automation. Thus the notion of “IT as order takers” existed, where many IT organizations waited around to be told what to do by the business and then would run around and execute what the business was asking. I think, although it may be controversial, it is increasingly likely that those extremely aligned to the business actually know more about the business than the business does. When a company’s products and services become more and more IT-like in form and function, the company’s delivery IT increasingly becomes more knowledgeable than the business because they know the IT side and have been rapidly partnering with the business in order to know the business. One supporting example would be the financial services industry. Sure, there is a significant predominance of bankers in banking, but if you look at the products a bank offers, a strong number are essentially technology in form and function (think ATMs, online and mobile banking, account to account electronic money transfers, online account opening, electronic statements, etc.). Many financial products can be researched, selected, purchased/enrolled, used, serviced and closed/terminated all digitally, with a majority if not all bank back office functions technologically automated to the point that no human interaction occurs with any transactions. Hence the graphic for this time period reflects a significant overlap between IT and the business.

Present and beyond = Continued evolution

What do the years ahead look like for corporate IT and business evolution? Not owning a crystal ball and only listening to the IT pundits, one might think corporate IT is going to disappear and the business is going to go directly to the “cloud” for all their IT needs. One can Google “end of the CIO” and scroll through the pages of articles making such claims.

I do think that the trend of IT splitting into highly business aligned groups alongside a commodity/raw technology providing group is going to continue unabated. Additionally, I think there will be increased growth in business process outsourcing which, depicted as the example HR cloud bubble in the above graphic, involves not only using cloud IT services but having that cloud service provider take on more business tasks previously handled by company employees. “Commodity” business functions, such as aspects of, say, HR and Finance, are the potential candidates for this complete business process outsourcing. Functions such as processing payroll, previously handled completely in house, can now be a data file out of an HR system (potentially in the cloud itself) and into a new cloud provider that figures out who needs a check or direct deposit or a purchase card, etc. Due to the need for complex IT integrations of these services, passing data electronically between the company and cloud providers for various reconcilement needs, the business will need to partner with their IT delivery teams to effectively utilize these services. Thus, the business and IT partnership will continue rather than the cloud consuming in house IT.

Additionally, I see cloud service providers subbing out portions of their services to other cloud providers. This allows cloud providers to further optimize the cost to deliver services while at the same time increasing the contract and integration complexity for their customers. As far as public/private “hybrid” clouds go, less security-pressured industries, say non-enterprise software development companies, may gain “hybrid” traction. My guess is highly secure environments will grind to an almost complete halt with “hybrid” clouds due to the general security noise around cloud hosters as well as all the access control, monitoring and reporting that traditional on-premise “identity and access management” capabilities provide.

All in all, corporate IT is in constant evolution. The rate of evolution appears on all counts to be accelerating with no signs of any slowdown in the future. Anyone who isn’t prepared to have to constantly re-invent themselves should seriously consider a different career path than corporate IT.

Anyone disagree with my evolutionary perspective or future predictions?


Maybe, just maybe, by some miracle, you work in some IT shop where the mere mention of a new technology investment that “just makes sense” translates into executive support to go forth and procure. I would venture to guess you are not or you wouldn’t be out reading blog posts on how to translate your good ideas into execute-able business cases. In the initial articles (here and here) quite a bit of time was spent justifying the need to express your tech invest idea in business or “executive consumable” terms. In the most recent article, the concept of a “business case” was introduced as the story told to get those tightly clutching the dimes to hand you some to procure your recommendation. This article outlines an approach that uses this storytelling as a way to prioritize the data gathering for a formal business case to support your ultimate investment recommendation.

So, if you’ve read all three articles to date, I am going to quickly try and re-tie the concepts together to reflect how they all relate:

Value Proposition = Your succinct, business digestible “elevator pitch” recommendation

Business Case = The formal (or informal) manifestation of all the data supporting your value proposition

“The Story” = Embellishes your “value proposition” into a narrative, supported by your business case, that in essence, convinces all readers that they would be absolutely crazy not to invest in your recommendation

As much as I would like to say there is a specific order to completing each concept outlined above, I would be fibbing in doing so. Many times I’ve revised a value proposition as I collected business case data points or changed a story to align to a value proposition revision. As a further example, in a past company, I was compiling a recommendation to invest in a centralized employee access management tool (one system to dole out who can have access to what application, etc.). I thought the original value proposition was increased security, but as I dug into the data, the stronger value proposition changed to one of operational efficiency (people could spend more time being productive rather than sorting out how to get access to systems). Sure, the original concept of better security didn’t diminish; rather, the story around spending some money to implement a system for overall operational efficiency gains would garner more support given that company’s fiscal focus at the time. Going in with a story around “better security” would have been met with “um, why do we need that?” Having gone in with “spend some money now to get all this savings year over year” was received as “spend some now, save lots year over year, when can we start?” Thus, be open to revising what you initially draft. Remember, your goal is to come up with the most effective value proposition, business case and story to get your recommendation funded.

Thus, start by jotting down initial thoughts around your value proposition and “story” while collecting the associated data points for your business case. Be prepared to adjust each as your investigative journey evolves.

Ok, so you might be thinking: “I get the value proposition thing and this structured, template-y business case-y thing, but how does this story thing really fit in? Can’t the value prop and business case be enough?”

Interestingly, on this line of questioning, it just so happens that Seth Godin recently posted an article entitled “Every Slide Tells a Story” with this phrase being directly in line with my reasoning:

“Your Powerpoint is not a presentation of data. It is a story, a story designed to change minds.”

Only one employer in my career had a business case template that afforded the ability to really embellish the benefits compared to the costs with some prose rather than just raw numbers. It just so happens that the CIO, who ultimately had to accept/reject proposals prior to finalizing his budget with the CFO, expected to have a narrative document that convinced him of the benefits versus the costs. He made everyone use that business case template to do that for him. I quickly picked up that to be effective, I needed to create a compelling “story” he could relate to, supported by “the numbers”, that he could then take to the CFO. This “story” should be whatever best leverages your skills in augmenting the facts-and-figures nature of the business case to convince those with the proverbial purse strings to loosen them.

Consider using Microsoft PowerPoint as a tool to present your “story” if the business case template is constraining the narrative

Yes, merely utter the words “PowerPoint” to the average IT engineer and the immediate response is probably a groan rather than enthusiasm. The phrase “death by PowerPoint” is all too reflective of the average presentation artifact that is slide after slide, bullet point after bullet point of the hemorrhaging of Arial 14-point font ramblings of the speaker. Most notable in its aversion to PowerPoint is Amazon, Inc., which is known for specifically banning the use of PowerPoint for any company business. Also, I recall articles (the references escape me) indicating that when government officials started to collaborate with automotive companies involved in the 2008-10 US financial crisis bailout, all meetings hosted by the auto companies requiring even the simplest decisions involved carefully constructed and reviewed PowerPoint presentations. This furthers the corporate urban legend that “PowerPoint” is a tool of inefficiency. So yes, PowerPoint as a tool can be counterproductive if not used effectively, in the same manner that Java or .NET can support elegant, efficient code or bloated, bug infested code that just barely compiles. It is not the tool’s fault for poor quality; rather, it is the individual using the tool.

Thus, put aside any ill feelings you might harbor toward presentation software such as PowerPoint. Consider embracing it as a tool to help you expound upon your value proposition and extract the material data points from your business case to tell a story that leaves the executive thinking: “Ok, I buy it, when can I get it?”

The next article will outline story formats to help you compel your audience to agree and invest.


Maybe, just maybe, by some miracle, you work in some IT shop where the mere mention of a new technology investment that “just makes sense” translates into executive support to go forth and procure. I would venture to guess you are not or you wouldn’t be out reading blog posts on how to translate your good ideas into execute-able business cases. The prior article covered adding some management speak to your vocabulary as well as being able to quickly and succinctly articulate the value of your idea for executive consumption. This article will expand upon the concept of executive consumption with the goal of increasing your chances of getting your great idea funded.

In the prior two articles (here and here) quite a bit of time was spent justifying the need to express your tech invest idea in business or “executive consumable” terms. If you are not convinced by now that re-thinking your approach to getting your company to support your technology purchase idea is necessary, I doubt I will be able to convince you no matter how I try. This article gets down to the meat of how you need to package your idea in “business case” form. A “business case”, as defined by Wikipedia, “captures the reasoning for initiating a project or task”. The Wikipedia entry also indicates they can range from informal and brief to comprehensive and highly structured. Your first task, before you even start compiling your “business case” content, is to determine your company’s expectations around the level of formality in its business case review/approval process.

First step, determine your company’s business case review process

Now is not the time to guess. Ask around to get answers to these critical initial questions. Nothing could be more frustrating than putting a ton of time and energy into compiling and formatting a bunch of data only to find out you have to completely re-format and re-calculate everything to fit a Finance department required template or some other expectation.

Initial questions to get your bearings on your company’s expectations:

  • Is there a standard template used to present business cases?
  • Can I get a copy of some business cases that were recently approved for funding?
  • Is there someone or a process that reviews business cases for completeness that can provide feedback?
  • Can I speak with someone who has successfully compiled business cases that got approved in the recent past?
  • Is there any internal preparation and/or training available around business case creation and the review process?
  • Are there “partners” or “liaisons” I need to work with prior to submitting?
  • Is there a formal business case review meeting I could listen in on?

This line of questioning should give you a sense for how formal or loose your company’s business case creation and approval process is. And don’t forget one of the most important questions:

  • What is our company’s business case submission and review cycle? Monthly, quarterly, (gulp) yearly?

That question alone should give you a wealth of insight into what you need to be prepared for in terms of narrowing your business case efforts. To give some sense of the potential polar extremes you might run into: in one small company I worked for, there wasn’t even a formal “business case” process per se.

Business people just grabbed the closest IT person and stated their demands. One month of trying to manage delivering IT solutions amongst this chaos and I knew I needed to get some control and work priority established. After creating a makeshift portfolio management function and establishing a weekly (every Tuesday at 9am if I recall correctly) work request and review meeting with critical business stakeholders, the work request and delivery process eventually became much more organized. Once organized, the focus of the meeting quickly turned to essentially an informal business case review session. Business stakeholders, competing for limited IT resources, were forced to justify the importance of their needs. Loud voices from the past, used to getting attention and request fulfillment, were now silenced when those requests didn’t stand up to the true revenue generating or expense optimization priorities of the organization.

On the opposite end of the spectrum was a much larger organization that had multiple layers of portfolio and project management offices and functions with a yearly budgeting cycle demanding a Finance department constructed template for every dollar spent on IT. The formal yearly budget was augmented with quarterly and monthly reviews of business cases and associated projects. With multiple funding sources, “line of credit” spending draw downs and “small project” buckets of IT money, there were multiple full-time positions dedicated just to the business case process itself. Sometimes it seemed more time was spent trying to get the proverbial arms around the IT money being asked for and spent than trying to get approved IT projects delivering their tech solutions.

It is absolutely critical you get a basic understanding of where your company stands on this spectrum of informal to highly formal business case review/approval process. Once you have this foundational knowledge, using Wikipedia as a guide, your business case should fundamentally address these basic components:

  • Preface
  • Table of Contents
  • Executive Briefing
  • Recommendation
  • Summary of Results
  • Decision to be Taken
  • Introduction
  • Business Drivers
  • Scope
  • Financial Metrics
  • Analysis
  • Assumptions
  • Cash Flow Statement (NPV)
  • Costs
  • Benefits
  • Risk
  • Strategic Options
  • Opportunity Costs
  • Conclusion, Recommendation, and Next Steps
  • Appendix

Regardless of the adherence to this exact outline, your business case should be addressing the above points in some form or fashion. As I think back on that past employer with the weekly work request meeting, business stakeholders fundamentally addressed all these points in their verbal discussions. Many requests were essentially approved after announcing the bullets under the executive briefing when all around the table knew this work needed to get done. Healthier, sometimes animated, discussions occurred when the executive summary wasn’t compelling and the requester had to verbally spar on the business drivers and financial metrics. Those that had assembled the data and talking points around those business case aspects more often than not got the green light. Those that didn’t do their homework and failed to bring a strong case were stuck in queue for when IT had slack to entertain their request in the future, if ever.
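
For the financial metrics portion of that outline, the cash flow and NPV items tend to be the most intimidating to engineers, so here is a minimal sketch of the arithmetic. The investment, benefit and discount rate figures are purely hypothetical; your Finance department will dictate the real discount rate, time horizon and template.

```python
# Minimal NPV sketch with purely hypothetical figures; your Finance team
# dictates the real discount rate, time horizon and cash flow template.
# NPV = sum over years t of (net cash flow in year t) / (1 + discount_rate) ** t
discount_rate = 0.08
cash_flows = [-250_000, 90_000, 110_000, 110_000, 110_000]  # year-0 cost, years 1-4 benefit

npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))
print(f"NPV at {discount_rate:.0%}: ${npv:,.0f}")  # a positive NPV supports the ask
```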

Thus, with some preliminary knowledge on your company’s business case expectations, you can set about gathering all the data points to construct your case. But what may not be immediately obvious is that the business case, though seemingly dry and heavily numbers focused, is actually meant to tell a story. In fact, the story it tells is meant to convince those who are focused on tracking every dime the company spends, that for your request, they should find all the dimes needed to fulfill your recommendation. The next article will dig more into the business case as a story to guide your data collection and construction.


Again, many thanks to Shim Marom for coming up with the idea and coordinating a flash blog around the topic of project management.  Many contributors wrote some excellent articles on project management.  I was appreciative to be asked to be a contributor and a link to my article can be found here.  Now,  Allen Ruddock, Director of ARRA Management Ltd has compiled all of those excellent articles into an ebook entitled “What does Project Management mean to me – a Project Manager’s Sermon”.

I encourage everyone to download a copy and enjoy all of the great articles on project management!


Maybe, just maybe, by some miracle, you work in some IT shop where the mere mention of a new technology investment that “just makes sense” translates into executive support to go forth and procure. I would venture to guess you are not or you wouldn’t be out reading blog posts on how to translate your good ideas into execute-able business cases. The prior article covered adding some management speak to your vocabulary as well as being able to quickly and succinctly articulate the value of your idea for executive consumption. This article will expand upon the concept of executive consumption with the goal of increasing your chances of getting your great idea funded.

As was mentioned in one of the examples in the previous article, the obvious technology advantages of picking ABC vendor over XYZ vendor are made more “executive consumable” by stating the selection goal as the value proposition of gaining a yearly operating cost savings of $100k. By honing in on what executives are prioritizing rather than what you or other technologists would prioritize, you have a much better chance of getting their attention and support. Thus, as you are salivating over the latest technology advance you want the company to use, try and determine how this technology will:

  • Deliver the same or better business capability for a lower overall cost now and/or over time than what exists today.

and/or

  • Solve a new/known business problem and/or project requirement quicker, better and/or cheaper.

Note, there is quite a liberal use of the word “business” in these choices. Cool tech for the sake of cool tech rarely, if ever, is the justification for a business to spend money. The list could be expanded upon, and hopefully someone will add to it via comments on this article. But as I really thought about it, these two represented the fundamental value propositions any technology investment recommendation should address. The first is something being done today with technology that you are proposing can either be done for a lower cost or done better for the same cost. The second is something that is needed because it isn’t being done today, and what you are proposing will deliver a solution for that need that will take less time to implement, cost less, or offer richer features for the same cost as an alternative recommendation.

But my tech is just the right tech?

It very well could be the right technology, and an army of your peers and industry experts may be willing to agree with you. The case I am making is that once you determine the best technology, devote some energy to determining how your technology can deliver within those executive digestible parameters mentioned above for the best chance of your recommendation getting green-lighted.

So if everyone agrees my tech is correct, why do I have to do this extra business-y work?

How many times have you seen solid technology people know that a particular product or vendor is the best choice for a given need only to discover someone else (“management”, “crazy executive”) went off and bought an inferior product? I would venture a guess that 4 out of 5 situations like these occurred because there was no solid “business case” that clearly outlined the value proposition for the best technology in front of that person. I say 4 because even with the most bulletproof “business case”, there are extenuating circumstances that are beyond every technologist’s control. The classic “quid pro quo” example comes to mind, where your company buys a product from vendor A because vendor A agrees to buy a product/service from your company. The value proposition of the revenue stream and overall partnership relationship in such arrangements can overshadow a classic cost/time/value business case.

Ok, so you are starting to buy into needing to expand beyond the pure “buy it because it is the best technology” notion. How does one produce a “business case” that outlines the executive consumable value proposition I have in mind? The next article will dig more into assembling such a business case.


Maybe, just maybe, by some miracle, you work in some IT shop where the mere mention of a new technology investment that “just makes sense” translates into executive support to go forth and procure. I would venture to guess you are not or you wouldn’t be out reading blog posts on how to translate your good ideas into execute-able business cases. IT shops these days are trying to balance all of the competing priorities of keeping the tech they have running and implementing new solutions the business is demanding, all while being forced to maintain, at best, a flat budget level year over year. I am also going to clear the air early in this post by stating that even if you have the most compelling business case for a new tech investment that will clearly pay dividends year after year, there still may not be the management appetite to invest. But you can make your case more compelling by relating the value proposition to challenges that executives can better understand. “I like what you’ve put together here but we don’t have room in the current budget …” is still a win for any technology professional.

Learn to appreciate some new terminology

Now, as an engineer, if you are rolling your eyes at words like “business case” and “value proposition”, unfortunately, those words are part of the language of management. Like it or not, at some point in your career, if you want to get someone with the purse strings to give you the funds to implement your great and wonderful idea, you are going to have to pitch that idea. As an example, think of all the times someone has approached you about your area of tech mastery and you brushed them aside because they clearly couldn’t demonstrate competence. How did you conclude their neophyte-ism? I bet they didn’t use words and phrases that you could immediately relate to. They didn’t use the language you yourself use every day to talk about your work. Thus, if you want a better chance that your cool new tech investment will get action, consider adopting less techie and more business-y language and start selling!

Hone in on the major value proposition quickly

So, yes, roll your eyes again at the phrase “value proposition”. What the phrase really means in this context is “what am I going to get if I do what you say?” Oversimplified: the major value proposition of your car is that it gets you safely and swiftly to work each day. The minor value can be, depending on the vehicle you choose, creating a specific personal image or the ability to haul a yard of mulch for your weekend landscaping project. Make sure you can quickly and easily outline the major value proposition of your idea. Having a laundry list of additional benefits is nice, but with today’s hectic executive schedules, being able to quickly identify “what am I going to get” from your pitch is extremely important. “You want me to spend X in order to get Y” is what you are targeting. If an executive has to spend even a few minutes trying to understand what you are asking for or what the benefits are, you’ve already lost some credibility in selling your idea. “But my idea is so awesome, every exec should be willing to listen to me extol the cavalcade of benefits!” Believe me, execs are getting slammed with staff, vendors and IT pundits telling them where to spend their limited funds. You need to hone in on the major value proposition quickly and efficiently to lock in an exec’s interest.

Examples to better grab your executive’s attention (a rough sketch of the arithmetic behind one of these follows the list):

  • Upgrading the Flim-Flam application to the latest version will close out 15 features or 45% of the business’s backlog
  • Purchasing ABC application will automate 3 manual steps per widget in the workflow resulting in a 4 FTE savings each month
  • Selecting ABC vendor instead of XYZ vendor will result in a ~$100k total operating cost savings annually
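
To show where a figure like the 4 FTE savings in the second bullet might come from, here is a rough back-of-the-envelope sketch. Every number in it (widgets per month, minutes per manual step, hours per FTE) is a hypothetical assumption you would replace with your own operational data.

```python
# Back-of-the-envelope arithmetic behind a "4 FTE savings" style claim.
# Every input is a hypothetical assumption; substitute real operational data.
widgets_per_month = 2_500
manual_steps_automated = 3
minutes_per_step = 5
fte_hours_per_month = 160

hours_saved = widgets_per_month * manual_steps_automated * minutes_per_step / 60
fte_saved = hours_saved / fte_hours_per_month
print(f"Hours saved per month: {hours_saved:,.0f}")  # 625
print(f"Approximate FTE savings: {fte_saved:.1f}")   # ~3.9, roughly the 4 FTE above
```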

Note there is an obvious lack of significant techno-speak. Sure, the latest Flim-Flam version has a fully re-written SOA services layer. The ABC application can be completely virtualized and deployed within your new private cloud. ABC vendor is using the latest technology and has all these cool new mobile features, whereas XYZ is still a legacy client-server fat client architecture. Remember what is going through the mind of the executive you are pitching this idea to: “what am I going to get for saying yes to this idea?” If you can’t describe the succinct value of your pitch in executive digestible terms, as I’ve tried to make the case for in this article, you are less likely to convince the one with the purse strings to spend on your idea.

Look for the next article to expound upon what “executive digestible” really means to get stronger buy-in to your idea.


As those that read my blog know (please click on the About This Author link if you haven’t), I primarily focus on corporate IT concepts in large organizations that consume plenty of IT, but where IT isn’t the company’s core product or service. Projects and project managers play the role of herding the proverbial cats in order to deliver material IT change in these large environments. With that being said, project management in large organizations tends to be exceedingly challenging. Project Management Offices are staffed with folks trying to implement appropriate processes such as Project Life-Cycles (PLC) and Software Development Life-Cycles (SDLC), with all kinds of project toll gates to try and monitor project spend as well as quality metrics and other such governance structures. Additionally, Project Managers report to Program Managers that report to Portfolio Managers and into Enterprise PMOs in these matrixed/dotted-line organizations. On top of that, reporting structures are constantly vacillating between a central pool of project management talent everyone draws from and talent directly reporting into the IT solution delivery teams. With hundreds or thousands of IT workers all trying to get work done, implementing change while trying to maintain the stability of production services makes strong project managers critical to the successful delivery of change. Thus, when Shim Marom asked if I wanted to participate in this flashblog on the topic of “What does Project Management mean to me”, I jumped at the chance to add my voice in with all of the excellent bloggers Shim has assembled on this topic.

What does Project Management mean to me? Or …

The three attributes of IT project management that make an effective project manager stand out amongst their peers.

1. Knowledge of the PLC/SDLC, but more importantly, the processes behind the processes

An attribute that makes a project manager effective in their role in a large IT shop is knowing all of the project processes inside and out. The project manager essentially helps guide the core project team members through those formal processes such as funding tollgates, quality milestones, as well as project and technical reviews. If an engineer has to stop engineering in order to determine which document or form they need to fill out to request a review of some deliverable or artifact, without knowing what content it needs to contain, it adds considerable stress on the engineer as well as delay and overall confusion for the project team. For a project manager to be effective, knowing these processes thoroughly is essentially table stakes in a large IT shop.

Now, what makes a project manager excel in their role is knowing all the processes behind the processes and all the people that can help move those process steps forward. The majority of these project process steps involve someone or some group that needs to hear or see certain information in the way they are used to seeing or hearing it in order to approve the project team to move forward or assign a critical resource to complete a task. For example, Sally in “Project Accounting” needs to have a certain spreadsheet filled out a certain way, with these certain types of hours accounted for in this cell and those other hours accounted for in that other cell. When Sally gets that spreadsheet filled out in exactly the way she is used to seeing it, she can quickly push the “Approve” button in the project management system that enables, say, corporate procurement to indicate the vendor on the project will get paid and thus the vendor can start working. When Sally has to explain exactly what she needs in order to push that button, it is typically impossible to get a hold of her to join a meeting, and when she does, she confuses everyone with her extreme accounting lingo that no engineer can comprehend, and thus begins the rinse and repeat cycle of throwing darts at the spreadsheet in hopes you get Sally what she needs. Being able to support the project team with this critical knowledge of how to get through the “Sally-gate” with the minimum of fuss is what makes a project manager excel in a large IT shop. A project manager with this type of value add is constantly in demand and frequently requested to lead projects, to the point of having to turn work away in my experience.

2. Ability to translate a technical goal into the bare minimum of project steps to complete

Another attribute of a strong project manager is their ability to understand a project team’s technical goal and be able to translate that into the bare minimum project steps to achieve that goal. Here is an example project team conversation that illuminates this attribute:

Joe Project Technical Lead = “Ok, engineer Bob just found this software component that looks to do exactly what we thought we would have to pay the vendor to do in their product. We need to get this into the test environment in order to see it interact enough in some real world scenarios in order to make the call on using it or go back to the original expensive vendor option. We don’t have enough data and integrated systems in the dev environment to really determine if this is gonna work. How can we avoid all the testing and validation steps that the SDLC says the testing team needs before giving the green light to install this for us to use in Test?”

Project Manager = “Well, because of all the production problems recently, those testing steps we used to be able to get a pass on are now absolutely enforced. I haven’t heard of anyone in the last month getting a pass to skip a single step.”

Joe Project Technical Lead = “We don’t have two months to go through all the rounds of testing of a new component just to find out it is crap.”

Project Manager = “Ok, I have a plan.”

Joe Project Technical Lead = “I’m all ears.”

Project Manager = “Have Bob ask Judy in Operations to open a trouble ticket on the Flim-Flam app. Have Bob give you the trouble ticket number he gets from Judy. Then you call Tim in the support team and let him know you have a patch for that trouble ticket. If we call it a patch not a new component, Tim’s team can install it in the Test environment as a ‘production defect resolution’. Have Bob install it in Dev, write up the patch install documentation and attach that doc to the trouble ticket resolution section. Once Bob has done that, you can call Tim and ask for a resource from his team to install the patch indicating the docs are attached to the ticket. If I call, it will look like the project is making the request. If you call, it is Development providing a defect fix. Then have Bob go through the emergency development access process to the Test environment after Tim’s resource has updated the trouble ticket with a patch installed status. Bob can do whatever testing you think you need to make the call if it is gonna work or not.”

Joe Project Technical Lead = “Do you think Judy is going to go along with this plan? In order to back all this out, she is going to have to call back in and say the trouble ticket can be closed because the defect wasn’t an actual defect.”

Project Manager = “Yes, I’ve worked with Judy before. Plus, her team benefits from some of the new functionality delivered in this project phase, so she has a vested interest in helping us push this forward. Plus, Tim is under pressure to show progress in trouble ticket closure metrics, thus he is going to want to get an offshore resource engaged to close this new ticket quickly.”

Joe Project Technical Lead = “Ok. I’ll grab Bob and fill him in on the plan.”

A project manager that just reiterates the formal process may be doing their job, but a project manager that knows how to translate a project goal (in this example, gaining additional hands-on confidence in a change to the project solution) into the minimum steps to achieve it brings real project management value to the project team.

3. Attention to detail and follow through

In trying to narrow down to the three strong attributes a project manager needs to excel at project management in a large IT shop, having a strong attention to detail and follow through may seem, again, table stakes for all project managers. In my experience, there is a constant state of noise surrounding a corporate IT project that needs constant squelching. In contrast, short running projects, of which there are very few in large shops, can usually squeak by with minimal outside interference. That minimal interference can usually be addressed by an average project manager. Projects that run many months or years don’t have that luxury.

For long running projects, it is absolutely critical for the project manager to be completely on top of all the noise and know who to engage to determine whether the noise can be ignored or whether it represents a material impact to the project. One example of noise is a newly proposed enterprise component that everyone needs to use that, on the surface, sounds like a critical path item for the project team but, in reality, has no funding support and no project toll gate or review that will enforce its use. Such noise, once determined to be true noise, needs to be cast out of scope to keep the project resources focused on delivery as quickly as possible. An example of noise that can’t be ignored might be a new project funding review activity that has enough executive support to warrant proactive insertion of the project into the review pipeline ASAP to ensure smooth sailing through the new process. Ignoring such noise results in the discovery, down the road, of a roadblock when the project is at a critical milestone. Scrambling resources in a finance “fire drill” activity late in the project is obviously inefficient. Calls of “How come we didn’t know about this sooner? Does this put the delivery date in jeopardy?” from the project sponsor cast considerable doubt on the effectiveness of the project manager.

Thus, while it seems like table stakes for a project manager to be attentive to details, the larger the project the greater the need for a project manager to be vetting details and sorting noise from real project activities. Project managers that have the skills and intra-company relationships to quickly vet the noise and squelch or engage efficiently excel at delivering projects on time, on budget and without undue stress on the project team.

P.S. This post is published as part of a first ever project management related global blogging initiative to publish a post on a common theme at exactly the same time. Seventy four (74!) bloggers from Australia, Canada, Colombia, Denmark, France, Italy, Mexico, Netherlands, Poland, Portugal, Singapore, South Africa, Spain, UK and the USA have committed to make a blogging contribution and the fruit of their labor is now (literally NOW) available all over the web. The complete list of all participating blogs is found here so please go and check them out!


In keeping up with my personal trend of being an early technology investigator yet a late technology adopter, I made the leap into tablet computing later than most. I’ve written about my initial tablet computing experiences here, here and here. As you can tell, all around, I was extremely impressed by what Blackberry did with the PlayBook and its tight integration with their mobile phones through their “Bridge” application. The ability to use two devices yet share email, notes, calendars, contacts and network access seamlessly was extremely personally productive for me. Thus, when Blackberry announced the new Z10 phone that would run Blackberry’s new Blackberry 10 (BB10) operating system and support for 4G, I thought it was a good time to take advantage of my carrier’s aggressive reminders I was due for a new phone.

Just like my PlayBook experience, and contrary to industry punditry opinion, I was quite impressed with the Z10. It was simple to operate, and I was able to find all the basic productivity apps I had come to rely on in the app store. Connecting the Z10 to the PlayBook continued to be a smooth process. So, with building enthusiasm, I downloaded the “Bridge” app and looked forward to continued multi-device synchronous productivity.

Nope.

Gone were those lovely calendar, contacts and email icons I had come to rely on. They were replaced with a text file that stated something to the effect of “icons will return in the next version”. Ok, don’t panic. With all Blackberry has invested in the new phones and operating system, and with me being, uncharacteristically, an early adopter, I’ll be patient and I’ll be back to full functionality in no time. The phone’s data plan is still extended through Bluetooth to the PlayBook, so Blackberry didn’t take everything away. Unfortunately, I quickly discovered that only native PlayBook apps can use that tethered data service. Non-native apps can’t, and if I really want those to connect I have to fumble with making the phone a wifi hot spot.

Confidence built as Blackberry made announcement after announcement about future platform and product investments. “Blackberry 10 coming to the PlayBook” was the most intriguing. The “hub” concept Blackberry introduced, bringing everything (alerts, emails, tweets, calendar events, Facebook and LinkedIn notifications, etc.) into one easy-to-sort-and-manage list, replicated on the PlayBook and kept synchronized? Ok, I was thinking Blackberry was really looking to support the productivity-focused, not game-distracted, mobile device user.

Nope.

Then came the announcement that I immediately knew would direct me away from further Blackberry product investments: no BB10 coming to the PlayBook. On top of that, no further investment in the PlayBook itself. With the challenge of keeping email and contact data in sync between two devices that used to just magically handle that for me, coupled with the screen flickering and cutting out from all the abuse my poor tablet has suffered in my travels, I knew what my next decision would be:

My next mobile device purchase would not be a Blackberry product.

So, resigned that what had been evolving into a very productive set of tools was not going to materialize, I proceeded to get my hands on Google’s new Nexus 7 tablet. The PlayBook now sits on the corner of my desk, leaving me wondering what I should do with it, since it was incredibly useful but the industry has moved forward without bringing it along. Old computer guy “get off my lawn” warning: anyone remember the Apple IIgs?


As I started flipping through “Cloud Computing – Concepts, Technology and Architecture” (Amazon link) from Prentice Hall, Pearson Education, Inc., I immediately knew this was not a lightweight, marketing-heavy treatment of the “cloud” topic. Since “cloud” became a marketing mantra a few years back, I’ve been a bit critical in responding to the onslaught of cloud computing “game changing” claims by IT pundits, both in blog comments and in my own blog post on the subject here. As I completed my initial skimming of the text, I was very impressed with the authors’ complete lack of any cloud sales theme anywhere in the book. Thus, initially relieved this would be a serious text on cloud computing, I sat back with note-taking at the ready to dive into reading “Cloud Computing – Concepts, Technology and Architecture” by Thomas Erl, Ricardo Puttini, and Zaigham Mahmood.

Review Summary

Free from all the “cloud” hype, this book provides the cloud fundamentals that enable the IT practitioner to cut right through the vendor sales pitches and make effective decisions on efficiently leveraging cloud services for their business needs. Covering the gamut of cloud business use cases, from startups building out their IT infrastructure in the cloud to providing hosting services to government entities as a cloud provider, the book gives the reader every angle of cloud. Using a textbook-like format of theory tied to practical use cases and to three case studies running throughout, the authors have a very effective structure for giving the reader ample means of comprehension. The generous use of pictures (~260) provides the reader a complement to the narrative explanation of the many cloud architecture concepts outlined. Starting with the basics and moving to exceedingly advanced cloud service delivery models, the book helps both novice readers and advanced practitioners glean valuable insights into cloud architecture models. All in all, a well-constructed and thoroughly researched book. The only glaring gap I discovered was the relatively light and mostly theoretical coverage of cloud security. Given that security is one of the most misunderstood and challenging barriers to cloud adoption, I was left disappointed with its mild coverage juxtaposed against the expansive and deep coverage of all the other cloud topics. I strongly recommend this book to both novice and advanced practitioners interested in ensuring they have the broadest understanding of cloud. For those in need of a deep cloud security education, you might find this book lacking.

Review at Length

First and foremost, if you are looking for a light read that just touches on the high-level cloud computing topics, this is definitely not the book for you. It is structured more like a college text; partway through, I was thinking it would make an excellent book for a college course on cloud computing. The only thing missing, in my opinion, to make it perfect for academia is a section on the history of computing from the perspective of mainframe to mini to PC/server to virtual machines to today’s cloud platforms. The few paragraphs covering history at the beginning of the book wouldn’t be enough to really give students the full spectrum of cloud evolution.

The three case studies, representing three different business needs for cloud, were used very effectively throughout the text. The companies and their cloud business interests were introduced at the beginning of the book and then periodically referenced throughout as the subject matter overlapped with a component of a case study. I found this to be a great technique to reinforce the topic the authors were addressing by mapping the cloud theory outlined to its real-world application via the case studies.

Additionally, the authors used architectural drawings frequently (~260) to support the textual descriptions of many of the concepts. This was another very effective way to reinforce the concepts the authors were trying to convey. The Arthur Brisbane quote, “Use a picture. It is worth a thousand words.”, definitely applies. One can try to describe network traffic load balancing across multiple data centers in words, but a simple picture can solidify the concept much more effectively with a technical audience.
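To make that load balancing example concrete without a picture, here is a minimal round-robin sketch in Python. This is purely my own illustration (the data center names and rotation logic are hypothetical), not an architecture or code from the book.

# Minimal, hypothetical sketch of round-robin load balancing across data centers.
import itertools

data_centers = ["dc-east", "dc-west", "dc-central"]  # hypothetical sites
rotation = itertools.cycle(data_centers)             # endless round-robin iterator

def route_request(request_id):
    """Assign the next data center in the rotation to an incoming request."""
    target = next(rotation)
    return f"request {request_id} -> {target}"

for i in range(6):
    print(route_request(i))
# Requests 0..5 alternate east, west, central, east, west, central.

Real cloud load balancers also weigh health checks, capacity and latency, but even this toy rotation shows why a single diagram of the same idea communicates faster to a technical audience.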

The authors combined the very effective pattern of:

1. Tell the reader what they are going to learn in the next section
2. Verbose topic explanation in that next section
3. Followed by a summary of what you just read/learned in that section

Joining narrative with architectural drawings (pictures) and case study references, the authors have a great format for ensuring the reader gets multiple views of the important topics to increase comprehension.

With cloud computing being such a broad topic, with so much confusing and contradictory material in the market about what is and isn’t cloud, the authors start right in chapter one by collecting the various definitions and normalizing one for the reader. This was a great place to start. Also, the quote the authors selected from John McCarthy on page 26 is the one I frequently use when I present on security topics around cloud computing (such as this one of my presentations). I had to enthusiastically smile at that quote choice.

Part I – “Fundamental Cloud Computing”

The first part of the book, entitled “Fundamental Cloud Computing”, is extremely well done and indeed provides a fundamental grounding in a comprehensive list of cloud computing topics. I found myself frequently wishing I’d had this material a few years ago when the “cloud evangelists” came on the IT scene touting all the marvelous, disruptive, game-changing advantages of “cloud”. The “X-as-a-service” section leaves no ambiguity around these frequently confused definitions of cloud functional delivery capabilities. Given my current focus on information security in my day job, I was highly interested in the sixth chapter on fundamental cloud security. Upon completing chapter six, I felt the authors touched on the basics and effectively left me wanting more detail, which I hoped would be covered later in chapter ten. As most know, there is plenty of concern around the security ramifications of moving your valuable computing and data from your technology in your data center to some magic data center in the proverbial sky.

Part II – “Cloud Computing Mechanisms”

The second part of the book, entitled “Cloud Computing Mechanisms”, builds upon the basic concepts in the first part to give the reader the next, deeper level of understanding of what goes into delivering computing services via the cloud. All the fundamental assemblies of redundant technologies for high availability, network scale and load balancing are covered to a depth the IT generalist can appreciate. Here, again, is where the linkage to the case studies proves exceptionally effective in connecting the theory to the real world.

The “ready-made environment” covered in chapter six was a great device for educating the reader. Having spent quite a bit of time researching the cloud computing topic since its recent “marketecture” invention, I was easily able to follow the authors. Yet, at the same time, as an advanced reader, I have some concerns that a more novice reader won’t be able to immediately grasp the concept. The concept is an important one: the ready-made environment compared to the legacy corporate IT processes and new-build technology associated with standing up a business-consumable computing platform. I think, here, some reference to the case study involving a corporate IT shop looking for agility in the cloud would have been additionally helpful.

In general, aside from the ready-made environment comments above, the authors did a great job building on the fundamentals of the prior section, with one notable exception: chapter ten’s coverage of security mechanisms. As I read chapter ten, I was really looking forward to the same level of pragmatic coverage of IT infrastructure fundamentals mapped to the cloud as in chapters seven through nine. What I experienced was what I would classify as more general security topics rather than specific cloud adaptations. The concepts of hashing, encryption and single sign-on were well defined, but I felt as if that entire chapter could have been cut and pasted into a “security fundamentals 101” book for general security practitioners. There didn’t appear to be a strong, direct mapping of the security concepts to practical use in cloud situations. Trying to be fair, given the significant investment in my own IT security knowledge and experience over the course of my career, I didn’t expect to be presented with groundbreaking security reference architectures. Yet I felt as if the authors failed to provide enough useful guidance to the reader on one of the most foundational barriers to the use of cloud: security. Thus, if I had to express one glaring gap or weakness in the entire book, it would be the lack of depth in the security coverage compared to the effective coverage of the non-security cloud topics.
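To show the kind of “security fundamentals 101” material I am referring to, here is a minimal sketch of hashing a message for integrity checking. This is entirely my own illustration using Python’s standard library, not code or an example from the book, and the shared key and message are hypothetical.

# My own illustration of generic integrity checking via an HMAC digest.
import hashlib
import hmac

secret_key = b"shared-secret"              # hypothetical key shared by sender and receiver
message = b"cloud consumer usage record"   # hypothetical message

# The sender computes an HMAC-SHA256 digest and ships it alongside the message.
digest = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
print("HMAC-SHA256:", digest)

# The receiver recomputes the digest and compares it in constant time.
recomputed = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
print("message intact:", hmac.compare_digest(digest, recomputed))

Useful, yes, but nothing in it is cloud-specific, which is exactly the gap I felt in chapter ten: the mechanics are sound, the mapping to cloud scenarios is thin.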

Part III – Cloud Computing Architecture

The third part of the book, entitled “Cloud Computing Architecture”, further assembles the prior section’s components into more cohesive architecture models. Each model is well described in terms of the technology-focused services it provides, and I was able to clearly follow the authors’ description of each model. What would have helped me, and I assume other readers, is more association with real-world or relatable uses of each model. Where the authors did effectively relate the theory to case studies and practical examples in prior sections, this section seemed very heavy on theory with little to no case study or example linkage. Also, as I was reading, I was hoping the authors were going to provide some sort of model interrelation, or a matrix that helps the reader compare and contrast the models. Maybe the enterprise architect in me was looking for pros and cons, or ways to easily determine which model applied most directly to which business need. By not finding that, I was left a bit disappointed. In general, this section had great content, but it didn’t seem as well constructed and presented as the other sections.

Part IV – Working with Clouds

The fourth part of the book, entitled “Working with Clouds”, returns to the authors’ strength of turning complex cloud topics into easily understandable, logical and relatable constructs. The cloud delivery models in chapter 14 were very well described and referenced back to one of the case studies. The detailed pricing and chargeback handling in chapter 15 was excellent. Additionally, the SLA (service level agreement) coverage in chapter 16 was comprehensive and thorough. Again, multiple case studies were referred to in this section’s chapters, which really helped match the theory to the practical application.
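As a toy illustration of the kind of cost-metric arithmetic chapter 15 walks through, here is a quick comparison of on-demand versus reserved pricing for a single virtual server. The rates and hours are hypothetical numbers of my own, not figures from the book.

# Hypothetical comparison of on-demand vs. reserved pricing for one virtual server.
HOURS_PER_MONTH = 730            # roughly 365 * 24 / 12

on_demand_rate = 0.12            # hypothetical $/hour, billed only while running
reserved_monthly_fee = 55.00     # hypothetical flat monthly commitment
hours_running = 500              # how many hours the server actually ran this month

on_demand_cost = on_demand_rate * hours_running
print(f"on-demand: ${on_demand_cost:.2f}")        # $60.00 at 500 hours
print(f"reserved:  ${reserved_monthly_fee:.2f}")  # $55.00 flat

# Break-even utilization: above this many hours, the reservation is cheaper.
break_even_hours = reserved_monthly_fee / on_demand_rate
print(f"break-even at {break_even_hours:.0f} of the {HOURS_PER_MONTH} hours in a month")

The book goes much further into usage metrics and chargeback models, but even this simple break-even calculation captures the kind of decision the chapter equips the reader to make.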

Appendices

The collection of appendices covers some specific topics that didn’t quite fit neatly within the coverage and flow of the bulk of the text. Having conclusions for the case studies that were introduced and periodically referenced in the prior sections was a great way to bring closure. The list of cloud-related standards bodies and organizations makes for a handy reference. I was impressed to see an appendix dedicated to contract language. Finally, the business case appendix was a little light in content, but it definitely gives one a framework by which to start collecting data to support justifying a cloud investment.

Full table of contents:

Chapter 1: Introduction
Chapter 2: Case Study Background

Part I: Fundamental Cloud Computing
Chapter 3: Understanding Cloud Computing
Chapter 4: Fundamental Concepts and Models
Chapter 5: Cloud-Enabling Technology
Chapter 6: Fundamental Cloud Security

Part II: Cloud Computing Mechanisms
Chapter 7: Cloud Infrastructure Mechanisms
Chapter 8: Specialized Cloud Mechanisms
Chapter 9: Cloud Management Mechanisms
Chapter 10: Cloud Security Mechanisms

Part III: Cloud Computing Architecture
Chapter 11: Fundamental Cloud Architectures
Chapter 12: Advanced Cloud Architectures
Chapter 13: Specialized Cloud Architectures

Part IV: Working With Clouds
Chapter 14: Cloud Delivery Model Considerations
Chapter 15: Cost Metrics and Pricing Models
Chapter 16: Service Quality Metrics and SLAs

Part V: Appendices
Appendix A: Case Study Conclusions
Appendix B: Industry Standards Organizations
Appendix C: Mapping Mechanisms to Characteristics
Appendix D: Data Center Facilities (TIA-942)
Appendix E: Emerging Technologies
Appendix F: Cloud Provisioning Contracts
Appendix G: Cloud Business Case Template