
Rod Collins's Blog, page 15

December 9, 2014

Breaking Down Big Data: The Value In Metadata

by John Horodyski


 


“I never met a data that I didn’t like.” — Internet Meme


As a Partner at Optimity Advisors, my role is to work with clients to make data likeable: identifiable, discoverable, usable and ultimately, valuable. Companies are struggling to manage big data in a landscape of rapidly increasing production and diverse formats. The ability to collect and analyze internal and external data can dictate how well an organization will generate knowledge, and ultimately value. How can you start planning for this value?


Gary Drenik argues in his recent Forbes article,


“How to beat the big data giant? Start by thinking little data, as in David vs. Goliath. The first step in the little data process is to identify key business objectives that your organization would like to have data solve. Objectives need to be clearly defined … once defined, objectives serve as a roadmap for the identification of relevant data sourcing to generate new insights for evidence-based decision making by executive team members.”


Metadata is the best way to identify little data that becomes big data. Little data provides structure to what becomes big data. Invest the time, energy and resources to identify, define and organize your assets for discovery and increase their value.


What’s the Big Deal About Big Data?


Big data refers to data sets that are too large and complex to manipulate or interrogate with standard methods or tools. Information — and all its data and digital assets — has become more available, accessible and in some ways, more accountable in business. To better understand the big deal about big data, start with an understanding of the associated terms, which include:



Metadata simply stated is information that describes other data; essentially, data about data. It is the descriptive, administrative and structural data that defines assets.
Taxonomy is the classification of information into groups or classes that share similar characteristics. It provides the consistency and control in language that can power the single source of truth as expressed in a DAM or CMS and is a key enabler for organizing any large body of content.
Controlled vocabulary is a set of defined terms that populate a drop-down or pick list. Establishing “preferred terms” is a good way to provide control, authority and consistency to your digital assets. You not only need to know what it is you are describing but how it may best be described.
Structured data refers to information with a good level of control and organization, for example, a “date” value in an “Expiration Date” field. Structured data is usually found in a controlled data environment with inherent meaning and purpose.
Unstructured data lacks that control and meaning, offers a confused sense of purpose and requires analysis or interpretation to restore meaning. Using the example above, if a “date” is discovered with no “field” in which to provide that control and structure, what does that tell you? Wrangling all that data will create a more structured sense of purpose for the content in your organization. It makes information more relevant, palpable, understandable and useable.
Master Data is business critical data that is governed and shared across multiple systems, applications or departments within an organization. Master Data can be identifiers, attributes, relationships, reference data and yes, metadata!
Master Data Management (MDM) is the set of processes, tools and governance standards/policies that consistently define, manage and distribute Master Data. Everything starts with data modeling, and data modeling is inherently tied to metadata (ISO-IEC 11179).

The value that metadata, or little data, brings to big data is in the structure and meaning it provides. It serves asset discovery by identifying assets and allowing them to be found by relevant criteria. Metadata also brings similar assets together and distinguishes dissimilar assets. Value is added by managing data.
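
To make the distinction concrete, here is a minimal sketch, in Python, of a single digital asset described with descriptive, administrative and structural metadata, with keywords drawn from a controlled vocabulary. The field names and values are hypothetical, invented only for illustration, and are not taken from any particular DAM or metadata standard.

```python
# Illustrative sketch only: field names and values are hypothetical,
# not drawn from any particular DAM system or metadata standard.
from datetime import date

asset = {
    # descriptive metadata: what the asset is about
    "title": "Spring campaign hero image",
    "keywords": ["campaign", "spring", "hero-image"],   # from a controlled vocabulary
    # administrative metadata: how the asset may be managed and used
    "rights": "licensed",
    "expiration_date": date(2015, 6, 30),                # structured "date" in a defined field
    # structural metadata: how the asset relates to other assets
    "parent_collection": "2015-spring-campaign",
}

def find_assets(assets, term):
    """Simple discovery: return assets tagged with a preferred keyword."""
    return [a for a in assets if term in a["keywords"]]

print([a["title"] for a in find_assets([asset], "spring")])
```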


Big Data Challenges – ‘I Still Haven’t Found What I’m Looking For’


We have an unprecedented wealth of data at our discretion and under considerable watch and scrutiny from creators, users and stakeholders. Organizations need to change accordingly to respond and create new solutions. The challenge with big data is how to manage it. That includes everything from:



identification
capture
curation
storage
search
sharing
analysis

Big data will only continue to grow. The emergence of the Internet of Things and new platforms will produce more information and data in locations both within and external to your business. Increasing an understanding of data and repositories will protect the organization from:



Savvy plaintiffs requesting big data in discovery
Data breaches caused by inadvertent data ingestion
Loss of consumer confidence once people realize their personal data is everywhere

There has never been a more important time to make data a priority in your strategic planning.


Best Practices


For managing metadata and digital assets in business:



Metadata management and planning for new processes or systems
Clear ownership of, and documentation for, digital assets
Current documentation of metadata models and controlled vocabularies

Determining the right questions to ask about best practices will establish whether these practices are in place. While enterprise solution providers have not delivered on many tasks that could be automated, new platforms provide great opportunities for communication, engagement and risk management. Additionally, social media and a variety of other social collaboration tools will affect the workplace, blurring the boundaries of how and when business is conducted. Data sharing and collaboration will play an important part in this growth.


The ability to collect and analyze internal and external data can dictate how well an organization will generate knowledge and ultimately value. How can you start planning for this value? A few things to start working towards include:


1. Data Assessment & Organization Planning



Inventory and discover data life cycles and users
Create a well-planned data warehouse model to ensure valuable enterprise-wide information and metrics, as well as good performance and provisions for growth

2. Capability Assessment & Gap Analysis



Produce maps linking data to business processes and validate them
Determine areas where redundant, obsolete or transient data can be reduced. You need to ensure that data in transition is handled accurately and implemented quickly to meet the speed of your business.

3. Modeling & Analysis



Develop plans to facilitate analysis and future action for an operational system, or for creating visualizations and reports for your teams and interest groups, formatted as they need them and delivered when they need them.

Data must be delivered consistently, with standard definitions, and provide the ability to reconcile data models from various systems or data marts.


Making Value 


The struggle to manage information within the big data landscape is as complex as the digital workflows it supports. This landscape includes the internal ecosystem and the wider geography of partners and third-party entities. The complexity of all of the available data is compounded with the increasing rate of production and diversity of formats.


Assets are critical to your business operations — they need to be discovered at all points of the digital lifecycle. Key to building trust in your data is ensuring its accuracy and usability. Leveraging meaningful metadata provides your best chance for a return on investment on the assets created and becomes an essential line of defense against lost opportunities. Your users’ digital experience is based on their ability to identify, discover and experience your brand in the way it was intended. Value is not found — it’s made — so make the data meaningful to you, your users and your organization by managing it well.


 



 


John Horodyski @jhorodyski is a Partner within the Media & Entertainment practice at Optimity Advisors, focusing on Digital Asset Management, Metadata and Taxonomy.


December 2, 2014

Being Adaptive Means Cultivating A Little Chaos

by Rod Collins


 


There are few words that strike more fear in seasoned managers than the word “chaos.” For most managers, chaos is, by definition, the absence of management. It’s what happens when managers lose control. It’s something to be avoided at all costs because its inevitable attributes of unpredictability and uncertainty are the mortal enemies of efficiency.


For the past century, the hallmark of management excellence has been efficiency. Thus, once a business achieves market dominance, management’s focus is all about sustaining its competitive advantage by cutting costs, eliminating variances, and improving economies of scale. When efficiency is the principal preoccupation of management, stability is the ultimate state of effective management.


But what happens when stability is not an option? What happens when the world is suddenly  engulfed by a rapid pace of change and the most effective organizations are not necessarily the most efficient but rather the ones who are most adaptive? These were the questions that were on the mind of General Martin Dempsey when he reached out to Ori Brafman, after reading Brafman’s book The Starfish and the Spider, which described how leaderless organizations are oftentimes more adaptable than their centrally controlled counterparts.


The military, like most traditional organizations, believed efficiency was the pathway to excellence. However, given the realities of modern warfare, Dempsey was becoming increasingly aware that excellence on the battlefield had more to do with adaptability than efficiency. Like so many organizational leaders today, Dempsey was coming to terms with the realities of a rapidly changing world. In Dempsey’s first meeting with Brafman, the General related his concern that the army would need to change if its success depended upon its ability to quickly adapt. Dempsey recognized that imagination and innovation were the key ingredients of adaptability, and he was concerned that the army was deficient in both. When he asked Brafman what he needed to do to make the army more adaptive, the author’s response was a simple strategy: “Make the army more chaotic.”


An Unlikely Collaboration


In his latest book The Chaos Imperative, co-authored with Judah Pollack, Brafman relates the story of the unlikely collaboration between the career army general and the vegan Berkeley alumnus and how they used the power of chaos to increase the army’s excellence.


Getting the army to embrace chaos as a resource was no easy task. According to Brafman, “We tend to confront chaos as if it were an unruly beast—something to be contained as much as possible.” However, in attempting to squelch chaos, leaders often inadvertently stifle the new ideas that are the seeds of innovation and the pathways to future growth. Dempsey knew the army needed better and quicker avenues to new ideas, so he was open to learning different—maybe even uncomfortable—ways of thinking and acting.


An Emergent Process


In encouraging the army to become more chaotic, Brafman was not asking the soldiers to abandon all semblance of order, but rather to become proficient in a discipline he called “contained chaos.” While the notion that chaos can be managed in a disciplined way may sound counterintuitive, Brafman contends that “a little bit of chaos, encouraged but confined within borders, can be highly beneficial to an organization’s overall health.”


The discipline of “contained chaos” is an emergent process that involves three elements: white space, unusual suspects, and organized serendipity. White space is a time, place, or system unfettered by an established structure where people are free to self-organize, brainstorm, follow their imaginations, and make up their own rules. Interestingly, these are also the fundamental elements of “play.” Learning to value white space can be challenging for traditional organizational leaders who’ve come to believe that play is the antithesis of work. Nevertheless, a close look at today’s most well-run businesses reveals that their leaders understand that play is oftentimes a key component of work, which explains why Ping-Pong and pool tables have suddenly become standard office furniture in innovative companies. These amenities are not distractions from work, but rather the white space that spawns the emergence of new ideas.


The second element is the introduction of “unusual suspects.” These are outsiders who are not part of the usual group. These are the non-experts who don’t share the preconceived notions that make up the foundational knowledge of the experts. Experts, by definition, are those who know how things work and how things are done, given the current state of affairs. In a stable world, their knowledge is paramount. However, in a rapidly changing world, expert understanding can become a treacherous obstacle to change. That’s because the effective response to change often requires us to be innovative and to come up with ways to do things differently or even to do different things. The essence of innovation is the connecting of unusual things, such as a telephone and the Internet. If you want to enable unusual connections, include unusual suspects when working to solve your most important problems.


The final element is organized serendipity. When there is sufficient white space and the inclusion of unusual suspects in the problem solving process, the circumstances are ripe for “new and creative ideas to emerge out of nowhere.” Brafman describes this attribute as the paradox of chaos. By cultivating a little chaos, leaders set the conditions for serendipity to happen.  This entails creating “pockets of chaos,” where leaders don’t give directions, but rather facilitate creative group processes from which better quality directions emerge from the collaboration of people with very different perspectives. When leading organized serendipity, “structure and efficiency are set aside or blocked off to create a more organic process that allows new ideas to come to the fore.”


An Intelligence Advantage


In embracing Brafman’s advice to make the army more chaotic, General Dempsey and his soldiers have learned to become more adaptable. They have come to understand the limits of planning, the value of disruptive thinking, and the power of organized serendipity when faced with the challenges of unconventional warfare.  By learning how to create white space, include unusual suspects, and facilitate organized serendipity, what has emerged are not only more adaptive and effective solutions but also the most important intelligence needed to succeed in rapidly changing environments: the knowledge of what they didn’t know that they didn’t know. When an organization has the wherewithal to effectively manage chaos and the capacity to uncover the “unknown unknowns” before taking action, they create for themselves a huge intelligence advantage, which in turn becomes both an adaptive and a competitive advantage in mastering a rapidly changing world.



 


Rod Collins  (@collinsrod) is Director of Innovation at Optimity Advisors and author of Wiki Management: A Revolutionary New Model for a Rapidly Changing and Collaborative World (AMACOM Books, 2014).


November 18, 2014

Transformation by Design: Employees at the Epicenter of Corporate Restructuring

by Carol Huggins


 


When business leaders decide to restructure or reorganize, they are transforming the company and, therefore, must carefully consider the design of such a change. Like an architect who takes months or years to thoughtfully design a building, a company must intricately and carefully plan for change. As they do so, they need to keep in mind that the most critical component of an organizational redesign is the employees, whose performance can make or break a company.


Whether for a start-up or a legacy corporation, the primary driver behind reorganization is usually profitable growth. Of course, such plans may also result from a desire for improvements in customer service performance or production quotas. But most importantly, and often with the least consideration, it is how the restructuring will impact the people that matters most, which is why the emphasis on concise planning and thoughtful design is so important.


Many senior managers simply focus on the what, the desired end goal of organizational change: Improved financial performance as a result of restructuring and resizing. They often ignore the human factor in such a modification. Instead, it’s the innovators who bolster the objectives by carefully composing how employees will proactively participate in reorganization and impact the bottom line, and how they will interact for the good of the firm after the change has been implemented.


Throughout my career, I’ve both been directly involved with and witnessed organizational change. In one instance, as a lodging company prepared to go public in the early 1990’s, leadership made the strategic decision to streamline operations and eliminate the Assistant Manager position, thereby empowering non-exempt employees with such tasks as resolving customer disputes and making bank deposits. Imagine an hourly employee deciding to refund a dissatisfied guest! But, it worked. The hourly workforce was invigorated, and property managers now had home office support for what were then unconventional solutions.


On another occasion, I observed a division of a Fortune 500 company as it implemented a mass reorganization. In this instance, employees felt under informed and many worried about job security. As the implementation began, a good number were in limbo for months, “officially displaced” as managers competed for their skills and loyalty. It was evident that minimal consideration had been put into the human aspects of the organizational redesign and delivery.


A revolutionary example where people are at the center of transformational change is the recent decision by Zappos to embrace Holacracy, which, according to Forbes magazine, is “a New Age approach to leadership that involves no job titles, no formal bosses, and lots of overlapping work circles instead.” Certainly, Holacracy is not the Holy Grail, and all eyes will be on Zappos to see how it fares with this new model. However, one advantage Holacracy has is that its organizational premise is people-centered and focused on providing clarity around aligning resources and accomplishing tasks.


Human resources matter most when a company decides to restructure. A well-considered design with employees at the forefront will result in an operational transformation that benefits the organization, its employees, and its customers, thereby improving financial performance.


Scott Wise, founder and CEO of Scotty’s Brewhouse restaurants, professes that his employees are the key to his success. They are more important than customers. He states, “If your employees believe in your dreams and values then ultimately they will make your guests happy, too.”


 



 


Carol Huggins is a Manager at Optimity Advisors


November 11, 2014

Beyond The Politics: An In-Depth Look At ACA’s Early Returns

by Taylor Anderson


 


Heading into the second year of the Affordable Care Act’s (ACA) expanded healthcare system, many Americans are still unaware of what it may mean for them. News outlets are quick to highlight rate changes without providing much context around them, as the ACA remains a political flashpoint across partisan lines. Nevertheless, data is available to help shed light on the early effects the ACA has had on the US healthcare market and to help us understand its potential future impact.


Historical US Healthcare Costs


Although the ACA is often criticized as a law that will lead to drastic premium rate increases for individuals, the real story may have less to do with current premium increases and more to do with tempering the long-term year-over-year increases in healthcare costs, as evidenced by existing trends. The cost of healthcare in the United States has risen at a pace between 4-7 percent per year since before the ACA took effect. This cost burden for insurers has largely been passed onto consumers via premium increases. Premiums in the three years before the ACA took effect (2008 – 2010) rose by an average of 10% or more for individual consumers, according to a recent report.


One supporting explanation for rising healthcare costs is the corresponding increase in quality of care.  Unfortunately, in the pre-ACA timeline the quality of healthcare from a consumer standpoint was largely stagnant or in decline. In one study, the United States ranked last among seven nations in 2004, 2007, and 2010 with regard to equity, access, efficiency, and population health. Working to reverse this trend is a key goal of the ACA. With premiums likely to rise, regardless of ACA impact, the change in quality and access to care will be the true indicators for whether the ACA leads to a more cost-effective healthcare system.


While it will take several years to acquire enough data for a comprehensive understanding of ACA impact, the early signs are positive. For the 2014 enrollment period, consumers searching for healthcare plans had 191 issuers from which to choose across the 36 states with federally facilitated marketplaces. For the 2015 enrollment period, this number has risen to 248 (an increase of almost 30%). In addition to this increase in choice for consumers, the Congressional Budget Office estimates that the ACA will reduce the number of people without health insurance by 25 million by 2016. This increase in choice and availability for consumers should help drive cost-competition among insurers in the marketplace.


The Subsidy Effect


As we investigate increases in 2014 and 2015 rates, critics will be quick to look at any increases as a result of the ACA. However, as indicated above, in the three years prior to ACA implementation (2008-2010), premiums rose by an average of 10% per year. This is indicative of the long-term trend of 4-6% annual increases in the cost of healthcare in the United States. The recent trends in rising premiums as well as data on the new ACA tax subsidies, which help to reduce the actual cost for consumers, must be considered in any conversations about rate increases or decreases in the post-ACA health care market.


One flaw in how the media is reporting on premiums is the tendency to report simply on an insurer’s rate request in terms of an increase or decrease in percent. In reality, this figure provides one insurer’s rate trend over a two-year period; it is impossible to compare one insurer’s rate increase of 15% with another’s decrease of 5% without corresponding dollars.  This post provides dollar amounts, where available, as we discuss the impact of federal subsidies on post-ACA marketplace rates.
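
A small worked example makes the point. The figures below are hypothetical, chosen only to show why one insurer's percentage rate change cannot be compared to another's without the corresponding dollar amounts; they are not drawn from any filing or from the ACA data discussed in this post.

```python
# Hypothetical premiums, used only to illustrate the comparison problem.
insurer_a = {"monthly_premium": 200.00, "rate_change": 0.15}   # +15% rate request
insurer_b = {"monthly_premium": 600.00, "rate_change": -0.05}  # -5% rate request

for name, plan in (("A", insurer_a), ("B", insurer_b)):
    dollars = plan["monthly_premium"] * plan["rate_change"]
    print(f"Insurer {name}: {plan['rate_change']:+.0%} -> {dollars:+.2f} dollars per month")

# Insurer A's 15% increase and insurer B's 5% decrease both work out to
# about $30 per month, even though the percentages look very different.
```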


For the 2014 enrollment year, pre-tax-credit premiums for plans on the federally facilitated exchanges were lower than expected, with the weighted average second-lowest silver plan (the benchmark plan) coming in 16 percent below expectations. That meant $216 for a 27-year-old, $263 for a 40-year-old, and $558 for a 60-year-old. When the tax credits are factored in, the numbers are even more favorable for consumers.


Of the more than 8 million people who selected either a state-based or federally facilitated marketplace plan by March 31, 2014, approximately 6.8 million (85%) selected a plan with federal tax credits. For those individuals who selected plans in the federally facilitated marketplace (FFM), post-tax-credit premiums were 76% less than the full premium, a reduction from an average of $346 to $82 per month. For 69% of individuals selecting an FFM plan, the post-tax-credit premium was less than $100; for 46% of them, it was $50 or less. In some cases, especially with lower-cost bronze plans, the tax credit amount may exceed the cost of the plan, resulting in a $0 premium after tax credits for the enrollee.


Combined with Kaiser Foundation reports indicating that 57% of the 8 million exchange-insured individuals were previously uninsured, this data on tax credits demonstrates positive trends for both affordability and access to care. All of this is in addition to the fact that post-ACA plans are more robust than their predecessors. The provision for essential health benefits ensures that even premiums nominally higher in the post-ACA world will provide more comprehensive benefits for members.


The Consumer Bottom Line


Prior to the ACA, older consumers with pre-existing health conditions were often priced out of the individual market, had their condition(s) excluded from coverage, or were denied altogether. These policyholders often saw premiums rise significantly when they became ill, and there was no federal safety net or support (e.g. subsidies) to help them combat these costs. All of this contributed to higher costs and lower access to care.


The ACA has had a complex impact on consumers, but when placed within historical trends, it provides more access to care, at a similar premium amount. Even at these comparable premium levels, the actual cost for consumers is lower than reported due to the tax credits. With the help of tax credits, the actual cost of care is both more available and affordable than ever for a large number of consumers. The ACA can be better evaluated as time continues but early returns are promising for Americans.



 


Taylor Anderson is an Associate at Optimity Advisors.


November 4, 2014

Moving from “Mobile First” to “Mobile Only”

by Robert Moss


 


The concept of “mobile first” development—that is, developing for mobile device contexts first when creating a user experience and then adding layers of enhancements for larger form-factor devices—has been around a good four years or so now. (The term was coined by web designer Luke Wroblewski way back in 2010.) But, it’s still taking a long time to work its way into people’s minds.


On a regular basis I hear from clients that they have on their list of upcoming projects “our mobile app”—either launching a mobile app for the first time or enhancing the one they already have out there to make it more useful and “drive adoption.” Though they may even cite “mobile first” philosophies, it’s clear that in practice a “mobile app” for their organization is still a sort of adjunct to their core business—something generally being pleaded for by their sales and marketing departments so that they have something new and shiny to brag about or, unfortunately all too frequently, because their main competitor just released a mobile app and they desperately want to say, “we have one, too!”


But, the world is shifting beneath our feet even as we race to catch up with the last iteration of mobile technology. Increasingly, mobile devices are ceasing to be another way our customers can do business with us and are instead becoming the only way that a large number of our customers will do business with us. And, that means that for many users it’s not a question even of “mobile first” but of “mobile only.”


In the first stages of mobile application development, applications tended to be about “accessibility”—giving users a way to access those functions they needed—checking their email, looking up the status of an order, finding the nearest retail location of a store—when they were away from their office or their homes and not able to access their computers. It didn’t have to be as rich and effective an experience, and it didn’t have to be a complete experience. It just had to let them get a few basic things done.


In most companies’ initial forays into mobile-enabling their business, they typically looked to their web site and selected the top 5 or 6 functions that their customers or members needed—with an eye, in particular, for things that would be useful when someone was out and about doing things like shopping, commuting, and eating at restaurants. And, that made sense when you looked at mobile devices as an adjunct, something used when a regular desktop or laptop computer wasn’t available.


But, increasingly mobile devices are becoming the primary way of accessing the Internet by a significant number of Americans. Pew Research Internet Project reports that in 2014, 64% of Americans use their smart phones to go online and, of them, 34% go online mostly using their phone and not via a larger device such as a laptop or desktop computer. Not surprisingly, those numbers skew even higher if you just look at younger Americans. 83% of 18-29 year olds, for instance, have a smartphone vs. 49% for 50-64 year olds.


For businesses, that means a totally new line of thought when it comes to conceiving and designing mobile applications. Can your customers do business with you from initiation through the full breadth of the customer lifecycle? That is, can they find you, shop for and purchase your products, maintain the relationship for customer support and maintenance, and ultimately renew or buy more of your products—all on a mobile phone?


For some innovative businesses, like Uber, the answer is a resounding yes. In fact, despite having been an Uber user for over a year, I didn’t even know whether they had a consumer web site until I checked it out while writing this post. It turns out they do, but you can only update your profile and view trip history, not actually buy their products—that is, request a ride—through the browser site.


Most other businesses, especially ones with older, more established business models, are somewhere on the spectrum from “not mobile enabled at all” to “fully mobile.” But, many are starting to catch up. Almost all the big retail banks now support check deposits on their mobile apps, which may well do away with the last reason many of their customers have to ever visit a physical bank location. In an interesting digital and bricks & mortar hybrid, the major retail pharmacies like Walgreens and CVS have apps that let their customers print photographs directly from their phone and pick them up at their convenience at the nearest retail location.


That key question—how could customers do business with us solely over their mobile phone?—needs to be at the front of our minds when we are formulating our digital commerce strategies and designing our online user experiences. After all, if our company isn’t the one doing it, there’s probably someone else out there who is.


 



 


Robert Moss @RobertFMoss is a Partner and the leader of the Technology Platform practice at Optimity Advisors.


October 28, 2014

The Many Faces Of Taxonomy

by Mindy Carner


 


Taxonomy is the classification of information into groups or classes that share similar characteristics and can be used to organize information in the context of relationships between subjects. However, the word “taxonomy” has become somewhat of a buzzword that may not be truly understood even by many people within an organization that is actually dependent on a strong system of classification.  The ubiquity of taxonomy is what makes it seem buzzwordy, but there are many different types and formats that taxonomy can take. A simple breakdown of the various forms and uses of data classification can help to make them clearly understood and supported by the organization.


The sitemap taxonomy


A sitemap taxonomy can be applied in a basic way to support the navigation of a website by illustrating its information architecture at a high level. The sitemap represents the site’s structure through the top-level navigation (potentially with drop downs), left-hand navigation (possibly multiple levels deep), as well as header and footer navigation. Web search engines use sitemaps to learn the structure of web sites and improve their presence in search results.
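
As a minimal illustration, a sitemap taxonomy can be sketched as a nested hierarchy; the section names below are invented, and the walk simply enumerates every navigation path a visitor (or a search engine crawler) could follow.

```python
# A hypothetical site hierarchy; labels are invented for illustration only.
sitemap = {
    "Products": {
        "Software": {"Digital Asset Management": {}, "Search": {}},
        "Services": {},
    },
    "About": {"Team": {}, "Careers": {}},
}

def paths(tree, trail=()):
    """Yield every navigation path, e.g. ('Products', 'Software', 'Search')."""
    for label, children in tree.items():
        yield trail + (label,)
        yield from paths(children, trail + (label,))

for p in paths(sitemap):
    print(" > ".join(p))
```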


The e-commerce taxonomy


Most e-commerce sites are almost entirely driven by taxonomy. When a taxonomy is successful, users don’t even realize that they are involved in a search experience as they navigate through the filters and refinements of the navigation; in effect, every page is a search result. Taxonomies that lead to product pages can be very large and complex. In order to coordinate pricing and style with gender and size, or to provide the user with dynamic ‘faceted’ navigation, a taxonomy is the architecture through which users travel as they search, or browse, for that perfect item.
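
The sketch below shows the idea in miniature: each facet a user selects filters the product set, so every page of results is effectively a query against the taxonomy. The product records and facet names are invented for illustration.

```python
# Invented product records; the facets (gender, size, style) mirror the
# kinds of attributes an e-commerce taxonomy typically coordinates.
products = [
    {"name": "Trail jacket", "gender": "women", "size": "M", "price": 120, "style": "outdoor"},
    {"name": "City blazer",  "gender": "men",   "size": "L", "price": 180, "style": "formal"},
    {"name": "Rain shell",   "gender": "women", "size": "M", "price": 80,  "style": "outdoor"},
]

def refine(items, **facets):
    """Keep only the items matching every selected facet value."""
    return [p for p in items if all(p.get(k) == v for k, v in facets.items())]

# A user clicking "women" and then "outdoor" lands on a page that is,
# in effect, the result of this search.
print([p["name"] for p in refine(products, gender="women", style="outdoor")])
```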


The enterprise taxonomy


An enterprise taxonomy is a combination of structures that can be defined, used, and implemented in many ways. Most enterprise taxonomies are used to drive the search engine that combines all major systems into one search index. A search requires structured metadata and taxonomy to help it understand the variety of content that it will index. Significantly, the enterprise taxonomy is much larger than just the intranet; it is all of the internal systems that the company uses for enterprise communication.


Digital and media asset management taxonomy


Taxonomies provide the metadata basis for a DAM (Digital Asset Management) or a MAM (Media Asset Management) system that handles images, video and other content types. If these systems are used as the single source of truth for a domain of knowledge, the taxonomy can model that knowledge in a way that is meaningful. Users will be able to depend on the technology to provide an intuitive and easy experience for delivering content.


What about the overlap?


The advantage of a taxonomy is that it provides a single source of vocabulary across the many systems that an enterprise supports. Many companies have enterprise search, a DAM system, and possibly even an outward facing e-commerce site. If the taxonomy is maintained, it can support search across platforms. Optimity Advisors helps clients by developing a customized, departmental or enterprise-wide taxonomy for navigation or for enterprise search, and then helping them develop a program to sustain the discipline to meet their long-term goals.


 



 


Mindy Carner is a Senior Associate at Optimity Advisors.


October 21, 2014

DAM Experience Starts With User’s Expectations

by John Horodyski


 


“The first step in exceeding your customer’s expectations is to know those expectations” — Roy H. Williams


The content-driven experience is ultimately the expression of the relationship between a Digital Asset Management (DAM) system and the user. This dynamic is often overlooked when developing the rules or practices that drive a DAM. The ability to easily deliver or find the right content at the right time is the goal of DAM and the cornerstone of delivering a positive customer experience internally and externally. The perception of experience, however, is affected by expectations. Understanding and integrating those expectations into the design of the DAM infrastructure is key.


If You’re Happy and You Know It …


It’s been said that, “our happiness depends less on objective conditions and more on our own expectations. Expectations, however, tend to adapt to conditions.” We not only hold expectations high but also create them to serve our needs as much as possible. But what if our expectations are out of sync with the conditions of a DAM implementation? For example:



What are the assumptions about the type and completeness of the assets in the DAM? Do users assume that all digital content from the organization will be there?
What keywords are expected to be applied to retrieve content?
How much content is expected back from a search? Is it very narrow or do users want to see a variety of results and choose for themselves?

Gauge user expectations at the beginning of the DAM planning stage to match features and functionalities to these expectations.


Experience Touchpoints


What exactly is an “experience”? The Oxford English Dictionary (OED) defines “experience” as:



A practical contact with and observation of facts or events
An event or occurrence which leaves an impression on someone

The practical contact points of the DAM are sometimes referred to as “touchpoints.” The more touchpoints there are, the faster and more effective an organization will be in managing the content it has created, bought, sold or licensed. There are some key touchpoints that need to be addressed and understood by the organization in order to drive the user experience.


1. Onboarding


A user’s initial introduction to the DAM can provide a window into their expectations for how they would like to interface with the system. Teaching tagging concurrently with search and discovery will aid in creating top-down, judgment-driven evaluations and bottom-up, data-driven analysis. We recommend pursuing these efforts in parallel whenever possible.


2. Metadata


Good metadata design will increase the return on investment of the assets you have created and is a line of defense against lost opportunities. Think about the digital experience for your users in the way that they need to interact with content and assets to ensure easy identification, discovery and positive user experience.


3. Taxonomy


Without knowing the context of certain words in your organization, search can fail to provide specificity or weight to search terms. A strong controlled vocabulary will help prevent the dreaded “No results” page by linking synonyms, common misspellings and acronyms to the preferred terms that actually appear in your documents. Access to the right content is critical.
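
A controlled vocabulary of this kind can be sketched as a simple mapping from variant terms to preferred terms; the entries below are invented examples, not any organization's actual vocabulary.

```python
# Hypothetical synonym ring: variants, misspellings and acronyms resolve
# to a single preferred term before the search index is queried.
PREFERRED_TERM = {
    "advert": "advertisement",
    "ad": "advertisement",
    "advertisment": "advertisement",  # common misspelling
    "u.s.": "united states",
    "usa": "united states",
}

def normalize_query(query):
    """Rewrite each query word to its preferred term so a search for
    'advert' still finds assets tagged 'advertisement'."""
    return " ".join(PREFERRED_TERM.get(word, word) for word in query.lower().split())

print(normalize_query("USA advert"))   # -> "united states advertisement"
```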


4. Roles and Permissions


The other side of managing the customer experience is the construction of permissions, users’ roles and security for the DAM. It is never too early to start working on the business rules and practices around access to content. A comprehensive metadata model with fields that are specific to rights usage and management will be critical for users’ ability to use and reuse assets appropriately. Application of security protocols adds a layer of protection and conscientious management.
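
As a rough sketch of how roles, permissions and rights metadata can work together, consider the check below. The roles, field names and rules are hypothetical, invented for illustration; a real DAM's security model will differ.

```python
# Hypothetical roles, actions and rights fields, for illustration only.
ROLE_ACTIONS = {
    "viewer": {"preview"},
    "editor": {"preview", "download"},
    "admin": {"preview", "download", "publish"},
}

asset = {
    "id": "IMG-0042",
    "rights": {"usage": "internal-only", "license_expires": "2015-12-31"},
}

def allowed(role, action, asset, today="2015-06-01"):
    """Permit an action only if the role grants it and the license is current."""
    if action not in ROLE_ACTIONS.get(role, set()):
        return False
    return today <= asset["rights"]["license_expires"]   # ISO dates compare as strings

print(allowed("editor", "download", asset))   # True
print(allowed("viewer", "download", asset))   # False
```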


5. Content Stewardship


The power of DAM is driven by people who can make changes and align DAM with strategic goals of the organization. DAM champions might manage daily operations or act as advocates for budget allocation and interface with multiple stakeholders like Digital Asset Managers, Business Analysts and Technical Engineers. Strong governance ties people together to manage change.


Customer Experience Starts from the Beginning 


“If we don’t take care of our customers, someone else will.” — Unknown


Identifying which problems to solve is the best starting point on your DAM journey. First step: identify the specific audience and their needs. Take the time to understand usage scenarios — who will be using the DAM and what procedures and output they need. By gathering feedback, the possibility of damaging workarounds will be minimized.


Workarounds will still be a problem. When in doubt, people will choose the path of least resistance and bypass the problem, thereby ignoring the issues to be solved. Even worse, while workarounds are often temporary by design, they can easily become entrenched within the culture and ultimately become habitual, if not a convention adopted by a team or department. The goal is to minimize workarounds and maximize your touchpoints.


 



 


John Horodyski @jhorodyski is a Partner within the Media & Entertainment practice at Optimity Advisors, focusing on Digital Asset Management, Metadata and Taxonomy.


October 14, 2014

Pssst! Standard Digital Rights Vocabulary Is Already Here

by Julia Goodwin


 


If your company uses a rights management system, you are translating complex, unstructured contract narrative into a digital rights vocabulary readable by a computer. Your organization is relying on this digital rights vocabulary to  do its sales and financial transactions. So, the nagging question is, why can’t we all agree on a standard terminology across companies?


It’s hard not to feel the pull of the inevitable. The market inexorably rewards velocity: easier transactions, open, common language, elegant and simple solutions that remove impediments. Processes that slow the market tend to be replaced. Yet an accepted digital rights language hasn’t gained the adoption needed to serve the market.


Frequent claims against a common digital rights vocabulary include:



It’s too complicated to standardize; it’s a non-starter
A negotiating edge is lost if we have an exact and cohesive vocabulary
Common users won’t understand it; it needs a specialized legal background to interpret
You’ll never get agreement across companies or industries on terminology

There is surprising news within your own organization that points to the fact that standardized rights terms are possible; they’re already used, and successfully:



If you have a rights management system, you’ve gone through the difficult exercise of getting many legal professionals to agree on the vocabulary used to define rights: the names of rights, channels, territories, languages, and contract attributes like options, exclusivity or holdbacks.
All that professional agreement now lies in your rights system, which the company trusts to run availability reports and expiration reports, and to expedite and reveal all kinds of knowledge about assets and digital rights in your company.
So you’ve proven that a standard rights vocabulary is possible, and the arguments that negotiating advantages are lost or that common users won’t understand the terms become watered down.

Where a company’s standard rights language does break down is in the transactions between itself and its partners and customers—in the very heart of its sales and distribution transactions. In other words, in the most important parts of its business.


If one of the Digital Rights languages such as ODRL, Dublin Core’s RightsExpression or PRISM were to be perfected and adopted, and we all spoke the same language, what would happen?



Offers, Licensing and Sales agreements would be standardized and swiftly comparable and executed.  What is your average contract review-to-execution time and costs today?
Reporting, invoicing and payments required for participations or royalties would be automated and executed timely.  How many days a month does it take the Royalties Department to service these tasks today?  What is that cost?
Rights Management Systems would be networked across shared partner contract points and requests for subscription content, renewals, status of revenues, and execution of options could take minutes.  What do these activities cost now?  How far away is your Rights Management information from the point of sales and distribution?

We could also brainstorm many innovative ways for digital rights metadata to travel with the asset, such as turning it off when it expires or watermarking it when copied illegally. The marriage of asset and digital rights metadata could finally make indisputable and less contentious DRM possible.
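
To make that idea concrete, here is a simplified sketch of a machine-readable rights record and an availability check. It is not ODRL, Dublin Core or PRISM, just an invented structure to show how expressed rights could be evaluated automatically and “turn off” when they expire.

```python
# An invented rights record; field names are illustrative, not a standard.
from datetime import date

rights_record = {
    "asset_id": "EP-2231",
    "right": "stream",
    "territories": {"US", "CA"},
    "window": (date(2014, 1, 1), date(2015, 12, 31)),
}

def is_available(record, territory, on_date):
    """Available only inside the licensed territory and licensing window."""
    start, end = record["window"]
    return territory in record["territories"] and start <= on_date <= end

print(is_available(rights_record, "US", date(2015, 6, 1)))    # True
print(is_available(rights_record, "US", date(2016, 1, 1)))    # False: the right has expired
```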


In the medical health field, there is a coding system called ICD-10, or the International Statistical Classification of Diseases, 10th Revision. The system allows doctors, medical coders, billers and insurance payers to automate their processes without ambiguity. Imagine! With ICD-10 there is:



Agreement among international physicians
A standard that is in its 10th revision
A common vocabulary for transactions involving multiple parties
Vocabulary and codes that precisely define all complex injuries, diseases, severity, anatomic site, etc., to do with the human body

If we can imagine this, somehow a common digital rights vocabulary seems a lot more do-able.  We know how to do it individually.  Let’s figure out how to do it together.


 



 


Julia Goodwin is a Senior Manager at Optimity Advisors


October 7, 2014

Disruptive Technology And The Publishing Industry: A Look Backward

By Robert Moss


 


These days, you probably have in your home an example of a technological innovation that utterly and violently reshaped the entire business model of the publishing industry. In fact, if you’re like most Americans, you probably have more than one of them. They offered publishers much easier and wider distribution of their products, but in exchange they slashed publishers’ per-unit revenues by some 90% and reduced authors’ earnings to mere pennies. Predictably, it set industry veterans to wailing and gnashing their teeth, declaring that the business was doomed.


No, I’m not talking about smart phones or Kindles or iPads or even laptop computers. I’m talking about paperback books. The market disruption they caused happened 70 years ago.


A Paperback Revolution


The American Paperback Revolution began in 1939 when Robert F. DeGraff founded Pocket Books. DeGraff’s books were small (4-1/4 by 6-1/2 inches), printed on cheap paper, and bound in semi-stiff covers. They sold for twenty-five cents. In order to expand their market, Pocket sold the books not only through bookstores but also in drugstores, newsstands, and railroad stations. Two other companies quickly followed Pocket’s lead. Penguin Books, which had been operating in England since 1935, opened a U.S. branch in 1939. Two years later, magazine publisher Joseph Meyers began Avon Pocket-sized Books. The industry stalled during the Second World War, when paper supplies were strictly rationed, but after the war at least two dozen more firms raced into the market, including New American Library, Bantam, Fawcett, Popular Library, and Dell.


Let’s look at some of the parallels between these two disruptive phenomena in the publishing world: what the rise of paperbacks did to traditional hardback book publishing in the middle part of the 20th century and what eBooks and online journalism are doing to the print publishing world today. (For convenience’s sake, I’ll roll eBooks and online journalism/magazines into a single category called “ePublishing.”)


Lower Costs


Paperbacks: The low cover price of the new paperback format–twenty-five cents for a paperback compared to between two dollars and three dollars for a typical hardback–made books more affordable for the average reader, but it slashed the per-unit revenue for publishers by almost 90%.


ePublishing: While traditional publishers are fighting to keep their prices from being slashed a full 90% (which would be pricing an eBook around $2.50 vs. a $25 hardback), they’ve been knocked down quite a lot—and, there are plenty of $2.99 and even $.99 options out there competing against them. It’s even worse for journalism, where ad revenues online are a mere fraction of those of print.


Wider Distribution


Paperbacks: The distribution of paperbacks, which were sold not only in bookstores but also in convenient locations such as drugstores and train stations, caused books to be more widely available than ever before.


ePublishing: Now, you don’t even have to find a retail establishment. You can download a book from your bed, on a train, or while driving down the road in your car, and you can catch up on the latest news and read your favorite long-form journalism anytime, anywhere, too.


Wailing, Lamentation, Gnashing of Teeth


Paperbacks: Publishers declared it was the end of an era and they were all going broke. Authors echoed the sentiment and, except for a few early adopters who embraced the new format, most “serious authors” rejected paperbacks as both cheapening and an assault on their income.


ePublishing: Ditto.


And the Sky Did Not Fall


Paperbacks: The book publishing industry got along just fine once they factored the economics and distribution requirements of paperback publishing into their business models. Authors learned how to value and sell their paperback rights, and they became a major chip in their contract negotiations with publishers.


ePublishing: We will see. But it seems likely that the sky will not fall, either.


A Boon to Authors


In the end, what appeared at first to be a major threat to authors’ livelihood turned out to be an economic boon. We’ll use Raymond Chandler, the acclaimed Los Angeles detective novelist and creator of private eye Philip Marlowe, as an example.


At the start of the Paperback Revolution, authors had few options for making money from their books beyond traditional hardback royalties, for the sale of subsidiary rights was not a major factor.


Before paperbacks, the reprint market was limited to a few firms like Grosset & Dunlap, who produced cheap hardback reprints in small print runs–and paid even smaller royalties. In 1940, Raymond Chandler made a whopping $200 from Grosset & Dunlap’s $1 reprint edition of The Big Sleep, his first novel, which they produced from the same printing plates used by Knopf, the original publishers. 


The movie industry was in its heyday, and authors could sell the movie rights to their books, but the returns were tiny compared to the prices bestsellers fetch today. In 1941 and 1942, Chandler sold the screen rights for his second novel, Farewell, My Lovely, to RKO Pictures, and those for his third, The High Window, to Twentieth Century-Fox. His combined take from both was $2,750–a nice bonus, for sure, in 1942 dollars, but not enough to be the foundation for a career.


Enter the new cheap, widely distributed paperback editions. They increased authors’ potential audience and allowed them to keep their books in print long after they ceased to be available in hardcover. These developments rewrote the terms of professional authorship in the United States, giving previously struggling novelists a new means of earning income from new works and, equally or even more important, from the back catalog of books they had finished years before.


The paperback industry had gotten started in the late 1930s, but it was largely put on hold by World War II paper shortages. Still, by the beginning of 1945 nearly 750,000 copies of The Big Sleep and Farewell, My Lovely, Chandler’s first two novels to be reprinted in paperback, had been sold. Four years later, over three million copies of Chandler’s works had been published. Despite the penny per copy royalty, with large sales the returns on the reprints were becoming significant.


Chandler had suspended writing novels to pursue income writing screenplays for the studios in Hollywood, a business that paid well but he thoroughly detested. In 1947, he wrote to his agent, “I am a damn fool not to be writing novels. I’m still getting $15,000 a year out of those I did write. If I turned out a really good one in the near future, I’d probably get a lot out of it.”


Chandler could not live on the reprint royalties alone, but they provided a significant portion of his income and helped accelerate his return to novel-writing as a full-time profession in the 1950s.


Electronic publishing likely stands to be a boon for both authors and publishers, too, once they figure out how to navigate the new electronic landscape. We’ll look at the way some of them are doing just that in an upcoming post.


It’s a good reminder of how disruptive technologies often play out in an industry. At first, it seems like the sky is falling, and not every established player adapts their business models correctly to survive the change. Those that are able to adapt, however, often come out on the other side stronger than they were going in.


 


Robert Moss @RobertFMoss is a Partner and the leader of the Technology Platform practice at Optimity Advisors.


September 30, 2014

Records Management – An Invaluable And Holistic Information Tool

by Emily Lanois


 


How efficiently do you and your organization manage records? Does the term information management fill you with trepidation? A records management project that involves a large and diverse group of employees can yield unexpected benefits. It can sneakily change corporate culture and gather valuable insight into how to end inefficient and risky information management practices.


Records management is a discipline that provides great business value to companies that use it correctly. Its business case is most easily built on projected storage and legal discovery cost savings, along with the business value of preserving vital records for the correct length of time. A basic records management project includes creating and socializing a records management policy and records retention schedule. A retention schedule is nothing more than big-bucket records categories with associated retention periods.
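
A big-bucket retention schedule can be sketched as nothing more than a mapping from category to retention period. The categories and periods below are invented for illustration and are not legal or compliance guidance.

```python
# Hypothetical big-bucket schedule: category -> retention period in years.
from datetime import date

RETENTION_YEARS = {
    "contracts": 7,
    "financial": 7,
    "personnel": 5,
    "general-correspondence": 2,
}

def disposal_date(category, record_date):
    """Earliest date a record in this bucket may be destroyed."""
    years = RETENTION_YEARS[category]
    # Simple year arithmetic; a real schedule would also handle leap days,
    # legal holds, and event-based retention triggers.
    return record_date.replace(year=record_date.year + years)

print(disposal_date("contracts", date(2014, 9, 30)))   # 2021-09-30
```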


In addition to the concrete benefits that justify the resources used for creating a records management policy and retention schedule, there are a variety of unintended benefits to be gained that should influence the project’s structure. Incorporating a multitude of employees in a records management project creates conditions for an organizational change in information management culture, enables the project team to recognize and take steps towards correcting broader information management issues, and provides a way to identify and mitigate information management risks. Although it may be most time-efficient to establish the fundamentals of a records management project with a small committee, the additional time spent working with a broader segment of the organization creates value that makes it well worth the effort and provides a base for a greater information management overhaul.


Change in Information Management Culture


When an organization has never participated in a formal records management project, its employees probably believe that keeping all information forever is most efficient. For this cultural belief to change, the employees need to participate in the creation of the records management program. If they are to realize the ease and value of records management, they need to be part of the process.


As with many daunting tasks, employees may overestimate the effort required to identify and manage company records until they try. Initial skepticism turns to support when employees understand how records can be managed using big “buckets.” It takes workflow-specific, face-to-face conversation to convey the ease with which records can be distinguished from non-records and then placed in big bucket retention categories. Once records are identified and categorized, employees are empowered—and often excited—to dispose of records that are past their retention dates.


Employees commonly describe the experience of identifying and categorizing their records (for ultimate preservation or disposal) as “cathartic.” During a recent Records Management project led by Optimity, employees expressed relief that they had gained greater control of their department’s records after doing a records inventory. When employees are empowered to preserve and destroy the records that they create, receive and maintain, they add value to the company as information stewards. They also become champions of the records retention schedule and records management policy, increasing its legitimacy in the organization. Companies receive an unexpected benefit from a simple records management project when corporate culture shifts and information management becomes a valued discipline.


Recognizing and Correcting Information Management Issues


Meeting with employees to gather the information needed to create a records management policy and retention schedule inevitably leads to conversations about information management pain points. Most business groups suffer from daily encounters with issues along the lines of:



Inefficient collaboration processes
Lack of repositories for records storage
Incomplete metadata (or identifying information) about records located in offsite storage
Difficulty accessing information that is used regularly, resulting in the creation of convenience copies and proliferation of sensitive information

Although the above issues translate into inefficiencies and frustration, they are very difficult to remediate without allocating extra resources to implement new processes and technology, and to clean up poorly organized repositories. Additionally, employees rarely feel that it is their responsibility to spearhead an initiative that lies outside of their job description. As a result, employees are often resigned to dealing with information management headaches as part of the job.


Gathering information about the creation, receipt and maintenance of records from a vast sample of employees provides the context for information management problems across business groups. Most employees are only privy to the information management issues that plague their day-to-day work. From the vantage point of a records management project that engages many employees, patterns are identifiable and business cases for enterprise-wide solutions emerge.


Additionally, senior level support behind the records management project can be used to escalate both departmental and enterprise information management issues to higher levels of authority, accelerating company-wide information management change, beyond the scope of records management (i.e. contract management, collaboration, knowledge transfer issues, etc.). The unexpected benefit of progress towards correcting major information management issues is achieved when problems are identified and high-level support is secured during a basic records management project.


Identification and Mitigation of Information Risks


Gathering records management information from a large cross section of an organization is like detective work.  Each question reveals a clue about the information landscape. When asking employees to describe the types of records that they work with, information risks—often unknown to the employee—emerge in conversation. Examples of such risks may include:



Unstructured data repositories not known to management (discovery risk)
Data stored in antiquated technology systems that require great expense to read (financial risk in the event of discovery)
Unsecured PII or PHI stored in company electronic or physical repositories (legal risk)
Vital records stored exclusively on an employee’s vulnerable hard drive (business, compliance and legal risk)

These types of risk are difficult to identify when doing so is the explicit goal at the outset, yet easy to surface through the natural interview process used to build a records management policy and retention schedule. Employees do not consider their practices to be risky, do not have ownership of most of the information they work with, and as such are not very responsive to direct inquiries regarding potential information risks. They are even less likely to respond with helpful information when inquiries are directed broadly at their department, and when there is no benefit from offering information. However, when people are simply interviewed about their practices, especially with the benefits of records management laid out beforehand, the details of information workflow and storage emerge and illuminate areas of risk.


The unexpected benefit of awareness about information risks allows an organization to mitigate them and, more specifically, allows the project’s records management policy to explicitly forbid or mandate certain practices to prevent the behavior that created the risk.


The fundamentals of a records management project lay out the policy and guidelines for how employees should treat company records during their lifecycle, and tell them when and how to discard records when the lifecycle ends. However, a records management project becomes an invaluable and holistic information management tool when large numbers of employees representing a wide cross section of the organization are involved.


 


Emily Lanois is an Associate at Optimity Advisors 

