It’s that time of year again! datapine is following up on our tradition of sifting through and decoding all the noise and hype to come up with the top Business Intelligence (BI) and Analytics buzzwords for the upcoming year. We captured numerous upcoming trends with our 2016 business intelligence buzzwords list, and are excited to reveal our top BI buzzwords for 2017. Only time will tell which of these buzzwords are just hype and which will stand the test of time. Let us know which ones we missed and which ones you don’t think will make it!
Chief Data Officer
We forecast businesses are going to increasingly discuss the role of the Chief Data Officer. CDOs are responsible for enterprise-wide governance and utilization of information as an asset, via data processing, analysis, data mining, information trading and other means. The role is becoming increasingly significant, and it is also a great career path: Gartner predicts 15% of successful CDOs will move to CEO, COO, CMO or other C-level positions by 2020! These CDOs will often be referred to by another business intelligence buzzword, Chief Storyteller, though that title can also fall to other organizational roles.
Nexus of Forces
Nexus of Forces has become a Gartner buzzword lately and will continue to get buzz in 2017. The Nexus of Forces refers to the convergence and mutual reinforcement of social, mobility, cloud and information patterns that drive new business scenarios. This convergence has become the platform for digital business. Gartner says that, although these forces are innovative and disruptive on their own, together they are revolutionizing business and society. Gartner sees the Nexus as the basis of the technology platform of the future.
We threw another related business data buzzword in here: digital business. Not sure what digital business is? Jen Underwood defines it as, “an overarching concept that refers to the blending of physical and virtual worlds. As digital technologies transform, new business models, industries, markets and organizations emerge.”
Data Wrangling
The third place in our business intelligence buzzwords list goes to data wrangling: the process of manually converting data from one “raw” form into another format that allows more convenient consumption. This cleaning work is one of the most time-consuming steps in any data analysis.
It follows a set of general steps:
1. Raw data extraction from the data source.
2. Wrangling that data using algorithms.
3. Collecting the result for storage and future use.
Given the exponential growth of data collection with Big Data, such techniques will become more and more important in organizing all the data available. Data scientists and IT teams spend an indecent amount of time arranging and adjusting chaotic data before being able to use it or communicate it to the rest of the company through state-of-the-art business dashboards.
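The three steps above can be sketched in a few lines of Python; the survey records and field names here are made up purely for illustration:

```python
# 1. Raw data extracted from a (hypothetical) source: messy, inconsistent rows.
raw_rows = ["  Alice , 34 ", "BOB,29", "carol , n/a "]

def wrangle(row):
    # 2. Wrangling: trim whitespace, normalize case, coerce types.
    name, age = (field.strip() for field in row.split(","))
    return {"name": name.title(), "age": int(age) if age.isdigit() else None}

# 3. Collect the cleaned result for storage and future use.
clean = [wrangle(row) for row in raw_rows]
```

Real wrangling pipelines apply the same pattern at scale, usually with dedicated tooling rather than hand-written rules.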
Predictive & Prescriptive Analytics
Predictive Analytics: What could happen?
We mentioned predictive analytics in our 2016 business intelligence buzzwords list. We feel predictive analytics is going to be even bigger in 2017, so we included it again.
We’ve also recognized it as the biggest Business Intelligence and Analytics Trend of 2018. Predictive analytics is the practice of extracting information from existing data sets in order to forecast future probabilities. Applied to business, it is used to analyze current and historical data in order to better understand customers, products and partners, and to identify potential risks and opportunities for a company. It is without doubt a big technological advancement, but the extent to which it is believed to be already applied is vastly exaggerated.
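As a minimal sketch of the idea, a simple trend line fitted to historical figures (all numbers below are invented) can be extrapolated one period ahead. Real predictive models are far richer than a straight line, of course:

```python
# Hypothetical monthly sales figures (made-up numbers).
history = [100, 110, 125, 130, 145, 160]

# Fit a least-squares trend line y = intercept + slope * x.
n = len(history)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(history) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# "Predict" the next period by extrapolating the fitted trend.
forecast = intercept + slope * n
```

The fragility discussed below follows directly from this setup: the forecast is only as good as the history used to fit it.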
The commercial use of predictive analytics is relatively new. The accuracy of the predictions depends on the data used to create the model. For instance, a model built on the factors inherent at one company doesn’t necessarily apply at a second company, and the same may be true of a model built one year and applied the next within the same company. Approaches need to take this dynamic nature into account. Moreover, most predictive analytics capabilities available today are in their infancy: they have simply not been used for long enough, by enough companies, on enough sources of data, so the material to build predictive models on is scarce.
Last but not least, there is the human factor again. The psychological patterns behind why people make decisions cannot be boiled down to simple logic and very often are complex and unpredictable.
Prescriptive Analytics: What should we do?
Prescriptive analytics takes the next step: it not only analyzes, it also recommends action. These analytics use optimization and simulation algorithms to advise on possible outcomes and answer the question: “What should we do?” They allow users to “prescribe” a number of different possible actions and guide them towards a solution. Prescriptive analytics attempts to quantify the effect of future decisions in order to advise on possible outcomes before the decisions are actually made. At its best, prescriptive analytics predicts not only what will happen, but also why it will happen, and provides recommendations on actions that will take advantage of the predictions. We are excited to see how prescriptive analytics moves forward in 2017.
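A toy illustration of the prescriptive idea in Python: simulate a handful of candidate actions under an assumed demand model (every number below is a made-up assumption, not real business data) and pick the one with the best expected outcome:

```python
def expected_profit(discount):
    # Hypothetical demand model: higher discounts sell more units
    # at a lower margin. All parameters are illustrative assumptions.
    base_units, base_price, unit_cost = 100, 50.0, 30.0
    units = base_units * (1 + 5.0 * discount)   # assumed demand lift
    price = base_price * (1 - discount)
    return units * (price - unit_cost)

# Candidate actions to "prescribe": which discount should we offer?
candidates = [0.0, 0.05, 0.10, 0.15, 0.20]
best = max(candidates, key=expected_profit)
```

Production systems replace this toy profit function with simulation or optimization over models learned from real data, but the shape of the question is the same: enumerate actions, score outcomes, recommend.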
Informed Data Lake
We are going to be hearing more about data lakes versus data warehouses in 2017. Nick Heudecker, research director at Gartner, defines a data lake: “In broad terms, data lakes are marketed as enterprise wide data management platforms for analyzing disparate sources of data in its native format…The idea is simple: Instead of placing data in a purpose-built data store, you move it into a data lake in its original format. This eliminates the upfront costs of data ingestion, like transformation. Once data is placed into the lake, it’s available for analysis by everyone in the organization.”
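That “store now, structure later” idea is often called schema-on-read, and can be sketched as follows (the records and field names are invented):

```python
import json

# A "data lake": raw, heterogeneous payloads kept in their original format.
lake = []
lake.append('{"sensor": "t1", "temp": 21.5}')   # JSON from an IoT device
lake.append("2017-01-05,order,42.00")           # CSV from a shop system

def read_json_records(lake):
    # Structure is only imposed when someone reads the data.
    out = []
    for raw in lake:
        try:
            out.append(json.loads(raw))   # parse at read time
        except ValueError:
            pass                          # skip records in other formats
    return out
```

Nothing was transformed on the way in; each consumer decides how to interpret the raw payloads, which is exactly the upfront-cost saving (and the governance risk) Heudecker describes.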
The term data lake has actually been around for a couple of years and has run into some roadblocks. Hired Brains founder and BI specialist Neil Raden recently proclaimed the future of the data lake is the Informed Data Lake. He states the difference between an Informed Data Lake and the static data in a Data Lake, “is a comprehensive set of capabilities that provides a graph based linked and contextualized information fabric (semantic metadata and linked datasets) where NLP (Natural Language Processing), Sentiment Analysis, Rules Engines, Connectors, Canonical Models for common domains and cognitive tools that can be plugged in to turn “dumb” data into information assets with speed, agility, reuse and value.” Phew, that is a mouthful. The growth of the IoT will help bring about this renewed focus on data lakes. We are excited to see if all this buzz turns into reality!
With an annual increase of 40% to 50% estimated by IDC, digital data has a long life ahead. Such substantial growth will definitely have an impact on data warehousing, analytics and business intelligence: we do not capture, use and store data the same way as before. The arrival of the IoT will produce another tremendous amount of unstructured data that will require adequate software in order to be exploited properly. However, there are different ways of capturing and exploiting this data depending on how it is structured – and whether it is structured at all. Bernard Liautaud, founder of Business Objects, warns against new dangers introduced by this data discovery, which he calls Information Anarchy:
“Information anarchy results from individuals or entire departments taking their informational needs into their own hands. As businesses grew more competitive in past years, departmental managers realized they needed better information to make good business decisions. They realized that whatever information they could procure from the glass house of the IT department would not be adequate.”
Knowing how to create value in extracting the right information from large sets of unstructured data is what will make the difference for businesses in the future.
Shadow IT
Shadow IT refers to all the communication and information means used outside the company’s official infrastructure, without the admins’ approval. From private smartphones to USB sticks, cloud services to private printers, this is a many-sided phenomenon that keeps growing and can be a real nightmare for companies.
Part of Shadow IT developed from good intentions at first, like improving one’s productivity when facing ineffective equipment: an employee may use his personal, faster and steadier computer instead of wasting time on the company’s machine, which crashes whenever its 4GB of RAM is overloaded. But the main reason for the development of Shadow IT is the obsolescence of IT approval processes in most companies. These heavy, complicated processes were implemented 25 years ago, remain unchanged, and have to be redesigned. Companies should focus on their personnel’s needs and rethink their procedures and access granting in order to make their staff more efficient, productive and, finally, better satisfied.
6 steps to control shadow IT
- Try to understand it: ask your employees if they resort to this kind of action, how often, and why. This might teach you a lot about the situation of your company, the weaknesses of your IT structure, the scale of the problem, and how to address it.
- Make it a priority and communicate about it: raise awareness among your staff as every employee at every level should know about that important issue.
- Implement a set of rules to follow: from what you have discovered, create some regulations and norms that should be applied and communicate them to newcomers as well as experienced workers.
- Offer alternative methods: when the IT service refuses a request from an employee, it should come with an alternative solution in order to avoid the frustration that generates shadow behavior.
- Speed up the decision process: flexibility and fast-response to problematic issues are super trendy buzzwords too, and they have to be applied in that kind of situation in particular.
- Keep in touch with the various services: implementing a set of rules is not enough to avoid certain behaviors. Staying proactive in asking employees which problems they may face, and helping them resolve those problems, will encourage them to ask for your help later.
In spite of all this, and especially in a large organization, it is possible that the various services may not come to you when they need help; many of them might prefer to maintain their independence. This is why some companies have developed monitoring software that keeps an eye on the IT infrastructure and warns admins before a problem turns into a drama.
Augmented Reality
Augmented Reality (AR) enriches an image or a video with complementary information (superimposed images or text); it is a real-time merger of virtual information with physical reality. AR has already been adopted by fashion and manufacturing companies to offer an enhanced customer experience: you may, for example, try on clothes to choose the right size and colour, or place furniture to see how it fits in your flat. Creating memorable interactions with customers is extremely important for today’s companies, and AR seems to be a good opportunity for that. Now, what could be AR’s role in the more virtual and complex data visualization domain?
Big Data, according to many experts, is one of the biggest challenges of our decade. Even if the machine is able to sort data out, in the end the human eye has to analyze all the figures, and comprehension of all the information collected is the major issue. What if augmented reality were the solution? What if we visualized data, and hence exploited it, in a totally different way thanks to augmented reality? That’s why we put AR in our business intelligence buzzwords list for 2017.
Nevertheless, applying augmented reality to data visualization without first thinking of a smart way to display it would just produce a tidal wave of overwhelming numbers and figures. Some business intelligence tools already exist to ease data-driven decision-making, for instance through smart dashboards. In the near future, 3D data visualization will be a reality, too.
360 Degree View
In-depth customer analysis is often referred to as a 360-degree customer view. It is highly important when you want to get closer to your customers and understand their purchasing habits, their presence and opinions on social media, so as to give them what they want. Such knowledge is now much easier to collect and access than before, thanks to big data and refined algorithms tracking and monitoring users’ activity across various touchpoints, and providing insights about the difficulties they may face, the value they can get out of the service, what they want to purchase next, and so on.
Adopting such a holistic approach will help you improve campaign effectiveness, deliver superior customer experience, drive better engagement, generate more revenue and long-term loyalty: hence the importance of a good understanding of these analytics.
Source: Qubole – 360-Degree View of Customer: Seeing the Big Picture Through the Big Data Lens
Big data is crucial to understanding customers and improving their experience. More and more businesses now resort to modern analytics platforms that combine various sources of data in real time in one central point, to exploit them in a smoother way. Sharing an up-to-date, close-knit and authentic overview of customers is key to avoiding data pollution and misleading, duplicated or outdated analytics, and in the long term to staying competitive and boosting marketing ROI. That’s why the 360-degree view is one of our business intelligence and analytics buzzwords for 2017.
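A toy sketch of how such a view might be assembled, merging records from several touchpoints into a single customer profile (all source names, fields and values here are hypothetical):

```python
# Three hypothetical touchpoint systems, each knowing part of the customer.
crm     = {"c1": {"name": "Ada", "segment": "premium"}}
web     = {"c1": {"last_visit": "2017-01-10", "pages_viewed": 14}}
support = {"c1": {"open_tickets": 1}}

def customer_360(cid):
    # Merge every source's record for this customer into one profile.
    profile = {}
    for source in (crm, web, support):
        profile.update(source.get(cid, {}))
    return profile
```

Real platforms face the hard parts this sketch skips: identity resolution across systems, conflicting values, and keeping the merged view fresh.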
Metadata
According to the Oxford dictionary, metadata is “a set of data that describes and gives information about other data”. In other words, metadata is the structure of data records in a database; it is valuable information about data. There are various types of metadata across an organization (from relational databases to random graphic files or documents) that are essential for delivering good-quality work, but sometimes the metadata is buried in program code, and only the program knows for sure what the structure is. In that case you cannot share your data, and your business may suffer. To share your data in an efficient way, you need to manage it properly. Metadata is here to consolidate and boost the whole data life cycle, and at the same time it can provide a helpful tracking thread.
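As a small illustration, metadata kept alongside a dataset lets any program interpret the records without guessing their structure (the columns and values below are invented):

```python
# A dataset of anonymous tuples, plus metadata describing what they mean.
dataset = [("2017-01-01", 120.5), ("2017-01-02", 98.0)]
metadata = {
    "columns": [
        {"name": "date",  "type": "str", "format": "ISO-8601"},
        {"name": "sales", "type": "float", "unit": "EUR"},
    ],
}

def as_records(rows, meta):
    # Use the metadata to turn anonymous tuples into labeled records.
    names = [col["name"] for col in meta["columns"]]
    return [dict(zip(names, row)) for row in rows]
```

Without the metadata, another program would have to guess which field is which, which is exactly the sharing problem described above.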
The development of data lakes – storage solutions that can support large amounts of raw, semi-structured or fully structured data – makes metadata key to managing a data lake and preventing it from becoming a “data swamp”. Merrill Lynch estimated a decade ago that about 80% of a company’s information was unstructured; no recent report confirms that figure today, even though it is still widely quoted, because it makes sense and we experience it every day. Data management is thus all the more crucial in the future if firms do not want to miss out, or lose market share or clients, because of poor administration of their own information. Good data management will bring a greater ROI from IT systems, enhance interoperability and the sharing of company information, and curtail the risk of data corruption or loss.
Mass Personalization
Mass personalization is based on improving one or several characteristics of a product or service in order to better satisfy the customer. At the same time, it brings economies of scale to the company by eliminating everything that does not bring value to customers. Mass personalization differs from mass customization in that it does not co-design with the client; the company itself takes charge of the personalization process, based on data collected about its customers.
There are two types of personalization: implicit, where the company builds a profile of the consumer from collected data compared to similar consumers, as with YouTube or Amazon recommendations; and explicit, where the client is directly asked about his or her preferences, so as to surface products and services corresponding to those expectations.
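Implicit personalization can be sketched with a naive “similar customers” recommender; the purchase histories below are made up, and real systems are far more sophisticated:

```python
# Hypothetical purchase histories per customer.
purchases = {
    "alice": {"boots", "scarf", "hat"},
    "bob":   {"boots", "scarf", "gloves"},
    "carol": {"laptop", "mouse"},
}

def recommend(user):
    # Implicit personalization: find the most similar other customer
    # (largest purchase overlap) and suggest what they bought that we didn't.
    mine = purchases[user]
    peer = max((u for u in purchases if u != user),
               key=lambda u: len(purchases[u] & mine))
    return sorted(purchases[peer] - mine)
```

Explicit personalization, by contrast, would skip the inference and simply filter on preferences the client stated directly.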
What does that mean for your business? In an always-changing environment, your customers are accessing an incredible amount of products and services ready to be shaped according to their needs. That is why it is important for marketers to implement data-driven marketing policies that do not limit themselves to the basic segmentation process. Integrating all the data they collect to carry out agile and understandable decision-making adapted to their customers is essential for their brand-experience and their retention.
To do so, business intelligence software has been developed to help marketers manage and control their various customer channels, preventing them from losing time and energy breaking down internal silos with IT teams, and providing the opportunity to create a more holistic customer view.
Single Point of Truth
Single point of truth (SPOT) – also known as single source of truth (SSOT) – is the practice of structuring information models and associated schemata such that every data element is stored exactly once.
In practice, that means every other data location refers back to the original “source of truth” location. If anyone updates the primary location, the change propagates through the whole system, avoiding conflicting copies and forgotten duplicate values.
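The idea can be sketched in a few lines of Python (all names and fields are invented): every view holds a key into the primary store rather than a copy of the record:

```python
# Primary store: each record exists exactly once, here keyed by customer id.
customers = {"c1": {"name": "Ada", "email": "ada@example.com"}}

# A report holds references (keys), never copies of the data itself.
report_rows = ["c1"]

# Updating the primary location updates every view that dereferences it.
customers["c1"]["email"] = "ada.lovelace@example.com"
resolved = [customers[key]["email"] for key in report_rows]
```

Had the report stored a copy of the email instead of the key, the update would have left a stale, conflicting value behind, which is exactly the problem SPOT is meant to prevent.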
Today, data analysis software helps compile all the accumulated information, increasing the value of the company’s intelligence by facilitating its management.
2017 is shaping up to be yet another exciting year for the Business Intelligence industry. While we forecast there will be great buzz around re-focusing business intelligence strategy, simplicity, governance and security, there are also a lot of potentially disruptive trends and technologies coming down the pipeline. We look forward to watching these trends and buzzwords. We also look forward to checking our success when we start making our list for 2018!