Publications
Events and activities are substantiated by a stream of publications stemming from the work of the Strategic Business Analytics Chair's research teams (professors, PhD candidates, and students), in some cases in collaboration with external partners. While some publications are strictly academic in nature, others are accessible to a broader audience and engage the public in the process. External think tanks and the media are involved as appropriate.
Position papers 2024
Artificial intelligence (AI) refers to systems that can perform tasks normally requiring human intelligence and skills. Because such systems can be applied in various organizational contexts and to a wide variety of problems, and because technological developments toward generative AI are accelerating, the question of how to manage the responsible development and implementation of AI is more prevalent than ever. In this article, I build on the insights of my co-authored book “Managing AI Wisely” to discuss management practices and present recommendations for responsible AI development and implementation.
Why should managers care about AI?
AI refers to a field in Computer Science that focuses on developing systems that can perform tasks that normally require human intelligence. By using so-called machine learning (ML) algorithms, AI systems can self-learn and generate increasingly accurate predictions, an activity that usually belonged to human ‘knowledge workers’. As data becomes increasingly crucial in organizations and computational power grows, more companies are adopting AI to enhance their business processes. AI applications, ranging from predictive analytics and image recognition to product recommendations and generative AI systems, are now handling tasks traditionally performed by humans. For instance, AI can assist doctors in detecting tumors, enable lawyers to retrieve information in seconds, and act as personal assistants by attending meetings and taking notes autonomously.
ML-based AI systems operate fundamentally differently from previously implemented technologies like expert systems or ERP systems. These differences can be summarized in three key points: (1) to learn, AI systems require vast amounts of data; (2) due to their machine learning capabilities, it is challenging to explain how these systems generate their outputs, making them “black-box” systems; and (3) AI systems often perform tasks typically associated with knowledge workers. These unique aspects of AI present new challenges for managing its development and implementation. This raises questions about what is required of managers to successfully navigate the transition to AI, and the managerial activities necessary for effective and responsible AI implementation. Based on fieldwork at eight large Dutch or European organizations that have integrated AI into their internal processes (see Table 1), I outline the key strategies for successful AI implementation: organizing for data, testing and validating, brokering, and managing change.
Organizing for data
Data are the central building blocks of AI systems. As a result, there is a significant focus on digitizing and “datafying” work and organizational processes. While organizations often pay a lot of attention to the value and potential of “big data”, there is much less knowledge of the types of data needed to ensure AI systems effectively integrate with existing organizational processes. MultiCo (pseudonym), one of the largest fast-moving consumer goods organizations in the world that developed an AI system to aid recruitment decisions, provides a good example of this often-overlooked issue. In this case, the AI system needed to predict not only which candidates would perform well but also which ones would fit within the organizational culture. As such, to make the predictions as accurate as possible in the organization’s context, the ML algorithm had to be trained on data specific to the organization. However, the organization did not yet have the right data available to train the algorithm on this specific task. As a consequence, three hundred existing employees were asked to play online neuroscientific games through which their personality traits could be measured. This data was then used to develop the AI system, which would subsequently make predictions about new candidates.
To make sure that AI systems fit well within existing organizational processes, at least part of the data needs to be domain-specific. For organizations, it is important to realize that such data often does not yet ‘exist’ but needs to be carefully gathered and crafted so that there is sufficient data to train the algorithm with. When organizing for data, it is therefore crucial to consider which actions are necessary to create new data, who should perform these actions, and whether this can be done internally or whether an external party is required. This helps to facilitate new forms of collaboration between managers and data experts, through which insights can be gained into how data-related decisions influence the further development of AI systems, and ethical issues can be addressed early in the process.
Testing and validating
After the initial development phase, questions arise regarding whether and how AI systems will function in practice. Typically, these questions are addressed by the data scientists developing the AI tool and involve technical inquiries regarding the measurability of outcomes and which methods yield the best results (called ‘testing’), as well as whether the outcomes are mathematically explainable (termed ‘validating’). Alongside these technical concerns, there is typically little managerial attention to integrating the organizational context into the testing and validating of AI systems.
The case of ‘predictive policing’ at the Dutch police offers a good illustration of how the organizational context can be involved in testing and validating AI systems, since here management was interested in finding out whether the algorithmic outputs fit within the existing police work processes. For example, one of the key requirements during the testing phase at various police stations was that users experienced the system as user-friendly, that they could easily generate the predictions, and that these could be integrated into decisions about police deployment. In this case, testing and validating was thus not so much about checking whether the predictions were accurate, but rather about determining whether the outputs could be embedded within existing police operations.
It is thus essential to consider testing and validation as an implementation activity with organizational consequences instead of merely a technical decision residing in the realm of AI developers. However, this is not without its challenges, for it raises complex questions, such as who decides whether AI systems are ‘good enough’. It also becomes more challenging to determine who is responsible for the outcomes generated by the algorithm; developers, managers, or users? Moreover, testing and validating in relation to existing organizational processes also requires managers to take into account internal and external guidelines, laws, and regulations.
Brokering
In the responsible implementation of AI systems in existing organizational processes, one of the key challenges faced is the difference between the mathematical reasoning embedded in AI systems and the human reasoning and domain knowledge of the intended users. For instance, an AI system can calculate the probability of a convicted criminal reoffending. However, the mere ability to generate these predictions does not mean that judges (as intended users) will abandon their expertise and blindly trust such predictions. Therefore, there is currently a lot of discussion (both academically and in practice) about the necessity of interpreting, translating, or ‘aligning’ the outcomes of AI systems for users. A particularly interesting consequence of this need is the emergence of new “brokering” or “bridging” roles, which can translate, interpret, or explain algorithmic outcomes and thereby ‘sell’ the predictions to users.
ABN AMRO, a large Dutch bank where an AI system was implemented to detect money laundering activities, offers an excellent example of how such a brokering role can be implemented. Here, a group of senior analysts was tasked with making the predictions usable for their colleagues. In their brokering role, the senior analysts were trained to unpack the algorithmic predictions and talked with the developers to gain insights into, for example, the main variables used for generating a specific prediction. This way, the senior analysts were increasingly able to explain to their colleagues why a transaction might be flagged as a potential money laundering activity, which heavily influenced the overall willingness to use the tool.
When organizing for brokering, it is important to consider the potential influence of this role, and various managerial decisions will have to be made. For example, to what extent should a broker be limited in how the algorithmic outputs can be translated and interpreted? How intensive should the contact between the brokers and the end users be? How can feedback loops be organized between the brokers and the AI developers? And should the brokering role be temporary or permanent? The answers to such questions are largely context-specific and therefore require managerial involvement and an understanding of the importance and consequential nature of this role.
Managing change
Finally, one of the most pressing questions about the implementation of AI is what will happen to existing work. There is a widespread assumption that the introduction of AI will lead to large-scale job loss. However, more and more researchers now oppose the idea that the years of experience and knowledge that professionals have gained – both during their training and throughout their professional careers – can or will be taken over by AI systems. Currently, we do not see many jobs disappear completely, but we do see that AI systems affect how work is performed, often in unexpected ways. Paying attention to how work changes through the implementation and use of AI systems brings such unexpected consequences to the fore. An example is the implementation of an AI-driven chatbot at the insurance helpdesk of Centraal Beheer, one of the largest insurance companies in the Netherlands. Because the chatbot could take over the ‘simple’ questions, the more complex and emotionally intensive questions remained for the human helpdesk employees, who needed different skills than before to deal with these more psychologically intense requests. We can also see changes in authority and work responsibilities. For instance, a police officer now has less of a say in where surveillance will take place because AI-based crime predictions are perceived as more objective and comprehensive. Finally, many new roles and functions emerge – roles needed to develop, implement, and govern AI, such as the brokering role discussed above, but also data engineers and AI auditors.
It is, therefore, important for managers to critically yet proactively address current claims about the impact of AI on existing work. To manage change, it is essential to anticipate direct as well as indirect changes in work. Additionally, especially due to the limited knowledge about how work is going to change in the upcoming years, it is important for organizations to continue to learn and improve work requirements in line with the ever-faster technological developments.
Managing AI WISEly
AI systems, as already mentioned, involve a fundamentally different technology than the information technologies that many organizations have implemented so far. As described above, this radically different technology requires new management activities. In these management activities, four principles recur that are crucial for the successful and responsible development and implementation of AI in organizations. These principles can be formulated as four recommendations for managing AI in practice, brought together in the acronym ‘WISE’:
Work-related insights. The AI system must be based on locally acquired insights, both in terms of data, testing and validation, the brokering role, and the work processes that ultimately need to change. For managers, this means that although the possibilities of AI may seem all-encompassing, they must carefully navigate the ‘AI-hype’ and keep work practices central during the development, implementation, and use of AI systems.
Interdisciplinary knowledge. Different domains (such as developers, users, and legislators) need to be brought together, and additional training should be provided where necessary. As a manager, one must carefully consider who needs to have what knowledge and how this knowledge can be made available. Naturally, when assembling a team responsible for the development, implementation, and use of AI, a manager must ensure that the different disciplines and stakeholders are represented.
Sociotechnical changes. The implementation of an AI system should be seen as an organizational change process. Conversely, the system must also be adapted to the needs of the work processes. AI is not something that ‘happens’ to a manager. Nor is it a ‘force’ that is simply unleashed on an organization. Management has the task of actively guiding what the system does and does not do. A wise manager considers both the characteristics of the organization and the technological properties when making decisions about the implementation of AI.
Ethical awareness. Discussions need to be held about the ethical considerations and the explainability of both the AI system and its underlying assumptions, as well as the consequences that certain decisions can have on the further development of the AI system.
Consider, for example, the data that is made available and used and the consequences this can have for employees or customers. This requires an active and critical attitude towards the choices being made throughout the entire process – from development to use.
Final remarks
It should by now be obvious that managers, as important organizational decision-makers, have a significant role and responsibility in responsibly developing and implementing AI systems. Of course, not everything can be done at once and I also do not imply that such responsibilities can or should be carried out by a single manager. On the contrary, wise managers create a wise team around them to help look beyond the AI hype and to answer the question: Are we managing AI wisely?
How can Marketing and Artificial Intelligence address the ecological crisis?
Marketing is often criticized as a contributor to climate change and ecological damage. This stems from the perception of marketing as mere communication that drives consumers to buy unnecessary products, leading to overproduction and environmental harm. The marketplace is sometimes portrayed as a system where people are manipulated into desiring low-value goods they would not otherwise want. While there is some truth to this, it is a limited view that hinders our ability to envision positive change through marketing.
Marketing is not just about persuasion; it is about understanding and adapting to human needs. This understanding is crucial. Companies must respond effectively to these needs to build lasting brands, striving for greater efficiency and effectiveness than their competitors. The essence of marketing lies in creating value for consumers by meeting their needs. As marketing becomes more adept at meeting underlying needs, the physical aspect of consumption diminishes. Andrew McAfee's book, "More from Less," provides numerous examples, such as how marketing research helps companies identify and eliminate unnecessary packaging in e-commerce. Online shoppers prioritize the product's contents over the packaging, rendering flashy displays unnecessary. This understanding of consumer needs is a key aspect of marketing that we must all be more aware of.
By understanding that successful marketing is about crafting customer experiences that maximize benefits, we can rethink its role in addressing growing environmental concerns. Effective marketing begins with consumer insights, and artificial intelligence (AI) can enhance this process: by processing vast amounts of unstructured data to identify patterns and underserved needs, it can lead to fewer failed product launches and reduced economic and environmental costs. Managers can therefore embrace an AI customer-centered marketing approach, offering more personalized products that reduce overproduction and unsold goods instead of promoting products that will remain at the back of the cupboard.
Empowering Change: Understanding Marketing's Role in the Ecological Crisis
Bernard Arnault (LVMH Chairman and CEO), in his 2016 speech at the Oxford Union, emphasized that the marketing approach is about understanding what consumers want and crafting an offer that fulfills their needs. He even went as far as to say that LVMH does not 'do' marketing, as they focus on the offer: the products. Luxury products do meet some consumer needs. Luxury companies like LVMH or Hermès are very successful because they excel at offering products with a high psychological value that satisfies needs such as the need to belong to a social group or the need to signal higher status. Nevertheless, embracing an 'offer-centered' approach might maximize companies' profits but also maximize environmental harm: this strategy can produce products consumers do not need and compel them to buy through communication and promotional techniques.
Companies following an offer-centered approach tend to neglect the backbone of marketing: marketing research. These companies do not value marketing research because they do not trust it (e.g., some managers question the validity of marketing studies), do not have the resources (marketing research can be expensive and requires experts), or see it as a limit to creativity. For instance, most winemakers, perfumers, or fashion designers make their products without sound marketing research. Marketing research refers to the process companies use to collect and analyze information about their market, information used to identify and define opportunities. It supports strategic decisions about which customer groups to serve, which customer wants to address, and the best way to create customer value. The subsequent marketing steps (marketing planning and implementation) are based on marketing research and strategy. Marketing research is therefore the pillar of any customer-centered marketing process, and many managers should pay more attention to this first step rather than focusing on the marketing mix (price, product, place, i.e., distribution, and promotion, i.e., communication). Notably, many managers use 'marketing' to refer to communication, retailing, and price promotion strategies, ignoring marketing research. Big Data and AI are revolutionizing marketing research, addressing many of its prior limitations. The following section develops how, but first we highlight the effects of an offer-centered approach on waste and discuss how a better understanding of consumer needs can decrease it.
Waste is generated by unreleased products (e.g., when companies develop new products and test them to find that they do not create value for consumers) and unsold products that expire in warehouses or store shelves. For instance, the European Environment Agency (2024) estimates that between 264,000 and 594,000 tonnes of textiles are destroyed before use yearly. Food and beauty industries are also commonly criticized for waste. Zero Waste Europe (2020) estimates that 20% of the food produced is wasted in Europe. Waste figures are not disclosed in the beauty industry, but experts estimate that between 20% and 40% of beauty products, depending on the category, end up as waste (Vogue Business, 2021). Unreleased and unsold products are caused by inadequate or missing marketing research since companies offer products consumers do not want. A deeper understanding of customer needs identifies new product opportunities, improves the design of new products, helps manage product portfolios, and improves existing products (Timoshenko & Hauser, 2019). Therefore, a customer-centered marketing strategy, when fully embraced, can inspire change. It increases the likelihood of making and offering products that meet consumers' needs and wants, limiting the waste of resources. Some fashion and beauty companies have adopted a more customer-centered marketing approach to decrease waste. Companies like Prose and Pure Culture Beauty for skincare and haircare products and Asphalte and Fashable for clothing have embraced the marketing approach by making more personalized products based on customers' data. These companies have demonstrated that a customer-centered marketing approach can reduce waste and contribute to the ecological transition.
Waste is also due to products bought but barely used because customers do not need them. Consumers bear the first responsibility, since they are free to purchase or not. Nevertheless, companies, such as those in the fast fashion industry, should not encourage such behavior and should figure out how to encourage eco-friendly consumer behavior through marketing research. For instance, research shows that consumers often purchase the enjoyable experience of shopping itself, sometimes called "retail therapy." Thanks in part to marketing, this is becoming less prevalent and destructive. Companies are finding more efficient and effective ways to meet people's needs for escapism and immersive experiences, such as through digital platforms like social media and games. These digital experiences can be tailored to individuals, offering a more fulfilling escape with less material waste. Marketing research must help companies respond effectively to other needs in a more ecological manner. For instance, luxury companies may enhance offline and online experiences over ownership to satisfy the need to belong to a social group or signal higher status, allowing products to be used multiple times. Marketing communication, often negatively characterized as the main culprit in how businesses manipulate consumers and contribute to ecological destruction, can effectively encourage sustainable consumption by tapping into feelings of hope, pride, and guilt, by offering concrete information about what can be lost or about local impacts, or by leveraging social influence (White et al., 2019).
The second part of the present paper focuses on how AI in marketing research can help companies offer fewer products that are more personalized and closer to what customers need and want to decrease waste.
Envisioning the Future: Unleashing the Power of AI To Predict and Meet Consumers’ Needs
Marketing requires a deep understanding of customer needs because such understanding helps segment markets, identify strategic dimensions for differentiation, and make efficient marketing mix decisions (Timoshenko & Hauser, 2019). Companies have traditionally used interviews, focus groups, surveys, and conjoint analysis to identify customer needs. In every case, consumers are aware that they are part of a research effort, which can trigger demand effects and biases. To address this limitation, managers increasingly use user-generated content (UGC) (e.g., online data from search engines, social networks, or product reviews) to identify and predict customer needs, because it is more extensive, updated continuously, available quickly, and at a low incremental cost to the firm. Relevant UGC for capturing trends (i.e., new consumer wants) includes reviews, complaints, new uses of existing products, or increasing mentions of product attributes.
Furthermore, machine-learning methods facilitate the analysis of large UGC corpora. Machine learning and natural language processing (NLP) are well-suited to quantifying information from unstructured data to gauge consumer emotions and opinions. For instance, Timoshenko and Hauser (2019) apply a convolutional neural network to customer reviews to filter out noninformative content and cluster dense sentence embeddings to avoid sampling repetitive content. They show that their AI approach improves the efficiency of identifying customer needs from UGC, which is at least as valuable a source of customer needs for product development as traditional methods.
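As a rough illustration of this two-step idea (filter out uninformative content, then cluster to avoid repetitive samples), consider the sketch below. It is a simplified stand-in, not the authors' method: a word-count heuristic replaces their trained CNN filter, TF-IDF vectors replace dense sentence embeddings, and the reviews are invented.

```python
# Simplified sketch of the two-step UGC pipeline: (1) filter out
# uninformative review sentences, (2) cluster the rest so analysts
# read diverse rather than repetitive content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = [
    "Great product",                                 # uninformative
    "The lid cracks if you tighten it too much",     # describes a need
    "Fast shipping",                                 # uninformative
    "I wish the bottle fit in a car cup holder",     # describes a need
    "The cap leaks when the bottle is on its side",  # describes a need
    "Love it",                                       # uninformative
]

# Step 1: naive "informativeness" filter -- keep sentences long enough
# to describe a concrete usage context (the paper trains a CNN for this).
candidates = [s for s in reviews if len(s.split()) >= 5]

# Step 2: vectorize and cluster the remaining sentences, then sample one
# representative per cluster to maximize the diversity analysts see.
vec = TfidfVectorizer()
X = vec.fit_transform(candidates)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for cluster_id in range(km.n_clusters):
    rep = next(s for s, lbl in zip(candidates, km.labels_) if lbl == cluster_id)
    print(f"cluster {cluster_id}: {rep}")
```

The design point survives the simplification: the value of the pipeline comes less from any single model choice than from reducing the volume of text a human analyst must read while preserving its diversity.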
Rapidly developing large language models (LLMs) promise to understand and emulate a broad spectrum of human behaviors and preferences. Built on the Transformer architecture and trained to predict subsequent words, LLMs can generate plausible text based on extensive training data. Due to LLMs' capacity to mimic humans across various contexts and their inherent stochastic nature, marketing researchers have explored their use in producing survey samples (Brand et al., 2024). Furthermore, companies have used LLMs to better understand a market and guide new product development. For instance, an Asian beverage company used LLMs to predict which flavors to launch in Europe by feeding the model with customer information. However, recent research (Goli & Singh, 2024) shows that using LLMs to predict consumer preferences can be misleading; still, they can be valuable for identifying potential factors or mediators that explain preference heterogeneity across contexts. Future models may better capture consumer preference heterogeneity, but accurate prediction remains a distant prospect (Goli & Singh, 2024). Companies have recently used generative AI to build intelligent avatars that stand for personas. These AI entities are based on customer data, and managers can interact with them: these virtual interactive personas can be queried about their preferences and consumption habits. So far, AI can mainly mimic current attitudes and behavior, but companies are more interested in predicting trends (i.e., what consumers will want).
More research is needed to develop AI tools that spot new trends. This will require extensive training data, including CRM (customer relationship management) data, product reviews, online forums, social networks, and public reports. Future models must identify which sources capture new trends earliest and most accurately and use these sources to inform managers about what products to launch next year. For instance, L'Oréal uses what influencers say on Instagram to detect new "hot" topics that will become trends the following year. Does UGC from influencers or from consumers predict new consumer wants more effectively? Which influencers? Which consumers? The answers depend on each market and segment and will change over time. AI can help managers know which UGC sources to monitor and identify incoming trends as early as possible, based on the growing popularity of words among these sources. AI can detect these newly popular words automatically and identify the users who set new trends based on variables such as earliness and predictive power.
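A minimal sketch of the word-popularity signal described above: flag words whose usage grows fastest between two time windows of UGC. The posts, the growth threshold, and the add-one smoothing are all illustrative assumptions, not anyone's production system.

```python
from collections import Counter

def count_words(posts):
    """Word counts across a list of short UGC posts."""
    return Counter(w for post in posts for w in post.lower().split())

def trending_words(prev_posts, curr_posts, min_growth=3.0):
    """Words whose count grew by at least min_growth x between windows.

    Add-one smoothing gives words absent in the previous window a
    finite growth score instead of a division by zero.
    """
    prev, curr = count_words(prev_posts), count_words(curr_posts)
    scores = {w: (c + 1) / (prev[w] + 1) for w, c in curr.items()}
    return sorted((w for w, g in scores.items() if g >= min_growth),
                  key=lambda w: -scores[w])

# Invented example: beauty-related posts from two consecutive months.
last_month = ["classic red lipstick looks", "red lipstick tutorial"]
this_month = ["glass skin routine", "glass skin glow serum", "red lipstick"]

print(trending_words(last_month, this_month))  # → ['glass', 'skin']
```

A real system would add the refinements the text calls for, such as weighting sources by how early they predicted past trends and scoring users by the predictive power of their posts, but the core signal is this relative-growth comparison.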
Furthermore, future research must examine which UGC activity best predicts future trends per segment and context. For instance, UGC can relate to nonconsumption, dissatisfaction with current offers, and creative uses of existing products. AI will access more data and become more sophisticated, increasing its ability to absorb and infer rich aspects of consumer wants. The ultimate marketing tool will be a virtual interactive persona, which stands for a market segment, to test new product ideas before making prototypes.
Final remarks
AI can leverage various data sources (e.g., purchase history, browsing behavior, and social media activity) to forecast future purchasing trends. This allows companies to anticipate consumer needs and go to market effectively. This proactive approach allows businesses to ensure that the right products are available at the right time, reducing waste. Ultimately, the ability to predict consumer needs fosters a deeper understanding of market trends, enabling businesses to stay ahead of the competition and continuously innovate their products and services. Integrating AI into marketing strategies will be crucial in achieving sustainable growth and staying ahead of industry trends. To achieve accurate predictions, companies must have the correct data and capabilities. This requires investment in data gathering, data processing, and analytical skills. Lack of data or wrong assumptions can lead to misinterpretation of results, resulting in ineffective targeting strategies. Therefore, differentiation and success will be based on access to quality data, AI predictive modeling power, and the effectiveness of marketing planning and implementation (e.g., companies will still need good salespeople to explain and persuade as long as consumers prefer humans to virtual agents). If more companies embrace an AI customer-centered marketing approach, the number of new products would decrease, and companies would focus on a limited number of innovations that bring more value to consumers. This approach is how marketing and AI can contribute to resolving the ecological crisis.
AI as perceived by ESSEC students: A response to contemporary issues?
by Thomas L. Huber , 16.11.23, with Jeroen Rombouts
On February 6th, 2023, Google announced the deployment of Bard, a rival to ChatGPT, in response to the collaboration between OpenAI and Microsoft to integrate GPT-4 into Bing. This competition between tech giants in the AI space is fuelling an innovation race to redefine web search, seeking to provide a more natural and intuitive experience for users.
Such commercial rivalries are but a glimpse into the myriad ways AI is poised to reshape our world. Therefore, it is crucial to delve deeper and critically assess the profound impact that Artificial Intelligence is set to have on both our professional and personal spheres. This involves understanding the mechanics of AI-powered technologies and the vast expanse of their potential applications. With Machine Learning methods being woven more deeply into strategic decision-making and value creation processes across organizations, we must gauge their economic, ethical, social, ecological, and political repercussions. Consequently, it's imperative for future managers to acquire proficiency in these tools.
Thomas Huber and Jeroen Rombouts, professors in the Information Systems, Decision Sciences and Statistics Department, explored ESSEC students’ perceptions of AI. Over 1000 students in a course on Introduction to AI for Business answered a survey looking at their attitudes towards AI, including its potential and its risks.
Do they perceive AI as a threat, or rather as a solution to current challenges and crises? A mix of both: they tend to think that AI can address society’s grand challenges, but that it bears risks.
AI: a response to contemporary challenges
Artificial intelligence is advancing at a rapid pace, finding applications in an expanding range of domains and consequently reshaping the landscape of value creation. For example, progress in fields such as language processing for translation or text summarization underscores this transformative impact. ESSEC students are optimistic about these future impacts: the average response to the question "On a scale from 1 (low) to 20 (high): how do you see the benefits related to AI?" is 16/20, highlighting students' enthusiasm for AI.
This enthusiasm for the future of AI-powered solutions is also underscored by a sentiment analysis. With many students displaying anticipation, optimism, and joy, it seems that artificial intelligence is primarily perceived as a tool whose multiple use cases can positively impact our way of living and consuming.
ESSEC students envision a myriad of managerial and business applications for emerging AI technologies. Some of the standout themes from their responses encompass:
· AI-driven automation of tasks and support in managerial decision-making.
· Broader adoption of text and face recognition technologies.
· AI-powered personal assistants.
· Recommendation algorithms on social networks and e-commerce platforms.
ESSEC students believe that such AI-powered technologies hold the potential to create significant value, especially in the health sector, with 40% pinpointing this as AI's paramount application. Furthermore, they recognize AI's potential role in combating climate change, evident from an average score of 14/20 when asked about its significance in this arena. Finally, students anticipate innovative AI tools to become integral to their academic experience. On the one hand, they anticipate that AI will be used to improve pedagogy in business schools, e.g., by supporting adaptive learning approaches. On the other, they express a desire to understand the business implications of AI through multidisciplinary approaches and tangible use cases.
Inherent risks to the development of AI
Although AI-powered technologies could address some of our contemporary problems, ESSEC students identify several risks inherent to their development. These concern ethics, decision-making, and the impact of artificial intelligence on work, all of which directly affect our personal and professional lives. Students are concerned about job loss, about biases in data collection and processing, and about the loss of control that comes with replacing human decision-making with AI tools.
The other central strand of discussion about AI's dangers is data security and privacy breaches, with 25% of students believing this is the most severe threat posed by AI. These concerns are not unfounded.
Students therefore see a need to regain control over AI technologies and call for greater transparency in AI applications. To address this, one proposed strategy involves pinpointing specific use cases that entail privacy infringements, paving the way for tailored legislative measures proportionate to the associated risk. Finally, the students were also asked about the obstacles to developing these technologies. Some barriers are technical, such as acquiring quality data or securing the computing power needed to run AI-powered technologies. Others relate to users' and institutions' perceptions of these innovations. About half of the students believe that the main barrier to the development of AI is individuals' fear of losing control of these new technologies and a general lack of confidence in these innovations. While many view regulation as a safeguard to prevent AI from spiralling out of control, students also worry that such regulation could itself become an obstacle to AI's potential. The major challenge facing AI regulators will therefore probably lie in their ability to react quickly to the emergence of new technologies while avoiding slowing down research that could reveal very promising uses in fields such as health and climate change, and in other untapped areas.
Conclusion
On the whole, ESSEC students are optimistic that AI can be a solution to contemporary challenges. They recognize the potential benefits of AI, particularly in the areas of healthcare and climate change. However, they also acknowledge the associated risks, such as the substitution of human decision-making, biases in data processing, and concerns regarding data security and privacy breaches. Students' views on AI business applications span a wide range, including automation and enhanced recommendation algorithms. To prepare for the AI-driven future, education must adapt by incorporating AI knowledge and skills through a multidisciplinary approach and practical use cases. By embracing AI responsibly, businesses and individuals can navigate the complex AI landscape and contribute to positive societal impact.
Tuning in - What AI and user generated content can tell us about consumers
by Raoul Kübler , 27.07.23, with Jeroen Rombouts
Please find the article on this link.
CSR policies in East Asia and Europe: quantifying influences and differences
by Mathilde Bernard, 06.03.22, supervised by Jeroen Rombouts
Why is it that North-East Asia and Europe, two regions composed mainly of developed countries, perform differently on corporate social responsibility (CSR) topics?
Our cross-cultural literature review and data analysis yield the following three insights:
First, North-East Asian companies score significantly lower than their European counterparts. This gap is particularly visible in Environmental and Social scores. However, these are also the two categories where North-East Asia is catching up most rapidly. We predict that North-East Asia will start closing the gap with Europe within five years.
Second, North-East Asian and European companies do not share a common philosophy of corporate and social culture. While Asian companies are more driven by duty and results, European companies tend to be oriented towards personal achievement. In addition, CSR is a subject that is strongly politically guided.
Third, a major problem lies in the frameworks currently used to analyze and value CSR performance, as they are built on Western values and perspectives. This prevents North-East Asian companies from being graded on features that are relevant to their activities and societies.
To conclude, there is a need for a global framework that allows companies' CSR performance to be compared at an international level.
Read the full study here.
Please find the link to the white paper here.
For a downloadable document, click here.
Over the last few years, many organizations have invested substantially in data and analytics. The objective is to become more data-driven and operate like a tech company. Companies willing to go further than symbolically profiling the organization invest in AI to move from descriptive analytics to predictive and prescriptive analytics. This requires a solid data and AI governance program, an IT infrastructure that makes all data readily available in a so-called data lake, and piloting of the organization through a carefully selected portfolio of key performance indicators. It has been widely documented, however, that the most important hurdle is the change to a culture that embraces agility and experimentation. In fact, it is the humans who need reskilling. As a consequence, training programs have been launched, and large organizations can now boast hundreds of use cases created by interdisciplinary teams and shared in internal repositories for further development and innovation. The hard question comes next: what is the return on these huge investments? Why are so few AI use cases in production, and where is the generation of tangible value? There is a gap that needs to be filled, and MLOps brings part of the answer.
Before going into MLOps, let us take a step back. Finding the best project management methodology has always been a brain teaser for the software development community. It started with the waterfall approach, introduced in the 1970s by Winston Royce. This linear approach defines several steps in the software development lifecycle: requirements, analysis, design, coding, testing, and delivery. Each stage must be finished before the next starts, and clients only see the results at the end of the project. This methodology creates a “tunnel of development” between gathering the client requirements and delivering the project. For many years, this linear approach caused tremendous losses in resources: an error in the design stage, or clients changing their minds, required rebooting the development process. Furthermore, engineering teams were clustered by stage (developers for coding, QA teams for testing, and sysadmins for delivery), which created friction and fertile ground for communication errors. This is one of the reasons that led to a new methodology emerging around 2001: the agile approach.
Agile principles have infused the software engineering culture for more than 20 years. They have endowed companies with the ability to adapt to new information rather than following an immutable plan. In a fast-changing business environment, this is more a question of survival than a simple change of methodology. Companies now put customer involvement and iteration at the heart of the software development process. They bring together engineers with complementary skills in teams coordinated by product managers to regularly release pieces of software, gather feedback, and adapt the roadmap accordingly. This was a true revolution, but it was not perfect: there was still a gap between software development and what happens after the software is released, also known as operations. In 2008, Patrick Debois and Andrew Clay Shafer filled this gap with the DevOps (a contraction of development and operations) methodology. By bringing all teams (software developers, QA, and sysadmins) together in both the development and the operations processes, waiting times are reduced and everyone can work more closely to develop better solutions.
Back to today: what can DevOps bring in the era of artificial intelligence? The needs are the same: companies are looking for a methodology to develop and scale AI algorithms, generate value, and reap the benefits of their investments. Data leaders have recently begun to investigate the benefits of the DevOps methodology. However, machine learning and AI algorithms have a peculiarity that drastically differentiates them from traditional software: the data.
Data is everywhere and has become a tremendous source of value for companies. Recent advances in fundamental research and the democratization of machine learning through open-source solutions have made artificial intelligence accessible to all. Data scientists are among the most sought-after profiles in the current job market, as they promise to be the key to unlocking the value of data. But just as software developers needed the DevOps methodology to maximize their productivity and scale software development in controlled and secure environments, data scientists need a framework to develop and scale AI-powered solutions. Since those solutions differ from traditional software, they need to be managed accordingly. It is therefore essential to use DevOps practices, but data leaders also need to acknowledge the singularity of using data within software that makes decisions autonomously. This is where Machine Learning Operationalization (MLOps) comes to the rescue.
MLOps is a set of practices bringing DevOps, machine learning, and data engineering together to deploy and maintain ML systems in production. It is the missing piece that allows organizations to release the value contained in data using artificial intelligence. Through formalization and standardization of processes, MLOps fosters experimentation while also guaranteeing rapid delivery, scaling machine learning solutions beyond their use-case status. Once solutions are in production and consume new data, monitoring predictive performance is key. No single ML solution outperforms all others on every problem, so organizations need to monitor predictive performance in real time. MLOps helps monitor this performance and act when it deteriorates due to concept drift. Automating the collection of algorithms' lifecycle information, that is, tracking what has been recalibrated, by whom, and why, makes it possible to improve the learning process and to report to auditors if required. Hence, accountability and compliance issues can be addressed.
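As a minimal sketch of the monitoring idea above, the snippet below tracks a deployed model's rolling accuracy on live data and raises a drift alert when it degrades. The window size, the 10% threshold, and the use of the first full window as the baseline are illustrative assumptions, not part of any specific MLOps toolkit.

```python
from collections import deque

def make_drift_monitor(window=200, threshold=0.10):
    """Track rolling accuracy of a deployed model and flag concept drift.

    Drift is flagged when rolling accuracy drops more than `threshold`
    below the accuracy observed on the first full window (used here as
    a stand-in for the validation accuracy at deployment time).
    """
    recent = deque(maxlen=window)   # 1 = correct prediction, 0 = incorrect
    baseline = None

    def observe(prediction, actual):
        nonlocal baseline
        recent.append(1 if prediction == actual else 0)
        if len(recent) < window:
            return False                        # not enough data yet
        accuracy = sum(recent) / window
        if baseline is None:
            baseline = accuracy                 # freeze the reference level
            return False
        return accuracy < baseline - threshold  # True => drift alert

    return observe
```

In a real pipeline the alert would feed the recalibration and lifecycle-tracking steps described above rather than just returning a boolean.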
While most data training programs focus on machine learning, statistics, and coding, and work on use cases in a sandbox environment, MLOps principles are not yet covered extensively. Furthermore, business leaders invest in AI without fully understanding how to create an efficient development and operations environment for their data teams. Filling the gap between data and operations is not straightforward. The complexity of ML algorithms, often considered a black box run by data scientists who are supposedly the only ones in the company to understand what they are doing, excludes others from the development process and creates another gap between AI and business.
MLOps does not only concern engineers: every stakeholder of data-based solutions should be involved. The revolution of artificial intelligence is undoubtedly happening now, and all those who intend to be part of it will have a role in creating and running MLOps processes in their organization. Future data leaders should acquire basic MLOps skills in their training programs to remove the harmful and unnecessary boundary between business leaders and engineering teams around data-related topics.
We live in a world of experience. As people are increasingly always on - always connected, they are looking for experiences that are relevant and personalized for them, in the moment. Whether as a consumer, citizen or worker, people expect experiences made for them and delivered instantly. Artificial intelligence enables and accelerates every organization’s ability to shape this new dynamic.
Let’s take a couple of examples. A cruise company, Carnival, is using AI to offer its guests a uniquely personalized experience from before they even board a ship to the moment they disembark. AI acts as a brain that anticipates what guests want and need and then coordinates all Carnival’s people and resources on-board to deliver uniquely personalized experiences for everyone. Or look at the Albert Einstein Hospital in Sao Paulo. Here, AI is managing patient flow from initial consultation through to admission and treatment. The result? Transformational levels of efficiency and improved care.
Transforming experiences everywhere
AI has the potential to transform the end-to-end processes of any organization. It can offer more accurate demand prediction, automate the supply chain and deliver more efficient and personalized customer service. By harvesting and analyzing ever-greater volumes of diverse data from a growing range of sources, AI is completely changing the experience of users, customers, employees and the wider society.
So what do we mean by a great experience? It has a number of key dimensions: personal, trusted, natural, intuitive, predictive, focused, immersive and even beautiful. AI enables these qualities through its ability to deliver personalization at scale. It can tune into and predict individuals’ intentions, preferences and behavior. That enables natural and intuitive experiences, optimized to an individual’s specific context and needs.
AI = outperformance
Organizations exploiting AI’s potential outperform their competitors. Accenture’s France Future Systems Research report shows that the top 10% of companies surveyed were more likely than the bottom 25% of performers to have adopted AI early and to have developed expertise in AI.
Accenture research shows that companies that have effectively embraced AI achieved nearly triple the return on AI investments compared with companies that have yet to embrace the equivalent technology. Organizations deploying AI at scale are seeing transformational change across their business, from demand prediction to automated supply chains and superior customer service.
A new age of customer experience
AI’s impact is already clear for today’s consumers, who expect experiences that are “always on, always me”: personalized, instantaneous and available at all times.
Around the world, AI is helping businesses meet those demands. How? By constantly drawing on and analyzing data from millions of interactions. The resulting insights enable organizations to adapt around the evolving needs of their customers and offer them relevant experiences, in the moment. Avianca Airlines, for example, has developed a chatbot to reduce its response time to customers. Spotify uses AI to tailor music recommendations according to a user’s listening history. McDonald’s and KFC are developing the use of AI to predict orders based on a customer’s previous purchase habits.
And AI doesn’t just improve existing customer services: it also creates entirely new ones. In the beauty industry, for example, Shiseido uses AI to provide its customers with personalized skincare recommendations – all based on a selfie. The consumer uploads a picture to the company’s Optune app and AI does the rest. It examines the picture and combines that analysis with data about the external environment and the individual’s health and mood to create a uniquely personalized experience. L’Oréal employs AI and augmented reality in virtual try-on services to promote makeup and hair color products, using technology from its recently acquired company ModiFace.
A revolution in public services
Businesses are at the forefront of AI development and adoption. But governments are also recognizing AI’s potential to transform the experiences they create and deliver. The US Department of Defense, for example, uses AI to help plan deployments during crises, while NASA employs bots to aid in its finance and procurement processes.
In the near future, AI will improve an even wider range of public services and change how people live. It will help public healthcare practitioners to predict illness and create personalized and preventative treatment plans for citizens. New autonomous mobility solutions could transform public transport infrastructure, making it more efficient and cheaper to run. And AI can help elderly people vulnerable to loneliness to interact and share their experiences.
So in every sector and in every sphere of life AI is changing the art of the possible. Exciting? Yes. Challenging? Undoubtedly. But no-one can avoid the impact of what AI brings. This is no longer an issue for the future. AI is real and happening today.
This article was co-written with Jean-Pierre Bokobza, Senior Managing Director, Accenture
Please find the link to the white paper here.
Over the last few years, the web giants have shown that using data to know your customers is key for developing new products and services and for beating the competition. These companies operate on a digital core, allowing data-augmented and data-driven decision-making, and are highly valued by investors, as their massive market capitalisations show. Even during the COVID-19 pandemic, the tech sector continued growing spectacularly, given the acceleration in digitalising the way we work and interact with others.
Any large, mature company is inspired by the way tech companies operate and dominate. For example, in France, L’Oréal aims to become the top beauty tech company by using artificial intelligence and augmented reality. Since 2018, Carrefour has used the Carrefour-Google Lab to accelerate its digital transformation. Danone and Microsoft launched The AI Factory for Agrifood in 2020. Energy companies like Engie and EDF are pushed by the general public sentiment on climate change to become operationally excellent and greener. Young data talents are hired to help transform the companies and introduce the new data culture.
Predictive mindset
In theory, the smart use of data and the creation of business value make a lot of sense, though in practice traditional companies struggle to become more data-driven. Companies have invested heavily in data infrastructure in recent years, appointed chief data officers, and launched data training programs to convince every employee of the salient features of data and analytics. Consequently, massive amounts of data are stored in the cloud, and the question now is often "what can we do with this?" or "what is the actual return on all these data investments?". To answer such questions, a next step in the company's data maturity process is essential: the step towards becoming more data-informed, data-driven, and operationally excellent. It requires using data to look forward rather than backward, and therefore to make predictions. To put it differently, after storing and categorising data, it is now time to use it for decision-making across all levels of the organisation, rather than in specific pockets. In fact, business decisions always implicitly include predictions, and it is time to make this process more formal and automatic thanks to the use of data.
The predictive paradigm is not only about recommendation algorithms and the like: it also allows for the use of data at the highest executive level to ensure that strategy is implemented. Specifically, the management of forward-looking key performance indicators (KPIs) allows for measuring and tracking the success of the company and setting clear objectives. This in itself generates valuable data that can be correlated with new initiatives to predict their success and gain a deeper understanding of their link with existing operations. To sum up, C-level executives need to start implementing strategy with data rather than strategy for data, so that the company’s operating model can become data-centric in the same way as famous tech companies like Amazon and Alibaba.
Someone once said, “Making predictions is hard, especially about the future”. Predictions are by nature uncertain, and this has to be incorporated when making business decisions, just as financial investors use more information than an asset's average return when deciding whether to buy or sell it. Accurate predictions are obtained by combining various sorts of data, including external sources like weather data in energy applications. Actually producing predictions from data is not a trivial task: it demands talented data scientists, advanced algorithms, and continuous performance monitoring. It is no surprise that the International Data Corporation expects global spending on artificial intelligence to increase from 43 billion EUR to 94 billion by 2024.
Renewables
The International Energy Agency expects renewables to provide 80% of the growth in global electricity demand through 2030. In fact, solar- and wind-energy projects have become less expensive, and interest rates are historically low today. Furthermore, governments are highly supportive, as exemplified by the European Green Deal, whose main ambition is making the EU climate neutral by 2050. Renewable energies are therefore becoming a key strategic goal for energy companies. Technically speaking, renewable energies (in particular wind and solar) are highly intermittent (night, absence of wind), but the number of plants cannot simply be increased to compensate for this lack of production, for economic and environmental reasons. This situation implies two main actions for energy companies such as ENGIE. First, identify the best sites for new installations. Second, get the best performance from existing plants while taking into account operating constraints (noise, for example) to generate a higher volume of electricity.
Data plays an essential role in the success of renewable development because it enables:
· selection of the optimal sites based on topography and weather forecast data;
· the best performance based on technical availability, real-time weather, and measurement data related to environmental constraints (noise);
· optimization of electricity sales by combining production data with data on demand, market prices, and storage capacities.
In terms of return on data investments, renewable energy is a perfect use case where the value from data can be made explicit. These energy sources are heavily equipped with sensors, allowing for predictive maintenance and data monetisation. On the production side, there are no GDPR concerns. Operational excellence in the renewable energy business will be the only way for incumbents to survive. Traditional oil companies such as BP and Total are rapidly transforming themselves and will compete fiercely with the current players in the renewable energy market.
ENGIE — ESSEC
Engie and ESSEC Business School have been working on different cases for three years as part of the Strategic Business Analytics Chair sponsored by Accenture. The Chair’s main objective is to train the next generation of leaders to develop new business strategies, leveraging the numerous applications of advanced analytics. Through a hybrid learning method based on innovation, collaboration and entrepreneurship, the Chair acts as the core of an ecosystem combining data and value creation – from purpose and strategy crafting to transformation, encompassing problem solving, data science & artificial intelligence, culture change and skills development.
Engie is an important part of the Strategic Business Analytics Chair’s ecosystem. In 2021, the Chair students will work on two strategic cases on renewable energy. Their fresh and forward-looking vision on the topics generates innovative ideas and valuable solutions. ESSEC students are particularly interested in working with companies like Engie given its strong environmentally-oriented strategic values. Indeed, the students, being concerned about climate change, prefer to work on business cases that ultimately generate societal value rather than purely commercial cases for e-commerce platforms.
Looking forward
In the future, renewable energy will create many new jobs requiring technical, data, and analytics skills. The EU reports that the solar photovoltaic industry alone already accounted for 81,000 jobs, expected to rise to 175,000 in 2021 and 200,000-300,000 by 2030. Digitalisation and renewable energy go hand in hand and will be an important driver of economic growth. The partnership between Engie and ESSEC will help ensure that young talents are trained and acquire the skills needed to complete the transition to a green society in line with the climate change agreements.
More generally, it is the companies that employ people with the right skills, mindset, and vision that will make the difference. Data is now available, most analytics tools that create value are standard, and computational resources impose few constraints. It is the culture of the company that requires a fundamental change. Those able to attract young “data ready” business graduates will have a competitive edge.
Accenture notes in its Technology Vision 2020 that the tech-clash is a new situation where, on the one hand, people are enthusiastic about technology, data, and artificial intelligence, but on the other hand, they require algorithms to be understandable and fair, and want to know where their personal data is used. This balance will be extremely important in the post-COVID-19 era, where what matters most will be human experiences.
Since the start of the COVID-19 pandemic, companies have had to accelerate their digital transformation. This implies increased investments, so substantial that they require C-level support. The stakes are high for organizations: from accelerating sales to optimizing operational processes, digital technology impacts every aspect of the value chain. While the digital revolution drives an inevitable modernization of companies and raises hopes of value creation, it also poses a major challenge for organizations: data.
Data from transactions, customers, products, and more pervades the daily operations of organizations, constituting a potentially valuable asset but above all a significant governance and management challenge. Organizations must deepen their understanding of these data as part of their transformation.
In the very short term, and in an uncertain time, data becomes more crucial than ever to identify the levers of company performance. Optimizing costs, increasing business revenues, and driving process efficiency are all initiatives based on the availability of relevant data. As decision cycles accelerate, many decision-makers will no longer be able to drive their businesses with approximate and often inaccurate data. Having good data, just in time, has become a pressing necessity. But this prospect seems attainable only if the data heritage is better mastered. This is precisely the purpose of the "Data Footprint" method designed by Kearney and ESSEC. Evaluating the data footprint is now an essential approach to secure investments and increase control over data assets.
The Data Footprint approach introduces a virtuous practice that aims to understand the data heritage, risks, challenges and limits linked to data within organizations. The Data Footprint is an evaluation process based on a 360° analysis of the data required as part of a company initiative steered by the entity in charge of Data Governance.
The aim of the Data Footprint is to assess data assets in order to establish a risk assessment score. Based on multiple dimensions of analysis, such as data quality or security, our method allows a quantified assessment of the data heritage in an organization. Today, the data heritage is still poorly controlled and exploited in many companies. What is the quality level of critical data sets in the organization (e.g., customer and supplier data)? What is the associated level of risk? What is the degree of control and ownership of data in the organization? Decision makers often ask these questions without getting concrete answers based on a structured assessment. The complexity of information systems, combined with the lack of governance, often makes the data equation complex and costly.
The Data Footprint allows companies to get a tangible data assessment across multiple dimensions in order to establish a risk score. The purpose of such a measure is to be able to accurately assess areas of weakness and to monitor data heritage improvements. The approach also allows internal and external benchmarks based on a standardized analysis grid.
The strategy for implementing a Data Footprint should be progressive while focusing on the critical data sets in the context of companies’ major programs, projects or business transformation initiatives.
The approach should involve several collaborators, at least representatives of business lines and IT, who jointly use a score sheet based on the following five dimensions: accessibility and availability, quality, ownership, risks, and identification of future users. The overall score calculated on these five dimensions ranges from 0 to 15; the lower the score, the higher the risk related to the enterprise initiative.
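As an illustration of how such a score sheet might be computed, the sketch below assumes each of the five dimensions is rated on a 0–3 scale (so the totals span the stated 0–15 range) and adds an illustrative risk banding. Both the per-dimension scale and the band thresholds are assumptions for illustration, not details published with the method.

```python
# Hypothetical score sheet: each of the five dimensions is rated 0-3
# by business and IT representatives, so totals range from 0 to 15
# (the lower the total, the higher the risk).
DIMENSIONS = (
    "accessibility_availability",
    "quality",
    "ownership",
    "risks",
    "future_users",
)

def data_footprint_score(ratings):
    """Sum per-dimension ratings into an overall 0-15 score."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    for d in DIMENSIONS:
        if not 0 <= ratings[d] <= 3:
            raise ValueError(f"rating for {d} must be in 0..3")
    return sum(ratings[d] for d in DIMENSIONS)

def risk_level(score):
    """Illustrative banding of the 0-15 score (thresholds assumed)."""
    return "high" if score < 5 else "moderate" if score < 10 else "low"
```

A jointly filled-in ratings dictionary then yields a single comparable number per initiative, which is what makes the internal and external benchmarking mentioned above possible.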
Consider as an example a company specializing in the distribution of electronic equipment to the general public through its distribution network of more than 2,000 stores. As part of its data strategy, the company decides to launch a priority project that deploys a “Customer-centric” approach in order to increase customer value. The objective is to capture a better understanding of customer preferences in order to meet their expectations. The company anticipates a significant potential risk linked to data (availability, quality, etc.) and decides to launch a Data footprint approach.
The total data risk score for this company was less than 5 in the evaluation exercise. On the recommendation of the Chief Data Officer, in agreement with the rest of the team, the decision to launch the project was postponed pending the implementation of a specific data-related action plan. This approach allowed the company to identify a major data risk on this project. Indeed, a rapid launch without prior assessment would likely have led to failure with economic consequences (losses estimated at a few hundred thousand euros). The approach also made it possible to initiate collaborative work around the data over the entire duration of the assessment (one month), avoiding internal misunderstandings about the responsibilities of the various stakeholders (business lines, IT teams, etc.). Finally, a clear action plan could be drawn up, justifying the investment of technical and human resources to upgrade the information system.
For a more technical version of this article or further details on the Data Footprint, please contact:
Reda Gomery, Vice President, A.T. Kearney, Reda.Gomery@kearney.com
Jeroen Rombouts, Professor, Essec Business School, rombouts@essec.edu
Hiring data scientists is not an easy task - and neither is keeping them. The problem lies in a mutual misperception. On the one hand, data scientists receive rigorous training in statistics, machine learning, and coding, and enjoy working in a learning environment where they tackle specific problems. On the other hand, many companies are looking for data talent who ask the right business questions and can rapidly acquire domain knowledge. Over the past few years, business schools have been developing programs to produce "data ready" students and thereby fill an important gap in the data skills job market.
Business school students following a data track in their curriculum are technically solid enough to collaborate with "pure" data scientists, while having built up business acumen through intensive company-based data cases. Before digging into the data, they learn to ask the relevant strategic questions and to project how their solution will create value once it has passed through the data and analytics process. They understand the importance of processes, technology and culture, and they master data storytelling to convince sponsors to scale their solutions.
The learning process for mastering data value creation relies on a strong framework fueled by on-the-ground experimentation. It is slow and requires many iterations and much trial and error. Data graduates appreciate this and want jobs that keep offering challenging data valorisation problems, supervised by inspiring managers. Executives are realising that such supervised, hybrid learning environments are key if their companies are to succeed in becoming data centric. A natural question to ask, then, is: "how do young data graduates see their dream work environment?"
We are the first to have conducted a detailed survey to highlight the main aspirations of young data graduates. The results allow us to understand what fundamentally attracts them, how they see their future career and what matters to them on a daily basis. This also allows us to provide some key insights for executives and HR on how to better attract and retain data talents.
It turns out that data graduates are looking for a job where they can overcome interesting data challenges and complete a variety of tasks; above all, they want to be able to switch between projects so they can upgrade their skills. Variety and transversality are seen as fertile ground, rather than a long focus on a single topic. Unsurprisingly, data graduates consider the consulting environment the most attractive; they are less focused on the business sector in which they could work, and consider large companies as interesting as the tech giants (such as Amazon and Apple), as long as they offer a wide range of projects. Remuneration is not a key argument for young graduates. They are aware of the scarcity of their skills on the market and the associated premium, and they know their remuneration will follow an ascending curve if they develop the right set of skills in their first jobs. More surprisingly, the values or mission of their future employer carry relatively little weight in terms of attractiveness.
In a nutshell, young data graduates seek above all to develop their expertise through various projects, in an agile and friendly working environment, supported by managers who have a strong tech skill set.
These insights give many companies an incentive to review their talent strategies, which often seek to attract students through communication about corporate challenges or HR benefits. Instead, the priority for attracting and keeping data graduates is to develop a culture of continuous learning that allows constant development of their skills, and to organize a two- to three-year career path that lets them work on a succession of different projects with gradually increasing accountability. This can be achieved by building and operating the right ecosystem, going beyond the frontiers of the company and beyond internal silos.
The role of the managers also turns out to be essential for attracting and keeping data graduates. Engaging them early on in the recruitment process allows them to go beyond employer-branding communication and attract students with solid arguments about the reality of their future job. Developing their coaching and collaboration skills and their agility in transforming data into value-driven initiatives will allow them to become the role models they need to be to grow and nurture their teams.
Building those two pillars at the right pace will prove the best way to match the expectations of young data graduates in the long run.
A few key figures
63% of young graduates consider “learning” as the most important professional value
47% consider having varied and interesting tasks the first daily priority for their future job
41% consider “career development opportunities” as their top expectation for HR
83% of young graduates pay no attention to the company's mission when considering their future job
Authors
Fabrice Marque, Executive Director, Essec Business School
Jeroen Rombouts, Professor, Essec Business School
Arnaud Gilberton, CEO, Idoko
Timothy Lê, General Manager, Idoko
We would like to thank Joris Fayard and Kai-Lin Yang for their support in designing the survey and analyzing the data.
Further information can be found on: