Do you use your own products?
We have a program called “Lenovo Powers Lenovo,” which operates in both directions. Within Lenovo, we have internal processes and solutions that work very well. When we identify their potential value, we productize them, turning these solutions into offerings our customers can benefit from. For instance, my team manages our internal hybrid cloud, which spans public and private cloud infrastructure as well as on-premises data centers. Over time, we have developed specialized tools that fill gaps in our requirements for running these operations. When we engaged with our sales teams, we discovered that our customers were facing similar challenges and expressed interest in these solutions.
We develop solutions in-house, and if they align with customer needs, we leverage our proprietary knowledge and intellectual property to package and offer them to customers. This extends to a range of areas, such as our hybrid cloud solutions, TruScale offerings, and AIOps.
We are actively working on environmental sustainability, governance reporting, and our zero-carbon factory initiatives. We will extend these to customers while also integrating them into our own operations. Our “Device as a Service” (DaaS) program is a prime example. The approach works in both directions: it is not solely about productizing what we use internally but also about adopting what we offer externally within our own organization.
This creates a symbiotic relationship: we gain valuable insights and feedback from an internal user base of 60,000 employees who use the same offerings we provide to external customers, such as DaaS or TruScale Infrastructure as a Service. This feedback loop is extremely important to us because it contributes significantly to product improvement. In essence, this bidirectional approach is a core part of our strategy.
How do you balance cost optimization and investing in emerging technologies as a CIO?
I believe it makes sense to approach technology with a business mindset. After all, you don’t have unlimited funds available for technology investments. Therefore, adopting the perspective of a business owner is crucial. When we want to implement something, we create a business case, regardless of the project’s nature.
For instance, if we plan to embark on a transformation project that requires capital expenditure or new capabilities, we always tie the investment to expected gains. This approach remains consistent across technology initiatives. Whenever there’s a technological change, we evaluate its potential impact on productivity, whether in the finance back office or in the sales team’s efficiency, and we track that data over time. Like any other aspect of the business, it’s about striking a balance. Whether it’s one dollar or a million, viewed from the overall company perspective, the goal is to allocate resources to the projects that promise the best returns.
What challenges does the adoption of GenAI face within enterprises?
When it comes to adoption, the primary concern that immediately comes to mind is security. It is like the skepticism surrounding public cloud technology about a decade ago. People questioned whether it could truly be secure. Now, we’ve come to understand that the public cloud can indeed be secure.
With GenAI, however, security is the first and foremost question, both because of the technology’s novelty and because of the unfortunate prevalence of negative incidents. Instances like source code leaks, firmware breaches, or the dissemination of offensive content have garnered significant attention.
Even if you look at some contemporary chat products, their primary disclaimer often reads: “Warning: This may generate false or offensive content.” Now, can you think of any new technology that says it might be bad for you? This is a significant challenge, especially for enterprises where reputation and trust are paramount.
When technology behaves unpredictably or operates without adequate controls, it naturally triggers significant caution among CIOs and business executives. This stands out as their primary concern. Consequently, there’s a growing emphasis on exploring private large language models and ensuring data security and control.
At present, the most significant challenge lies in finding the right balance. On one hand, companies are eager to harness this technology and avoid being left behind by their competitors. On the other hand, they are wary of negative publicity, where the narrative revolves around a loss of control over their technology.
As a result, organizations have a strong drive to establish robust processes, implement advanced technology solutions, and enhance governance to manage this delicate balancing act effectively.
What are your views on the ethics of AI?
This topic has several dimensions, as it’s quite comprehensive. To begin, we should address guidance, which exists at different levels. Primarily, there’s technology guidance: ensuring that we don’t use open models where doing so would mean sharing sensitive enterprise data.
Additionally, we’ve established a Product Diversity Office, which plays a pivotal role, especially in our AI endeavors; it helps us ensure equitable access to our AI solutions. Responsibility itself spans numerous dimensions. A fundamental aspect is the need to eliminate bias from our AI systems, and we approach this meticulously by carefully designing our training processes to avoid unfair biases.
It’s worth noting that in our context, where we primarily provide enterprise technology services and hardware computing, many issues are less sensitive compared to those faced by other industries. Our focus remains on objective and practical use cases, which typically involve well-defined paths. Therefore, the path toward responsible AI implementation is relatively straightforward for many enterprises that don’t have hot-button issues that require moral judgment.
However, I believe it’s crucial, especially from a technological standpoint, to establish certain safeguards. First, a thorough understanding of the data used for training is essential. Implementing techniques for data sampling and quality control is vital because, as the saying goes, “garbage in, garbage out.” Ensuring high-quality inputs is critical because they shape the model’s weights during training.
It’s equally important to monitor the output closely. We must rigorously assess the model’s outputs to the best of our ability because, as I mentioned, the technology can behave unpredictably. So there are certain aspects to consider in both data ingestion and the training process. Depending on your risk tolerance, quality control is essential, especially in sensitive areas. Most enterprises prefer to focus on objective areas where the risk of issues is lower. For example, in the insurance industry, when adjusting claims or underwriting, you must rely on factual information. The key is to stay focused on the facts as much as possible and to perform checks.