evidence in support of its response or actions, users can trust it more completely, and can also indicate in their feedback whether the evidence itself makes sense.

End User Enablement

The goal of AI, at its core and across industries, should be to help users optimize the work that they do, whether by solving problems more efficiently or by taking on tactical work so that users can think and operate more strategically.

One way we're using AI at Morgan Stanley is to support our financial advisors in managing their client relationships. On one level, we're doing this by employing AI to automate time-consuming manual tasks. Automating these tasks makes our branch staff considerably more efficient and allows our financial advisors to reinvest their time in building client relationships.

We're also using AI to help our financial advisors build those relationships more strategically through our Next Best Action (NBA) platform. NBA optimizes financial advisors' daily activities with prioritized, instantly actionable recommendations, employing a task-ranking algorithm that lets advisors dedicate their time to the most valuable tasks at any given moment. For example, NBA would notify a financial advisor of a client's life event, such as changing jobs, and prompt the advisor to reach out promptly. Recommendations are ranked by predicted value, the likelihood that the financial advisor and client will act, the client's optimal contact schedule and indicators of potential attrition. NBA's integration with client relationship management applications lets financial advisors execute recommendations at scale and with ease: it takes just a few clicks to initiate bulk client engagement activities, such as emailing cybersecurity recommendations or executing on an investment idea that many clients qualify for. The platform continually learns by tracking which actions financial advisors act on versus ignore, and uses that feedback to improve future suggestions.

These are just a few examples from the wealth management side of our business.

Responsible Use

AI systems should be built with a certain level of self-awareness to ensure that they are only used for tasks they can competently handle and accurately deliver on. To facilitate this, we build algorithms into our systems that continuously score the system's confidence in responding to requests. Confidence scoring helps prevent our AI systems from providing erroneous information to our clients: complex or ambiguous questions receive lower confidence scores, signaling that a human is better suited to handle the task.

Effective and responsible AI also requires vigilant safeguarding of client data. To protect our clients, we maintain robust physical, electronic and procedural safeguards designed to guard client information against misuse or unauthorized access.

While a common myth conflates machine learning (ML) with AI itself, ML is merely the tool that renders systems artificially intelligent. Humans still need to teach the machine how to learn, and ultimately a system is only ever as intelligent as the data that underpins it. Our role and our responsibility in creating successful AI systems thus remain central, ongoing and necessarily human.

Sal Cucchiara
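To make the ranking idea behind Next Best Action concrete, the sketch below shows, in Python, one way prioritized recommendations could be scored from the factors named above: predicted value, likelihood to act, fit with the client's contact schedule and attrition risk. The data model, field names and weights are illustrative assumptions for this sketch, not a description of the production platform.

```python
# Hypothetical sketch of a Next Best Action-style ranking step.
# Factor names and weights are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Recommendation:
    client_id: str
    action: str                # e.g. "client changed jobs - reach out"
    predicted_value: float     # expected value of acting, 0..1
    act_likelihood: float      # estimated chance advisor and client act, 0..1
    contact_fit: float         # fit with the client's optimal contact schedule, 0..1
    attrition_risk: float      # indicator of potential attrition, 0..1


def score(rec: Recommendation) -> float:
    """Blend the ranking factors into one priority score (placeholder weights)."""
    return (0.4 * rec.predicted_value
            + 0.3 * rec.act_likelihood
            + 0.2 * rec.contact_fit
            + 0.1 * rec.attrition_risk)


def rank(recs: list[Recommendation]) -> list[Recommendation]:
    """Order recommendations from highest to lowest priority."""
    return sorted(recs, key=score, reverse=True)


if __name__ == "__main__":
    todays_queue = [
        Recommendation("c1", "client changed jobs - reach out", 0.9, 0.7, 0.8, 0.2),
        Recommendation("c2", "send cybersecurity checklist", 0.4, 0.9, 0.6, 0.1),
    ]
    for rec in rank(todays_queue):
        print(f"{score(rec):.2f}  {rec.client_id}: {rec.action}")
```

In a production setting the weights would themselves be learned rather than fixed, for instance from the feedback loop described above: which recommendations advisors act on versus ignore.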
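The confidence scoring described under Responsible Use amounts to gating a system's answers on a score and escalating low-confidence requests to a person. Below is a minimal sketch, assuming a single fixed threshold and a placeholder escalation routine; both are assumptions of this example, not the actual system.

```python
# Hypothetical sketch of confidence-gated routing with a human handoff.
# The threshold value and escalation path are illustrative assumptions.
from typing import NamedTuple


class Answer(NamedTuple):
    text: str
    confidence: float  # system's self-reported confidence, 0..1


CONFIDENCE_THRESHOLD = 0.75  # placeholder; tuned per use case in practice


def route_to_human(question: str) -> str:
    # Placeholder for a real escalation workflow (ticket, live specialist, etc.).
    return f"Escalated to a human specialist: {question!r}"


def respond(question: str, answer: Answer) -> str:
    """Return the system's answer only when confidence is high enough;
    otherwise hand the request to a human."""
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return answer.text
    # Complex or ambiguous questions score lower and are escalated.
    return route_to_human(question)


if __name__ == "__main__":
    print(respond("What is my account balance?", Answer("Your balance is ...", 0.92)))
    print(respond("Should I restructure my trust?", Answer("...", 0.41)))
```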