When Your Boss Asks for an AI Strategy

I use Hey as my email client. It is a new take on email from 37signals, the developers of Basecamp. They've also released a new approach to calendars that I'm trying.

They have also added a newsletter feature to the app, called Hey World, that is not as obnoxious about collecting data as MailChimp or Constant Contact. I publish there occasionally and send a newsletter. You can sign up for the newsletter by clicking the envelope on the web page or by clicking this link.

My last email plays with the question of what happens when your boss calls you and asks for an AI strategy.

You can help support my work at Buy Me a Coffee.

Emory Institute Harnesses AI To Improve Health

Despite all the hype, augmented intelligence (also known as artificial intelligence, or AI) is, and long has been, a real part of programming, and its power continues to grow, just as all technology does. Two areas ripe for improvement through AI are education and healthcare. This news relates to healthcare.

Emory University is embarking on a new initiative that will unite the power of machine learning and big data to transform the ways in which health care systems prevent, diagnose, treat and cure diseases on a global scale.

Launching this month under the umbrella of Emory’s AI.Humanity initiative, the Emory Empathetic AI for Health Institute will utilize artificial intelligence (AI) and computing power to discern patterns in vast amounts of data and make predictions that improve patient health outcomes in diseases such as lung, prostate and breast cancer, heart disease, diabetes and more. While AI is already being deployed to improve diagnoses and treatment for numerous health conditions, the resounding impact AI can have on health care is just beginning.

As Georgia’s first institute of its kind, Emory AI.Health will foster the development of accessible, cost-effective and equitable AI tools by developing an ecosystem of multidisciplinary experts from Emory, the Atlanta VA Medical Center, the Georgia Institute of Technology and others, and seeking public-private partnerships to propel new research forward. It will then serve as an engine to deploy those tools to the patient’s bedside, initially within Emory Healthcare and ultimately across the globe.

“AI will transform society and at Emory, we want to use these powerful technologies to save and improve lives,” says Emory President Gregory L. Fenves. “We see the power AI has to facilitate healing while improving equitable access to health care. Dr. Madabhushi is a trailblazer in health-focused AI and the ideal person to lead the Empathetic AI for Health Institute.” 

Emory AI.Health will be led by Anant Madabhushi, PhD, a Robert W. Woodruff professor in the Wallace H. Coulter Department of Biomedical Engineering at Emory and Georgia Institute of Technology, a member of the Cancer Immunology research program at Winship Cancer Institute and a research career scientist with the Atlanta VA Medical Center.

A Peek Under the Covers of ChatGPT and Similar AI Models

This may sound surprising (although it shouldn't): the general media promotes a lot of hype, dire warnings, and smoke and mirrors about large language models (LLMs), the latest type of augmented (artificial) intelligence. Don't you think that even a peek into the math and technology would give you a better grip on the risks and rewards?

I have just the book for you. The publicists sent a review copy. I was fascinated.

More than a Chatbot by Mascha Kurpicz-Briki, a professor of data engineering, enables readers to understand and take part in the exciting development of powerful text processing and generation tools.

Mascha Kurpicz-Briki is a professor of data engineering at the Bern University of Applied Sciences in Biel, Switzerland, and co-leader of the research group Applied Machine Intelligence.

In particular, the book discusses the following questions: How did the field of automated text processing and generation evolve over recent years, and what happened to enable the incredible recent advances? Do chatbots such as ChatGPT or Bard truly understand humans? What pitfalls exist, and how are societal stereotypes reflected in such models? What is the potential of this technology, and what will the digital society of the future look like in terms of human-chatbot collaboration?

The book is aimed at a general audience, briefly explaining mathematical or technical background when necessary. After reading it, you will be confident enough to participate in public discussions about how this new generation of language models will impact society, aware of the risks and pitfalls these technologies can bring, and prepared to use AI-based tools responsibly.

FTC to Host Virtual Summit on Artificial Intelligence

For all you Artificial Intelligence buffs out there, here is a chance to learn a little about what influential people are thinking. This January 25 event will focus on ways to protect consumers and competition.

The Federal Trade Commission’s Office of Technology is hosting a virtual tech summit on January 25, 2024 that will bring together a diverse group of stakeholders to discuss key developments in the rapidly evolving field of artificial intelligence (AI), looking across the layers of technology related to AI.

The summit will bring together representatives from academia, industry, civil society organizations, and government to discuss the state of technology, emerging market trends, and real-world impacts of AI. The discussions will also explore how to cultivate a marketplace that allows both consumers and businesses, including startups and small businesses, to thrive.

FTC Chair Lina M. Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya will provide remarks at the summit. The event will also feature three panel discussions. These include discussions on the hardware and other key infrastructure that will be needed for AI development; issues related to the data and models used in AI; and AI-powered consumer applications.

The summit will begin at noon and take place online. The tentative agenda is available on the event website. Information on how to participate will also be posted to the event page soon.

The Federal Trade Commission works to promote competition, and protect and educate consumers. You can learn more about consumer topics and report scams, fraud, and bad business practices online at ReportFraud.ftc.gov.

IT Infrastructure Integration Company Offers AI Implementation Tips

Indiana-based Matrix Integration has been asserting itself as the AI integration partner of choice for your IT needs. I get a lot of these sorts of releases. This one seems to offer a few quality tips.

“We have been leveraging AI tools in our strategic partner software suites for clients for several years. Customers turn to us for support in fine-tuning the automation capabilities within these suites to make critical decisions in their infrastructure,” said Tim Pritchett, engineer operations manager at Matrix Integration. “As time and resources continue to crunch in maintaining your IT systems and security, AI tools can be leveraged to protect your data and get the most benefit out of what you already own.”

As AI becomes a more common built-in component of many managed software suites, here are the top three issues businesses should consider:

  • Data quality matters. Whether businesses are using AI to generate content (such as drafting communications with customers) or to analyze production efficiencies, high-quality data is necessary to train AI models. Already, biased inputs in large language models like ChatGPT have led to biased outputs that could damage a company's reputation on a large scale. In the case of data analysis, inaccurate or damaged data fed to an AI model will lead to unusable outputs.
  • Data security isn’t guaranteed.  Companies will need to consider how they will secure their own data, as well as data supplied by clients. This requires asking questions and developing transparency and trust with cloud services providers as well as AI vendors. For example, many businesses provide customer-facing chatbots run by AI. For example, imagine that customers type sensitive or personal data (e.g., bank account numbers) into a chatbot. Or, as another example, a business supplies internal data to AI models to generate proprietary operations solutions. Is that data safe once it gets uploaded into a cloud-based AI application? Can it be used by other customers of that AI vendor?
  • Humans are key for AI to work properly. Right now, much of AI seems to be a "black box": most people understand the inputs and outputs but are unfamiliar with how the learning algorithms work and how they handle data. For example, Microsoft 365 security tools such as Defender, Sentinel, and the Purview compliance portal all do an excellent job of leveraging AI to make decisions and inform IT administrators of the best actions to take in a given scenario. However, experienced security professionals still play a key role by fine-tuning these notifications and building automation for these tools.
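
To make the data security point concrete, here is a minimal sketch in Python of one way a business might scrub obviously sensitive values from a chat message before it ever reaches a cloud-based AI vendor. The regular expressions and the redact_sensitive helper are hypothetical illustrations, not part of any vendor's product or API; a production system would rely on a dedicated PII-detection service and patterns tuned to its own data.

```python
import re

# Hypothetical patterns for the kinds of sensitive values mentioned above
# (bank or card numbers, US Social Security numbers). Real deployments
# would tune these or use a dedicated PII-detection service.
SENSITIVE_PATTERNS = {
    "card_or_account_number": re.compile(r"\b\d{12,19}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_sensitive(text: str) -> str:
    """Replace likely sensitive values with placeholders before the text
    is forwarded to any cloud-based AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    message = "My account number is 123456789012, can you check my balance?"
    safe_message = redact_sensitive(message)
    # Only safe_message would be sent on to the external AI vendor.
    print(safe_message)
```

The point of the design is simply that redaction happens on the company's side of the boundary, so the external service only ever sees placeholders rather than the raw sensitive values.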
