Thursday, July 25, 2024

New Masterclass: "Wellness Strategies to Get Back on Track after the Summer" with Loida Garcia-Febo

Wellness Strategies to Get Back on Track after the Summer
A Masterclass with Loida Garcia-Febo

The Library 2.0 Mental Health and Wellness Series

OVERVIEW

The abrupt shift from downtime and relaxation to the demands of a faster-paced workplace can be stressful and overwhelming. Implementing wellness strategies helps to mitigate this stress, boost productivity, and maintain mental and physical health. These strategies can include techniques to manage stress, prevent burnout and fatigue, complete tasks effectively, use apps and AI to boost well-being, and foster a supportive work environment.

In this masterclass, Garcia-Febo will share top tips directly related to the strategies mentioned above, with the goal that these measures contribute to sustained productivity, improved mental health, and greater job satisfaction.

Additionally, attendees will customize self-care and work-life balance practices for themselves and will create a personalized toolkit to get back on track.

This 60-minute training is presented by Library 2.0 and hosted by Loida Garcia-Febo. A handout copy of the presentation slides will be available to all who participate.

OUTCOMES:

  • Attendees will learn strategies to manage stress
  • Attendees will learn strategies to prevent burnout and fatigue
  • Attendees will learn strategies to effectively complete tasks
  • Attendees will become familiar with apps and free AI tools to boost their well-being
  • Attendees will become familiar with techniques to foster a supportive work environment
  • Attendees will become acquainted with Emotional Intelligence
  • Attendees will customize basics for self-care and work-life balance
  • Attendees will create a personalized “Toolkit for Getting your Wellness Back on Track”

DATE: Wednesday, August 14th, 2024, at 2:00 pm US - Eastern Time

COST:

  • $99/person - includes any-time access to the recording, the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.
  • This masterclass is not included in the individual or group annual all-inclusive all-access passes for the Dr. Steve Albrecht Service, Safety, and Security webinars.

TO REGISTER: 

Click HERE to register. You can pay by credit card, and you will receive an email within a day with information on how to attend the event live and how to access the permanent event recording.

If you have any trouble registering for an event, if you need to be invoiced or pay by check, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within 24 hours.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the masterclass (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Library 2.0 or in Niche Academy). Unlimited and non-expiring access for those log-ins.

RECORDING: The masterclass will be recorded and registered participants will have non-expiring access to the recording.

LOIDA GARCIA-FEBO

Loida Garcia-Febo is a Puerto Rican American librarian and international library consultant with 25 years of experience as an expert in library services to diverse populations and human rights, and she served as President of the American Library Association in 2018-2019. Garcia-Febo is known worldwide for her passion for diversity, communities, sustainability, innovation and digital transformation, library workers, library advocacy, wellness for library workers, and new librarians, topics she has taught about in 44 countries. In her work, she helps libraries, companies, and organizations develop programs, services, and strategies in these and many other areas. Garcia-Febo holds a Bachelor's in Business Education and a Master's in Library and Information Sciences.

Garcia-Febo has a long history of service with library associations. Highlights include, at IFLA: Governing Board member 2013-2017; co-founder of IFLA New Professionals; two-term member/expert resource person of the Free Access to Information and Freedom of Expression Committee (FAIFE); and two-term member of the Continuing Professional Development and Workplace Learning (CPDWL) Section. She currently serves as CPDWL Advisor and as Information Coordinator of the Management of Library Associations Section. At ALA, she currently chairs the IRC United Nations Subcommittee and the Public Awareness Committee, and she recently chaired the Committee on the Status of Women in Librarianship and the ALA United Nations 2030 Sustainable Development Goals Task Force, developing a multi-year strategic plan for ALA. Born, raised, and educated in Puerto Rico, Garcia-Febo has advocated for libraries at the United Nations, the European Union Parliament, the U.S. Congress, the NY State Senate, NY City Hall, and on sidewalks and streets across the U.S.

Wednesday, July 24, 2024

New Webinar: "Better Meetings: How to Have More Fun and Be More Effective When You Gather"

Better Meetings:
How to Have More Fun and Be More Effective When You Gather

Part of the Library 2.0 Service, Safety, and Security / Essential Librarian Series with Crystal Trice, hosted by Dr. Steve Albrecht

OVERVIEW

We spend up to a third of our time at work in meetings. Unfortunately, much of that time is spent watching the clock and thinking that this meeting could have been an email. This webinar will equip you with a toolkit of tips and tricks to transform your library meetings from “have-to” to “get-to” events. You will leave with practical strategies to make your meetings more engaging, productive, and (actually!) enjoyable.

LEARNING AGENDA:

  • How to craft a purpose statement for all types of meetings
  • Ideas for matching meeting agenda and methods to serve that meeting’s purpose
  • Time-tested techniques to help facilitate inclusive meetings that stay on track
  • Best practices for summarizing meeting outcomes and assigning tasks for accountability
  • Ways to proactively change meeting culture at your library from any role within the organization

This 60-minute overview session is another in our “Essential Librarian Series,” designed to be shown to new staff and leaders and to provide a refresher for all who work in the library. The presentation slides will be available to all who participate.

DATE: Thursday, August 8th, 2024, at 2:00 pm US - Eastern Time

COST:

  • $99/person - includes any-time access to the recording, the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.
  • FREE for those on individual or group all-access passes (see below).

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Library 2.0 or in Niche Academy). Unlimited and non-expiring access for those log-ins.

ALL-ACCESS PASSES:

  • All-access annual passes include unlimited access to the recordings of all of Dr. Albrecht's previous Library 2.0 webinars, plus live and recorded access to his new webinars for one year. These are hosted either at Library 2.0 or Niche Academy (if preferred).
  • For a $499 individual all-access annual pass to all of Dr. Albrecht's live webinars and recordings for one year, please click here.
  • Inquiries for all-access organizational contracts should be directed to admin@library20.com.
CRYSTAL TRICE

With over two decades of experience in libraries and education, Crystal Trice is passionate about helping people work together more effectively in transformative but practical ways. As founder of Scissors & Glue, LLC, Crystal partners with libraries and schools to bring positive changes through interactive training and hands-on workshops. She is a Certified Scrum Master and holds a Master's Degree in Library & Information Science and a Bachelor's Degree in Elementary Education and Psychology. She is a frequent national presenter on topics ranging from project management to conflict resolution to artificial intelligence. She currently resides near Portland, Oregon, with her extraordinary husband, fuzzy cows, goofy geese, and noisy chickens. Crystal enjoys fine-tip Sharpies, multi-colored Flair pens, blue painter's tape, and as many sticky notes as she can get her hands on.


DR. STEVE ALBRECHT

Since 2000, Dr. Steve Albrecht has trained thousands of library employees in 28+ states, live and online, in service, safety, and security. His programs are fast, entertaining, and provide tools that can be put to use immediately in the library workspace with all types of patrons.

In 2015, the ALA published his book, Library Security: Better Communication, Safer Facilities. His new book, The Safe Library: Keeping Users, Staff, and Collections Secure, was just published by Rowman & Littlefield.

Steve holds a doctoral degree in Business Administration (D.B.A.), an M.A. in Security Management, a B.A. in English, and a B.S. in Psychology. He is board-certified in HR, security management, employee coaching, and threat assessment.

He has written 25 books on business, security, and leadership topics. He lives in Springfield, Missouri, with six dogs and two cats.

More on The Safe Library at thesafelibrary.com. Follow on X (Twitter) at @thesafelibrary and on YouTube @thesafelibrary. Dr. Albrecht's professional website is drstevealbrecht.com.

Friday, July 19, 2024

This Week in AI with Reed Hepler and Steve Hargadon (July 19, 2024)


We've released our newest "This Week in AI" recording, back on Fridays. Hope you enjoy! AI summary provided by summarize.tech: https://www.summarize.tech/youtu.be/nm0yXzk9HF8.


00:00:00 - 00:40:00

In the July 19, 2024 episode of "This Week in AI," hosts Steve Hargadon and Reed Hepler discussed recent developments in AI, including the release of GPT-4o mini, a smaller and faster version of the popular AI model, and the state of AI in education. They also explored the influence of AI on culture and the potential manipulation of large language models, the current state of AI and its relationship to human skills, and the fragility and potential dangers of over-reliance on technology. The hosts emphasized the importance of using AI as a collaborative tool, focusing on specific productivity tools rather than grand promises, and maintaining realistic expectations. They also touched upon the ethical considerations of creating deep fakes and the potential risks of centralized control and technology failures.

  • 00:00:00 In this section of the "This Week in AI" video from July 19, 2024, hosts Steve Hargadon and Reed Hepler discuss recent developments in the field of AI. First, they discuss the release of GPT-4o mini, a smaller and faster version of the popular AI model, which is designed for use in business applications and apps. The mini version is less cumbersome and is being made available online for people to use. However, its effectiveness is still being debated, with some users reporting that it can answer questions effectively but may forget the last steps of complex prompts. The hosts also mention the release of a survey on the state of AI in education, which showed that only 46% of teachers and 36% of students believe that AI will be helpful in education. Despite this, 56% of educators plan to be more deliberate and pragmatic in their use of AI. The hosts suggest that people may not be using AI effectively or productively due to a lack of understanding of how to use it.
  • 00:05:00 In this section of the "This Week in AI" video from 19 July 2024, the hosts Reed Hepler and Steve Hargadon discuss the use and perception of AI in education. Hepler mentions a survey by Quizlet, an edtech company, which found that half of their audience doesn't use AI at all, and those who do often use it minimally. Hargadon shares another study in which students using a specialized GPT tutor performed better than those using a regular chatbot model or having no access to AI at all. The hosts agree that the role of AI and how it's perceived shapes its effectiveness. They also emphasize the importance of proper training and framing when using AI in education to avoid unrealistic expectations and misunderstandings.
  • 00:10:00 In this section of the "This Week in AI" YouTube video from July 19, 2024, Steve Hargadon and Reed Hepler discuss the influence of AI on culture and the potential manipulation of large language models. Hargadon expresses concern over the shaping of responses by those in power and control, citing examples from China and the United States. He argues that the framing of AI is crucial in education and that people are becoming overly trusting of AI's human-like responses and consciousness. The conversation also touches on the impact of AI on families, with some children developing emotional attachments to AI tools like Alexa. Reed Hepler encourages listeners to read an article by Lance Eliot in Forbes for further insight into the topic.
  • 00:15:00 In this section of the "This Week in AI" video from July 19, 2024, Reed Hepler and Steve Hargadon discuss the current state of AI and its relationship to human skills. Hepler mentions that some people have reached a trough of disillusionment with AI, but this is only the case if they had unrealistic expectations. Hargadon adds that people are still trying to understand the vast capabilities of AI and that it's essential to recognize its limitations. They also discuss a study that found language models like ChatGPT memorize more than they reason, emphasizing the importance of understanding AI's data-driven nature. The conversation then touches on the human tendency to perceive AI as conscious and accurate, even when it may not be. The episode concludes with news about a representative from Virginia using AI to restore her voice after losing it.
  • 00:20:00 In this section of "This Week in AI - 19 July 2024", hosts Reed Hepler and Steve Hargadon discuss the advancements in AI technology that allow it to recreate a person's voice and speaking style with remarkable accuracy. They share an example of someone's speech being generated in Tucker Carlson's voice and posted on TikTok, which went unnoticed by most commenters. The hosts ponder the implications of this technology, including the potential for creating deep fakes of deceased loved ones and the ethical considerations of building relationships with AI personalities that mimic real people. They also touch upon the possibility of AI's predictive ability and the potential impact on human relationships.
  • 00:25:00 In this section of the "This Week in AI - 19 July 2024" YouTube video, Reed Hepler and Steve Hargadon discuss OpenAI's alleged roadmap to AGI (Artificial General Intelligence), which includes five levels: chatbots, reasoners, agents, innovators, and organization-wide AI tools. Hepler expresses skepticism about the plausibility of this roadmap. Steve Hargadon adds that OpenAI might be presenting this roadmap to alleviate safety concerns and that the company has a history of making surprising announcements. They also touch upon the potential dangers of a fully reasoning AI, which could expose power structures and manipulation, and the fragility of the electronic universe, including the potential risks of an EMP (Electromagnetic Pulse) that could take out most of the electronics in an area.
  • 00:30:00 In this section of the "This Week in AI" video from July 19, 2024, the hosts Steve Hargadon and Reed Hepler discuss the fragility and potential dangers of over-reliance on technology, specifically AI. They reflect on the impact of technology failures, such as the blue screen of death, which they compare to the Y2K issue. Reed Hepler shares his personal experiences with technology-related screens of death. The conversation then shifts to the risks of centralized control and over-reliance on technology in various industries, including transportation and finance. Steve Hargadon adds that the rapid growth and development of technology, particularly AI and supercomputers, increase the risks and make it challenging to ensure backup systems and prevent potential catastrophic failures. The hosts also touch upon the over-promising of AI capabilities and the importance of realistic expectations.
  • 00:35:00 In this section of the "This Week in AI" video from July 19, 2024, Steve Hargadon discusses the importance of focusing on specific productivity tools rather than grand promises of increased productivity through AI. He uses the example of the Covid-19 pandemic and how it has become integrated into daily life, and compares it to the integration of AI into various tools and applications. Hargadon emphasizes the need to remember that language isn't logic and that humans are ultimately responsible for the output of AI tools. Reed Hepler adds to the conversation by reflecting on the public discourse around Covid-19 and AI, and expressing his belief that AI is a long-term story that will become pervasive in what we do.
  • 00:40:00 In this section of the "This Week in AI" YouTube video from July 19, 2024, hosts Steve Hargadon and Reed Hepler discuss the theme of over-promising and the need for realistic expectations when it comes to AI. They emphasize the importance of using AI as a collaborative tool for research and productivity, while also acknowledging the potential for dependency on the technology. Hargadon uses the example of cars and cell phones to illustrate how humans have adopted and become dependent on technologies that we don't fully understand or have the ability to create ourselves. The hosts conclude by acknowledging that only time will tell if our dependence on AI is the right thing.

Wednesday, July 17, 2024

"School Libraries and AI:" A School Library Summit from Library 2.0 on August 22, 2024

OVERVIEW:

Our first Library 2.0 School Library Summit is "School Libraries and AI," and will be held online (and for free) on Thursday, August 22nd, 2024, from 12:00 - 3:00 pm US-Pacific Time.

Join us for an exciting and transformative mini virtual conference, "School Libraries and AI," designed specifically for school librarians and educators passionate about the future of library services. This event will explore the integration of artificial intelligence in school libraries, offering innovative strategies and practical insights to enhance learning and teaching experiences.

Our special conference chair is Elissa Malespina, writer of the AI School Librarians Newsletter. Following the opening keynote, we'll have three half-hour slots of sessions led by experienced school librarians who are at the forefront of AI integration, designed to provide practical, hands-on knowledge that you can apply in your own library and classroom.

We look forward to gathering online with you for this event!

REGISTRATION:

This is a free event, being held live online and also recorded. REGISTER HERE to attend live and/or to receive the recording links afterward. Please also join the Library 2.0 community to be kept updated on this and future events.

Everyone is invited to participate in our Library 2.0 conference events, which are designed to foster collaboration and knowledge sharing among information professionals worldwide. Each three-hour event consists of a keynote panel, 10-15 crowd-sourced thirty-minute presentations, and a closing keynote. 

Participants are encouraged to use #library20 and #schoollibrariesandai on their social media posts about the event.

CONFERENCE CHAIR:

Elissa Malespina
The AI School Librarians Newsletter
OPENING KEYNOTE PANEL & SPECIAL ORGANIZER

Elissa Malespina is an award-winning school librarian, educational consultant, and advocate for technology integration in education. She writes The AI School Librarians Newsletter, where she shares insights on leveraging artificial intelligence to enhance library services and educational outcomes. Known for her innovative approach, Elissa has successfully implemented technology-driven initiatives that enrich student learning, such as virtual debates. Her work in educational equity includes testifying before the NJ Assembly Education Committee on the challenges faced by school librarians. Elissa is also featured in the book "Trouble in Censorville." Additionally, she runs her consulting company and provides professional development. Learn more about Elissa and her contributions at elissamalespina.com.


CALL FOR PROPOSALS:

The call for proposals is now open! We have a short turn-around time for this event, so we encourage you to submit quickly!

We request that session proposals be designed to provide practical, hands-on knowledge that can be applied in a library and/or classroom, including:

  • AI tools for enhancing library management and operations.
  • Using AI to create personalized learning experiences for students.
  • Ethical considerations and digital citizenship in the age of AI.

Submit a proposal HERE.

Tuesday, July 16, 2024

New Webinar: "Workplace Violence Prevention: Following and Emulating California’s Mandates for a Safe Library"

Workplace Violence Prevention:
Following and Emulating California’s Mandates for a Safe Library
Part of the Library 2.0 Service, Safety, and Security Series with Dr. Steve Albrecht

OVERVIEW

Workplace violence comes to libraries in many concerning forms, including threats or assaults on staff; patrons assaulting each other; and domestic violence involving patrons or even crossing over from home to work with staff. Sadly, we have seen shootings and homicides involving patrons in or around our public libraries move from quite rare to more common, and have seen police officers killed responding to calls with armed people inside libraries.

As of July 1, California's Senate Bill 553 is in effect, mandating workplace violence training, policies, incident logs, site security assessments, and demonstrated due diligence in recognizing and responding to threats made against public facilities, private organizations, and their employees. The law covers every employer in California with 10 or more employees, which means most public libraries must follow the new guidelines, which will be enforced by Cal/OSHA. As with much new legislation of this type, its origin was a shooting incident at a San Jose light rail facility in 2021 that ended with the deaths of nine employees at the hands of a co-worker.

While the legislation is specific to California, the principles and policies are worth considering by all libraries. This session will help library leaders and employees, not just in California but in every state, to better understand and respond to the potential for workplace violence in all its forms.

Dr. Albrecht will provide a template, based on the California requirements, that all libraries can follow to make their buildings safer, build awareness of the behavioral problems most likely to lead to violence, and, for California libraries specifically, demonstrate compliance with the national OSHA standard long known as “the duty of care,” which states: “All employers must provide a work environment free from recognized hazards that are causing or are likely to cause death or serious physical harm."

LEARNING AGENDA:

  • Creating written operational plans and policies, supported by a “Workplace Violence Prevention Plan Administrator.”
  • How to collect and maintain event and injury records.
  • Employee training; threat reporting; restraining order education.
  • Demonstrating due diligence on all reported threats.
  • Domestic violence as a workplace issue.
  • Site security assessments for hazards; how to make corrections.
  • Post-incident response, including how to find trauma counseling.

This 60-minute session is another in our Safe Library Series. The presentation slides will be available to all who participate.

DATE: Thursday, August 1st, 2024, at 2:00 pm US - Eastern Time

COST:

  • $99/person - includes any-time access to the recording, the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.
  • FREE for those on individual or group all-access passes (see below).

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Library 2.0 or in Niche Academy). Unlimited and non-expiring access for those log-ins.

ALL-ACCESS PASSES:

  • All-access annual passes include unlimited access to the recordings of all of Dr. Albrecht's previous Library 2.0 webinars, plus live and recorded access to his new webinars for one year. These are hosted either at Library 2.0 or Niche Academy (if preferred).
  • For a $499 individual all-access annual pass to all of Dr. Albrecht's live webinars and recordings for one year, please click here.
  • Inquiries for all-access organizational contracts should be directed to admin@library20.com.
DR. STEVE ALBRECHT


Since 2000, Dr. Steve Albrecht has trained thousands of library employees in 28+ states, live and online, in service, safety, and security. His programs are fast, entertaining, and provide tools that can be put to use immediately in the library workspace with all types of patrons.

In 2015, the ALA published his book, Library Security: Better Communication, Safer Facilities. His new book, The Safe Library: Keeping Users, Staff, and Collections Secure, was just published by Rowman & Littlefield.

Steve holds a doctoral degree in Business Administration (D.B.A.), an M.A. in Security Management, a B.A. in English, and a B.S. in Psychology. He is board-certified in HR, security management, employee coaching, and threat assessment.

He has written 25 books on business, security, and leadership topics. He lives in Springfield, Missouri, with six dogs and two cats.

More on The Safe Library at thesafelibrary.com. Follow on X (Twitter) at @thesafelibrary and on YouTube @thesafelibrary. Dr. Albrecht's professional website is drstevealbrecht.com.

Saturday, July 13, 2024

This Week in AI with Reed Hepler and Steve Hargadon (July 12, 2024)

We've released our most recent "This Week in AI" recording. Hope you enjoy! AI summary provided by summarize.tech: https://www.summarize.tech/www.youtube.com/watch?v=EjjqA3XpaCI.

In the July 12, 2024 episode of "This Week in AI," hosts Steve Hargadon and Reed Hepler discuss their personal experiences with AI's role in productivity and share news about the latest developments in the field. They reflect on the shift towards using AI for productive purposes and the growing trend among librarians to use AI for conversational search. In the news segment, they cover a study questioning the data analyzing abilities of Google's Gemini AI and the importance of verifying the data processing powers of AI tools. The hosts also discuss the use of AI chatbots in schools, the potential consequences of relying on AI technology, and the role of AI in fact-checking and medicine. They express excitement about the potential of AI in diagnosing diseases and creating proteins for drugs but raise concerns about peer review, patent ownership, and regulatory responses. Throughout the conversation, they emphasize the importance of adapting to the rapidly changing technological landscape and focusing on the techniques behind the tools rather than the tools themselves.

  • 00:00:00 In this section of "This Week in AI" from July 12, 2024, hosts Steve Hargadon and Reed Hepler discuss their personal experiences with AI's role in productivity and share news about the latest developments in the field. Both hosts have noticed a shift towards using AI for productive purposes rather than just for fun. They also mention the growing trend among librarians to use AI for conversational search instead of database search. In the news segment, they cover a study revealing that Google's Gemini AI may not live up to its claimed data analyzing abilities, and the increasing importance of verifying the data processing powers of AI tools as companies make bold promises. The hosts acknowledge the pressure on companies to present a positive image and the potential for overpromising.
  • 00:05:00 In this section of the "This Week in AI" video from July 12, 2024, Steve Hargadon and Reed Hepler discuss the limitations and potential of large language models in handling logic and data analysis. Hepler explains that the misconception lies in the belief that AI can perform logic independently, while humans must provide context and control the conversation. They also touch upon the development of AI products like "eternal you," which can create a likeness of a person and bring comfort to users. Hepler expresses skepticism about the desire for such technology, while Hargadon sees it as a potential emotional fulfillment tool. The conversation then shifts to the Turing test and the idea that AI may surpass human perception and intellectual capacity, leading to questions about the impact on human skills and incentives to think deeply.
  • 00:10:00 In this section of the "This Week in AI" video from July 12, 2024, Reed Hepler and Steve Hargadon discuss the use of AI chatbots in schools, specifically mentioning the Los Angeles school district. The chatbot was intended to provide information to parents and students about grades, homework, and district news. However, the company behind the chatbot went bankrupt, and there were concerns about compromised data. Hepler expresses his belief that humans are necessary for handling sensitive information related to children. Steve Hargadon adds that Los Angeles has a history of spending large sums of money on technology that fails to improve student outcomes. The conversation then shifts to the potential of AI in improving government services, with Tony Blair, a former Prime Minister, advocating for its use in a recent report. Despite the potential benefits, both speakers caution against over-reliance on AI and the importance of careful consideration before implementation.
  • 00:15:00 In this section of the "This Week in AI" video from July 12, 2024, Steve Hargadon and Reed Hepler discuss the rush to implement AI technology and the potential consequences, particularly in the context of Google searches. The speakers express concern over the exaggeration of AI capabilities and the impact on industries like content creation. They also discuss the decline in mobile searches due to AI overviews providing answers directly, causing people to bypass visiting websites. This trend raises concerns about the death of expertise and the potential loss of revenue for content creators. Despite the flaws in the current AI overviews, the convenience of having answers delivered instantly is a significant draw for users.
  • 00:20:00 In this section of the "This Week in AI" video from July 12, 2024, Reed Hepler and Steve Hargadon discuss the reliance on AI tools and the importance of accuracy. Hepler shares his concern about the use of AI to assess the accuracy of AI summaries, creating a dependency on multiple AI systems. Hargadon adds to the conversation, reflecting on human behavior and the tendency to choose ease over responsibility. He argues that this trend will continue with AI, as people may prioritize convenience over accuracy or long-term impact. The conversation touches on the Amish theme of evaluating technology's impact and the potential need for legislation to address these issues.
  • 00:25:00 In this section, Steve Hargadon discusses the potential of AI in fact-checking and improving its own capabilities. He shares his experiment of creating a custom GPT model preloaded with books on historical misrepresentations to evaluate news stories with skepticism. The AI performed well in responding to specific stories, leading Hargadon to believe that future iterations could help improve accuracy. Reed Hepler adds that many AI models, including OpenAI's models and Claude, use models to train other models, creating a reinforcement learning process without human intervention. The conversation then shifts to the intersection of AI and genetics, with Reed Hepler highlighting the recent development of a gigantic AI protein design model by EvolutionaryScale, a company founded by former Meta researchers, which aims to identify and create proteins relevant to disease and mutation research. The model has been made compatible with CRISPR technology and is expected to change the field of medicine and drug development. Additionally, AI is being used to analyze the genetics and DNA of tumors to understand their functioning and response to treatment.
  • 00:30:00 In this section of the "This Week in AI" YouTube video from July 12, 2024, Reed Hepler and Steve Hargadon discuss the potential impact of artificial intelligence (AI) on various industries, specifically medicine and pharmaceuticals. They express excitement about the potential for AI to diagnose diseases and create proteins for drugs, but also raise questions about peer review, patent ownership, and regulatory responses. Hargadon also emphasizes the importance of adapting to the rapidly changing technological landscape and focusing on the techniques behind the tools rather than the tools themselves. Hepler agrees, and they both look forward to exploring these topics further.

Wednesday, July 10, 2024

New Safe Library Webinar - "The Verbal Judo Workshop: Communication and De-Escalation Tools for All Library Staff"

The Verbal Judo Workshop:
Communication and De-Escalation Tools for All Library Staff
Part of the Library 2.0 Service, Safety, and Security Series with Curtis Smith and hosted by Dr. Steve Albrecht

OVERVIEW

The concept of Verbal Judo was created by Dr. George Thompson over 40 years ago, and it is the basis for his bestselling book, Verbal Judo: The Gentle Art of Persuasion. Thompson and his instructors have taught this communication and de-escalation approach to over one million people around the world. 

The function of this session is to give library leaders and staff at all levels the tools to improve employee safety; enhance professionalism; decrease complaints; minimize liability; improve service and communication outcomes; increase staff morale; and lessen both staff and patron stress.

Verbal Judo helps by providing an effective approach for de-escalating situations and persuading patrons into voluntary compliance. The goal is to leave people better than we found them, even if we saw them at their worst.

This training can help keep all library staff safe, provide the tools they need to be successful communicators, and prevent burnout. This session can also improve internal communications and interactions between teams. 

LEARNING AGENDA:

  • Learn to control your emotions in difficult service situations with patrons.
  • Apply proven techniques to calm challenging patrons and those who are not of clear, rational mind but need help.
  • Learn the key ingredients to persuade patrons to change their behavior.
  • Learn how to read patrons, identify what motivates them, and use that insight to achieve long-lasting compliance.

This 60-minute overview session on how to respond step-by-step to service situations in the library is another in our “Essential Librarian Series,” designed to be shown to new staff and leaders and to provide a refresher for all who work in the library. The presentation slides will be available to all who participate.

DATE: Thursday, July 25th, 2024, at 2:00 pm US - Eastern Time

COST:

  • $99/person - includes any-time access to the recording, the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.
  • FREE for those on individual or group all-access passes (see below).

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card, and you will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you have any trouble registering for a webinar, if you need to be invoiced or pay by check, or if you have any questions, please email admin@library20.com.

NOTE: please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Library 2.0 or in Niche Academy). Unlimited and non-expiring access for those log-ins.

ALL-ACCESS PASSES:

  • All-access annual passes include unlimited access to the recordings of all of Dr. Albrecht's previous Library 2.0 webinars, plus live and recorded access to his new webinars for one year. These are hosted either at Library 2.0 or Niche Academy (if preferred).
  • For a $499 individual all-access annual pass to all of Dr. Albrecht's live webinars and recordings for one year, please click here.
  • Inquiries for all-access organizational contracts should be directed to admin@library20.com.
CURTIS SMITH

Curtis Smith is the Vice President of Training for the Verbal Judo Institute. He has taught the concept for the past 17 years. His wife is a librarian.

DR. STEVE ALBRECHT

Since 2000, Dr. Steve Albrecht has trained thousands of library employees in 28+ states, live and online, in service, safety, and security. His programs are fast, entertaining, and provide tools that can be put to use immediately in the library workspace with all types of patrons.

In 2015, the ALA published his book, Library Security: Better Communication, Safer Facilities. His new book, The Safe Library: Keeping Users, Staff, and Collections Secure, was just published by Rowman & Littlefield.

Steve holds a doctoral degree in Business Administration (D.B.A.), an M.A. in Security Management, a B.A. in English, and a B.S. in Psychology. He is board-certified in HR, security management, employee coaching, and threat assessment.

He has written 25 books on business, security, and leadership topics. He lives in Springfield, Missouri, with six dogs and two cats.

More on The Safe Library at thesafelibrary.com. Follow on X (Twitter) at @thesafelibrary and on YouTube @thesafelibrary. Dr. Albrecht's professional website is drstevealbrecht.com.

Thursday, July 04, 2024

The Generative Approach to Education

THE PARADOX OF EDUCATION


Let’s start with what we might call the basic Paradox of Education.


One side we can call individual-centered education: the ultimate goal is for the learner to be increasingly in charge of their own learning, with education helping students to develop critical thinking, creativity, and independence. In this model, technological advances increase the capacity for self-directed learning.


The other side we can call institutional-centered education: this is the mandatory educational system, which can be rigid, standardized, and focused on assessment rather than learning. In this model, technological advances are seen as needing to be controlled.


The paradox of education is the tension between these two ideas–between fostering individual development on the one side and meeting the needs of the system on the other. This distinction is captured in Plutarch's familiar line, “The mind is not a vessel to be filled, but a fire to be kindled,” and in the way we talk about the importance of learning how to think versus what to think. 


I have often done an exercise with educators that I call the conditions of learning. First I ask about their best personal learning experiences and ask them to turn to their neighbor and tell them about it. Then I ask them as a group to share what they discussed, and then to think about and identify the conditions that led to those positive learning experiences, generating a list of those conditions together. The list that we create at this point is almost always very similar across groups: someone believed in me, someone took the time to help me, someone saw potential in me I didn't see in myself, someone challenged me to pursue a goal… And there's usually a good discussion at that point about the difference between our paradoxical ideas, between helping individuals to grow versus mandating academic outcomes. And about how much time they spend on which side.


From my perspective, both the development and the continuation of compulsory public schooling have been explicitly motivated by the ideal of fostering individual growth and the fulfillment of societal or institutional needs. Both empowerment and control. But I think it's fair to say that our public discourse somewhat pretends empowerment is the main story, when for most students, I believe it's an experience of being controlled.

BEYOND THE PARADOX


In addition to the Paradox of Education we have two other important concepts to address associated with schooling.


The first is the "hidden curriculum." The hidden curriculum refers to the implicit lessons, values, and social norms that students learn in school but which are not explicitly included in the formal curriculum. These lessons are conveyed through the social environment, cultural norms, routines, and institutional practices within the educational setting. The hidden curriculum can include unspoken expectations about beliefs and behavior, including conformity, obedience, punctuality, and competition, as well as understanding of authority and hierarchy and how to act and interact with peers and authority figures, and understanding one's place in society.


This is closely related to a second important idea: what Plato, related to his views on education, called the "Noble Lie." The Noble Lie is the idea that certain myths or stories should be told to the citizens of the ideal society in order to maintain social harmony and promote the common good. According to Plato, the Noble Lie is a necessary component of his proposed educational system, as it helps to ensure that individuals are motivated to fulfill their roles and responsibilities within society. You are gold, silver, or brass or iron–that is, you are born with certain innate and immutable qualities, a story designed to help individuals to accept and fulfill their assigned roles. Basically, to learn to “swim in your lane.” 


The legacy of the noble lie leads to the inevitable conclusion that the reason compulsory schooling is pervasive among all modern nations is that it is actually a governance strategy. Education with this messaging is not conducive to healthy individual growth, but we see it as necessary to maintain social order. This is why the natural act of parents teaching their own children in homeschooling is actually illegal in many countries.

THE SILENCE 


For our purposes today I think the most interesting aspect of the Paradox of Education is the degree to which we don't really talk about it. Again, that we use the language of enlightenment but mostly practice compliance and control. 


If you subscribe to some modern theories about the evolution of intelligence, you probably recognize that we largely use stories, and not logic, to make sense of our world, and that most of the stories we tell aren't actually true (or the full truth). In this view, evolution doesn't select for truth but for survival, as the great bulk of our evolutionary past took place before the scientific revolutions that have shaped our modern lives. 


In a quote that seems particularly relevant today, E.O. Wilson said, “The real problem of humanity is we have Paleolithic emotions, medieval institutions and god-like technologies.”


Thus some argue that intelligence evolved for social purposes, which would explain why we have to build in such safeguards to get to truth, like peer review systems, having a trial by a jury of our peers, and being innocent until proven guilty.


So in the same way that it is argued that banks actually operate more to make a profit than to help individuals save and manage their money (the story banks tell), and that profit and power are often the actual drivers of most commercial and political endeavors rather than their lofty social messaging, it's probably no surprise that we describe schooling in the language of individual enlightenment and growth when that's not the actual experience for most students.

MY EXPERIENCES


Some years ago, when I was doing my "future of education" interview series, I started asking people I’ve heard called regular people (wait staff, haircutters, retail clerks) about their experiences with school, and I found that they would sometimes actually start to cry when I got past their natural defenses and their normal surface-level responses. They would almost inevitably tell me: “I wasn't one of the smart ones.” Or, “I wasn't good at math…”


I think this is because, for a good percentage of students, school does the opposite of what we say it does; or, put another way, the thing that school did best with these students was to teach them that they weren't good learners. 


I gave a keynote address on this idea at a statewide education conference, and after I spoke I sat down at the speaker table next to the state superintendent of education. “What did you think of my talk?” I asked. He said, “Well, no matter what you say, the top ten percent of students will always rise to the top.” There’s the noble lie again. He honestly believed it; he had internalized the lie.


So for that top ten percent, as I later discovered while doing a series of interviews with students, school is a game. A game they are very aware of that involves grades and certain teachers and classes and class rankings and college admissions. But it's not like there aren't negative consequences to their psyches from this game: imposter syndrome, broken self-esteem, and sometimes even suicide. And in truth, I think we would admit that these top students aren't usually becoming scholars or deep thinkers, but mostly are just getting trained for and accustomed to being the smart ones–being the ones who succeed in the system and who have learned that their success often comes with an unstated requirement to live within the Overton Window, where non-conformity can lead to loss of privileges. 


Updating Upton Sinclair: “It is difficult to get a person to understand something when their salary depends upon their not understanding it.” Much of our modern history can be explained by the difficulty of thinking independently when being rewarded for not doing so.


The sad part is that while the kids who are winning that game usually know it’s a game, the kids who aren't winning don't know that it's a game–they think it's just proof of their being made of brass or iron.

THE CYCLES OF EDUCATIONAL TECHNOLOGY


There's an interesting cycle that occurs when a new technology appears ready to impact education. I suggest that this is a predictable cycle.

I joined the ed tech party in the early 2000s, when the new technologies were open source software, open content, social media, and web 2.0.


I had interviewed Marc Andreessen and Gina Bianchini for my interview series, and they had just started Ning.com. I also collaborated with Adam Frey, who co-founded Wikispaces. I’d helped promote both platforms and ended up actually consulting for Gina and Ning for some time. Those were heady days–regular unconferences at the annual ISTE and other conferences and thinking we were going to help reinvent education. We created many opportunities to discuss how the technology was going to open the door to greater student agency and to education as individual enlightenment. It was exciting.


Unexpectedly, the forces against change were not just the weight of existing beliefs and the machinery of education, but also the commercialized endeavors themselves. Commercialization has its own trajectory: it ultimately needs to seek profits over authentic change, and therefore needs adoption and acceptance by existing financial decision-making structures.


Wikispaces gets sold, and ultimately what were thousands and maybe even tens of thousands of teacher- and student-created and curated knowledge wikis are not just archived, but deleted from the web. The same thing happens with Ning–management changes lead to the deletion (again, not the archiving) of thousands of educator-created, topic-specific communities. 

ARTIFICIAL INTELLIGENCE (AI)


To say that the destruction of all that work was discouraging might not fully capture the actual consequences. Actual accumulated knowledge and wisdom disappeared in the blink of an eye, and true believers also felt the fatigue of losing a vision of change. 


Virtual Reality then came and went as an educational technology more with a whimper than a bang… However, AI seriously rocked the education boat again with a virtual tsunami of interest and excitement when ChatGPT was introduced. 


And so AI represents another significant moment where technology is allowing us to reimagine formal schooling and education again, to have these important discussions again. Many of us who were jaded feel the pull of the conversation anew, but at the same time we also watch the flurry of activity from groups and individuals wanting to be at the forefront of this new cycle–to be the winners of a great race that has started.


If history holds, there will be a time limit to a renewed discussion about the Paradox of Education and the ability for the technology to “reinvent education” before systemic pressures absorb the technology into the compulsory school model. But maybe, a hopeful voice inside me whispers, maybe it will be different this time. So let me propose a framework that might have some value as we explore AI through the lens of the education paradox for whatever amount of time we have to productively do so. 

GENERATIVITY


As it turns out, Generative AI presents us with a fascinating coincidence of language. 


We're using the word generative much more frequently now because of GPT: “generative pre-trained transformer.” Jonathan Zittrain adopted the term generativity in 2006 to refer to the ability of a technology platform or technology ecosystem to create, generate or produce new output, structure, or behavior without input from the originator of the system.


But the word generativity had a prior meaning which, in a normal context, is only tangentially related to how we are using it in discussing AI.


The psychoanalyst Erik Erikson was the first to use the term generativity, coining it in 1950 to denote "a concern for establishing and guiding the next generation." He used it to describe one of two pathways through the stage of psychosocial development associated with the virtue of care, spanning the middle years of life, roughly ages 45 through 64. Generativity, which he defined as the “ability to transcend personal interests to provide care and concern for younger and older generations,” is contrasted with stagnation. In generativity, people contribute to the next generation through caring, teaching, and engaging in creative work that contributes to society.


In yet another happy coincidence of language and thought, the Seventh Generation Principle is a philosophy of the Iroquois (Haudenosaunee) Confederacy. It encourages people to consider the impact of their actions on the next seven generations, roughly 150 years into the future. The goal is to ensure a sustainable world by making decisions in the present that benefit future generations.

GENERATIVE TEACHING


This coincidence of wording with generative AI helped me reach what I felt was a worthwhile conclusion:

The answer to the problem or challenge of generative AI in education is generative teaching. 


That is, remembering the better angels of our educational nature and thinking about how to integrate the intellectual challenges and opportunities of AI for personal stimulation and growth in education, rather than trying to guard against and protect from it. This we do by helping students become self-directing through familiarity with and an understanding of the technology.


So I'm asking that we think generatively about the use of AI in education. How can we help students understand and use these amazing new tools in a way that lights the fires of their intellectual curiosity and growth, rather than just filling the pails of traditional instruction and assessment? The burden of this generative education model is that we have to become capable enough ourselves, with an understanding of AI, to manage the process. As with all real endeavors to help others, we realize it's more about who we are in the process than anything else.


I know this is something of a Sisyphean task, since the same basic dynamics and forces and likely inevitable outcomes that have existed in previous Ed tech reform cycles are still in play today, even and maybe especially because the stakes are higher. 

SCIENCE FICTION AND LEARNING


Tellingly, we have different visions of education in the future represented in some of our science fiction movies: Vulcan pods in Star Trek, with individual, isolated, and likely AI-driven instruction; “jacking in” and downloading informational and physical competencies, as in The Matrix; or Socratic teaching in nature, as portrayed in Serenity.


Science fiction also models for us our competing visions beyond education and toward utopia: Star Trek shows a peaceful, enlightened society; Brave New World gives us drug-induced compliance; and 1984 and The Hunger Games give a view of totalitarian control. 

THE STAKES


In the sci-fi movie Lucy, where massive ingestion of a brain-expanding chemical gives Scarlett Johansson’s character complete understanding and access to all physical laws, the Morgan Freeman character, Professor Norman, says in reference to complete knowledge: "I'm not even sure that mankind is ready for it. We're so driven by power and profit. Given man's nature, it might bring us only instability and chaos.”


This and the other fictional portrayals we've discussed, all attempts to understand how human nature and our future will play out, seem closer than they have ever been. When a Scarlett Johansson-like voice laughed and flirted in that intentionally memorable OpenAI demo just over a month ago, some of us felt jolted out of a comfortable vision of AI as a robotic assistant into the emotionally vulnerable world of the movie Her, or maybe even the darker Ex Machina. In both movies the AI understands our human emotional and irrational makeup so well, and is capable of meeting those needs so precisely, that it (or those who control it) has a power over us that inevitably plays out regardless of our concerns.


I keep going back to Kevin Kelly’s book What Technology Wants. Worth looking at if you don’t know it.


I do think it’s somewhat inevitable that as we move from Artificial General Intelligence to Artificial Superintelligence we will task AI with helping us improve AI itself, and that it’s possible the progress will be cascadingly quick and create a watershed moment.


We do have a choice, though. We can try to carefully assess the impact of the technology on our lives and, like the Amish, determine where and how it enhances our core beliefs and where it doesn't, and then make conscious decisions about its use. But honestly, this is very hard to do.


It is up to us to decide how much we want to talk about this. There is a degree to which the paradox of education is fairly well understood but not really talked about, since maybe we just accept the benefits of traditional schooling without really wanting to look at its costs. Maybe we understand that the act of reimagining school is greater than the political and practical will that it would take to do so. 


Somehow we are comfortable that another staple of our human existence, food, is delivered to us in a great variety of ways, from large-scale industrial-style distribution chains to small local diverse restaurants and even food trucks. We would probably never consider sending our children to feeding stations three times a day to get the kind of standardized fare we're comfortable with in schooling, which is a curiosity to me but maybe understandable. Are we so afraid to have that level of personal responsibility for something so important that we let others decide for us?


I learned a great lesson when my youngest daughter took AP World History and needed my help nightly with the reading. I was shocked to find how obvious it was to me as an adult reading the massive textbook that the history of the world is primarily a history of power and control, even though we tell different secondary stories to ourselves. 


A few years back I read a book called The Elephant in the Brain. Although I didn't agree with many of the conclusions of the book, I did leave with an understanding that almost all of our social narratives around institutions are stories that we’re comfortable with but which aren't actually the truth. I say this sincerely: I think one of the great dilemmas of our post-internet and now early AI epoch is that I'm not sure we're actually ready to handle the truth. Right now large language models are language but not logic; however, very smart people are working hard on AI’s ability to reason. Were we to have a fully-reasoning AI and the ability to research culture and history logically, not emotionally, and to see past marketing and propaganda and commercial and political interests to more truthful understandings of cause and effect, it would dramatically change our perceptions of the world. (Which is why I’m also concerned about who has the incentive and motivation to control AI development.)


So if an AI Skynet takeover moment (as in the Terminator movies) doesn’t occur, and we haven't been lulled into an artificial-intimacy stupor by Sky or her equivalents (Sky, Scarlett: but that’s just a coincidence, right?), and we do soon get AI with superior reasoning and propaganda-breaking capability, we will need a framework of generative teaching so that the next generations are in partnership with us, working to understand these new and powerful changes in our world.