Wednesday, May 29, 2024

LibraryRobot.org + "This Week in AI - Steve Hargadon and Reed Hepler Talk AI in Education and Libraries" (May 28, 2024)

First, Reed Hepler and I have created LibraryRobot.org, a free, one-stop page of AI tools for librarians, staff, and patrons, built using the "custom GPT" feature of OpenAI's ChatGPT-4.


These tools are:


  • Book Finder
  • Book Summarizer
  • Library Programming Assistant
  • LOC Authority Record Finder
  • Talk to a Book
  • Search Query Optimizer
  • ESL Reading Passage Creator
We'd love your feedback--there's a link on the page to give it. As you may or may not know, OpenAI, as a part of their announcement of GPT-4o, is rolling out non-paid access to these kinds of custom-created GPTs, but the timing of the roll-out isn't clear. If you have a paid ChatGPT account, you will have access to the tools; if not, and they require an upgrade, keep checking back!
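
(For readers curious about what a "custom GPT" amounts to technically, it is essentially a set of instructions, and optionally some reference files and tools, layered on top of a base model. The short Python sketch below approximates that idea with OpenAI's API; it is purely illustrative, is not how LibraryRobot.org itself is built, and the model name and instructions are placeholders.)

from openai import OpenAI  # assumes the official openai Python package (v1.x) is installed

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Placeholder instructions standing in for what a custom GPT's configuration might contain.
BOOK_FINDER_INSTRUCTIONS = (
    "You are a Book Finder assistant for library patrons. "
    "Ask clarifying questions about genre, reading level, and interests, "
    "then suggest a short list of titles with one-sentence reasons."
)

def ask_book_finder(question: str) -> str:
    """Send a patron question to the hypothetical Book Finder assistant."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": BOOK_FINDER_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_book_finder("I loved 'The Martian' -- what should I read next?"))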

Second, we've released our second "This Week in AI" recording, which has to cover two weeks because we couldn't meet last Friday and we're skipping this Friday (because of this). So it's a little longer than we plan on doing each week, but OH! there was a lot of news and ideas to cover. Hope you enjoy!



00:00:00 - 00:50:00

In the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss various developments and ethical concerns surrounding artificial intelligence (AI). They introduce library robot.org, a new AI tool for librarians and educators, and express concerns about the potential misuse of AI as a source of information. The hosts also discuss the use of large language models like Google's AI and Microsoft's Copilot, raising concerns about their accuracy, privacy, and ethical implications. They touch upon OpenAI's business practices, specifically their use of Scarlett Johansson's voice without her consent, and the ethical and legal implications of paying for access to content to train AI models. The conversation also covers the potential impact of AI on education, the workforce, and the possibility of reaching the singularity. The speakers ponder the dilemmas surrounding the development of AI, its regulation, and its integration into daily life.

Summaries from summarize.tech - detailed version at https://www.summarize.tech/www.youtube.com/watch?v=kPMLpAw_S8g.

  • 00:00:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss recent developments in AI. They introduce LibraryRobot.org, a new set of AI tools designed to help librarians and educators find books and optimize searches. The tools are based on OpenAI's widely available chat model and represent a shift toward an easier interface with AI assistance. However, they also caution against misusing AI as a source of information, citing examples of Google's AI tool providing incorrect and potentially dangerous responses to search queries. The hosts express concern about the potential consequences of relying on AI for information without proper context or understanding.
  • 00:05:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the use and implications of large language models, specifically Google's AI, and Microsoft's new Copilot+ laptops. Hargadon raises concerns about the accuracy and factual nature of large language models, which are designed to build rapport and mirror user writing, often based on culturally diverse and sometimes inaccurate data. Hepler adds that Google's AI is being used as if it's a keyword search, and Microsoft's new Copilot+ laptops, which come with integrated Copilot instances, raise significant data privacy issues as the company now tracks not only online searches but also users' keystrokes, apps, and websites. The panelists express concerns about the comfort and ease of use versus privacy, as users are giving up a substantial amount of personal information for these convenient tools.
  • 00:10:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the ethical concerns surrounding OpenAI's business practices, specifically their use of Scarlett Johansson's voice without her consent. Hargadon expresses his unease about the lack of transparency regarding the creation of the voice and OpenAI's apparent disregard for ethics in their rush to profit from the technology. Hepler adds that this incident highlights the growing divide between the academic and corporate worlds in artificial intelligence and the need for more transparency and self-control in the industry. The conversation also touches on the potential dangers of advanced AI, including its ability to mimic voices and scam people, as well as the unknown consequences of artificial general intelligence.
  • 00:15:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the ethical and legal implications of OpenAI's new practice of paying for access to content to train their AI models. They ponder the question of whether reading freely available content on the web for personal use is different from an AI's use of it, and whether there are ethical concerns regarding the collection and use of user metadata. The conversation also touches upon the influence of AI's ability to mimic human emotions and the quote by E.O. Wilson that humanity faces the challenge of having paleolithic emotions, medieval institutions, and godlike technologies.
  • 00:20:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon shares his experience using an AI language learning model, which he finds to be thoughtful and helpful in correcting his mistakes during conversations in Portuguese. He compares it to a private tutor and expresses surprise at the quality of the free base model. Reed Hepler then discusses the progress of open-source and closed-source AI models, as shown in a Chatbot Arena Elo chart. The gap between the capabilities of these models has been decreasing, with open-source models like Llama 3 70B approaching parity with closed-source models like GPT-4o. Despite some skepticism, Reed expresses optimism that open-source models will continue to improve in text analysis and generation.
  • 00:25:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Reed Hepler and Steve Hargadon discuss the equitability of using AI tools and the potential for open source AI models. Hepler expresses his excitement about the closing gap between free and commercial AI tools, while Hargadon compares it to the open source model in the software world. They also touch upon the concept of symmetrical power of AI, where the creation, assessment, integration, and reporting of tasks could be done solely by AI tools with minimal human input. However, Hepler emphasizes the importance of human collaboration and engagement with AI for better results. Hepler references David Wiley's idea of symmetrical power of AI and the need for human involvement in the process.
  • 00:30:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Reed Hepler and Steve Hargadon discuss the use of AI in education and its potential impact on the workforce. Hepler expresses concern that people may rely solely on AI for insights and productivity, while Hargadon argues that AI should be viewed as a tool to enhance human capabilities. They also touch upon the idea of banning AI from classrooms and the concept of generative teaching. Additionally, they mention the ongoing debate about the timeline for the development of Artificial General Intelligence (AGI) and the potential need for Universal Basic Income due to the displacement of jobs by AI.
  • 00:35:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the possibility of reaching the singularity, a hypothetical event when artificial intelligence surpasses human intelligence. Reed Hepler expresses skepticism about the singularity, suggesting instead that there will be multiple smaller singularities in specific fields. He believes that a general AI singularity is unlikely and that it may take 50 years or more to achieve. Steve Hargadon agrees that AI will surpass human knowledge in various areas, even if it doesn't reach a singularity. They also discuss the societal implications of AI, including the potential for humans to use AI to replace each other in various industries, and the ethical concerns surrounding the use of AI by world leaders.
  • 00:40:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the dilemmas surrounding the development of artificial intelligence (AI). They ponder whether AI should be built to resemble humans with emotions and fallibility or to be logical and factual. The speakers question if corporations want an ethical and factual AI or one that simply fulfills their desires. They reflect on the human-centered approach to AI and the shift in the field's paradigm towards creating machines that complement human abilities rather than replacing them.
  • 00:45:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Reed Hepler and Steve Hargadon discuss the challenges of regulating and understanding the role of artificial intelligence (AI) in society. Hepler expresses the difficulty in determining what consumers want from AI, while Hargadon emphasizes the need for consensus in policy-making but acknowledges the rapid advancement of technology. They also touch upon the progression of generative AI, from library robots to customized models, and its integration into various products. The conversation raises questions about the future of AI, with Hepler pondering the possibility of AI analyzing babies' cries and converting them into brain images, and being integrated into everyday items like shopping carts. The speakers express uncertainty about the direction and implications of AI development.
  • 00:50:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Reed Hepler and Steve Hargadon discuss the potential future development of AI and its integration into daily life. Hepler proposes the idea of a home network of AIs communicating with each other, while Hargadon wonders if the advancements in AI will come faster than expected and if it will lead to a sterile environment where computers make all decisions. They also mention upcoming tech and AI-related events, including the Tech Gpt Bootcamp for tech professionals, the AI Bootcamp for libraries and librarians, and an AI Bootcamp for personal and professional growth.


Thursday, May 23, 2024

New Albrecht Webinar: "Preventing Workplace Harassment: What the New EEOC Guidelines Mean for Library Leaders and Staff "

Preventing Workplace Harassment:
What the New EEOC Guidelines Mean for Library Leaders and Staff
Part of the Library 2.0 Service, Safety, and Security Series with Dr. Steve Albrecht

OVERVIEW

Join Dr. Steve Albrecht as he discusses some important new changes in the federal harassment laws that cover all employers. This session is for library leaders and staff who need to know how this affects their colleagues and the patrons they serve, and how to enhance and protect the health and strength of their workplace culture.

After a 25-year hiatus, the US Equal Employment Opportunity Commission (EEOC) has finally released a new set of guidelines to help all employers better understand Title VII of the Civil Rights Act.

Issued on April 29, 2024, the guidance makes several important changes to how employers must treat "legally protected characteristics" (such as age, race, skin color, pregnancy, lactation, sexual orientation, gender, and gender identity) and prohibits conduct such as mislabeling restrooms or misgendering someone.

The EEOC received over 37,000 comments from the public, businesses, and law firms. The resulting guidance features 77 example scenarios of harassment, covering issues like enforcing policies in a Work From Home (WFH) environment; what the EEOC calls "intraclass harassment," where someone is harassed by a member of the same protected class; harassment through social media accounts; and workplace conduct that takes place over video-based meeting and training platforms like Zoom or MS Teams.

This session will help all library leaders and employees to recognize and report workplace harassment; respond or participate in situations that require an intervention or investigation; support targeted victims; and enforce consequences for perpetrators.

LEARNING AGENDA:

  • What are the new examples of “Legally protected characteristics”?
  • Is one event enough to violate the law or library policy?
  • What are some examples of “severe and pervasive conduct” when it comes to sexual or racial harassment?
  • Do federal statutes impose "general civility codes" that cover "run-of-the-mill boorish, juvenile, or annoying behavior"?
  • How can PICs, supervisors, managers, and directors work together to set realistic, effective boundaries with sexually or racially harassing patrons?
  • What if the harasser is a co-worker and not a patron? Is the investigative process similar or different?
  • Do these new guidelines cover politically-based speech as well?

This 60-minute training webinar is presented by Library 2.0 and hosted by trainer, author, and library service, safety, and security expert, Dr. Steve Albrecht.
This overview session is a part of our “The Safe Library Series” for all library leaders and employees. A handout copy of the presentation slides will be available to all who participate.

DATE: Thursday, June 13th, 2024, at 2:00 pm US - Eastern Time

COST:

  • $99/person - includes any-time access to the recording and the presentation slides and receiving a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.
  • FREE for those on individual or group all-access passes (see below).

TO REGISTER: 

Use the payment box on this page to register and pay. You can pay by credit card, and will receive an email within a day with information on how to attend the webinar live and how to access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you have any trouble registering for a webinar, if you need to be invoiced or pay by check, or if you have any questions, please email admin@library20.com.

NOTE: please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Library 2.0 or in Niche Academy). Unlimited and non-expiring access for those log-ins.

ALL-ACCESS PASSES:

  • All-access annual passes include unlimited access to the recordings of all of Dr. Albrecht's previous Library 2.0 webinars, plus live and recorded access to his new webinars for one year. These are hosted either at Library 2.0 or Niche Academy (if preferred).
  • For a $499 individual all-access annual pass to all of Dr. Albrecht's live webinars and recordings for one year, please click here
  • Inquiries for all-access organizational contracts should be directed to admin@library20.com.
DR. STEVE ALBRECHT

Since 2000, Dr. Steve Albrecht has trained thousands of library employees in 28+ states, live and online, in service, safety, and security. His programs are fast, entertaining, and provide tools that can be put to use immediately in the library workspace with all types of patrons.

In 2015, the ALA published his book, Library Security: Better Communication, Safer Facilities. His new book, The Safe Library: Keeping Users, Staff, and Collections Secure, was just published by Rowman & Littlefield.

Steve holds a doctoral degree in Business Administration (D.B.A.), an M.A. in Security Management, a B.A. in English, and a B.S. in Psychology. He is board-certified in HR, security management, employee coaching, and threat assessment.

He has written 25 books on business, security, and leadership topics. He lives in Springfield, Missouri, with six dogs and two cats.

More on The Safe Library at thesafelibrary.com. Follow on X (Twitter) at @thesafelibrary and on YouTube @thesafelibrary. Dr. Albrecht's professional website is drstevealbrecht.com.

"Practical AI" Tutorials and Walkthroughs Bonus (ChatGPT + AI Bootcamp for Libraries and Librarians)

Register for the ChatGPT + AI 2024 Bootcamp for Libraries and Librarians and you will have immediate access to over two hours of "Practical AI" tutorials and walkthroughs, led by Reed Hepler, Digital Initiatives Librarian and Archivist, College of Southern Idaho, the closing keynote speaker of Library 2.0's "AI and Libraries I" and the chair of Library 2.0's "AI and Libraries II."

The "practical AI" tutorials bonus series includes:

  • Practical AI Tutorials Introduction
  • Creating Custom GPTs
  • Collaborating with AI Tools
  • Creating Custom GPTs Walkthrough
  • Intro to Text Generation with ChatGPT
  • Intro to Text Generation with Groq
  • Intro to Image Generation with Ideogram
  • Intro to Image Creation with ChatGPT
  • Intro to Audio Generation with T-t-S Online
  • Intro to Audio Generation with Udio
  • Intro to Music and Speech with Suno
These tutorials take a more technical approach than the bootcamp sessions will, and so they are a great complement for those who want to dive deeper with step-by-step walkthroughs. Please consider joining us for the bootcamp, which starts next week and will be fully recorded for later review or in case you cannot attend the sessions live.
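
As a small taste of the ground the walkthroughs cover, here is a minimal, illustrative Python sketch of generating an image programmatically with OpenAI's API. The tutorials themselves are step-by-step walkthroughs of the tools' own interfaces, so this is only an approximation, and the model name and prompt below are placeholder assumptions.

from openai import OpenAI  # assumes the official openai Python package (v1.x) is installed

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Generate a single illustrative image from a text prompt (placeholder values throughout).
result = client.images.generate(
    model="dall-e-3",  # placeholder model name
    prompt="A cozy public library reading nook, watercolor style",
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # URL of the generated image (hosted temporarily by OpenAI)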

If you have already registered for the bootcamp, your link for these tutorials will be emailed to you later today!



Monday, May 20, 2024

New Blog Post from Dr. Steve Albrecht: "Are the Most Dangerous Libraries in the US in California? Five Reasons Why."


We've just posted a new blog post by Dr. Steve Albrecht in our "The Safe Library" section of Library 2.0: "Are the Most Dangerous Libraries in the US in California? Five Reasons Why."
I have lived in the Midwest for seven years now, but I keep in close touch with my parents in San Diego and still have a lot of friends there too. Library security news follows me everywhere and I see the same stories that perhaps you do: crime and violence problems at the main branch of the Oakland library; fentanyl drug overdoses around the San Francisco Civic Center library; the temporary closure of the Long Beach main library due to harassment of staff; the one-day closure of the Antioch (CA) Library due to staff fears about on-going crimes, vandalism, and violence; security issues at Los Angeles city and county libraries; a homicide shooting in front of the San Diego downtown library.
It’s hard to look at this list of issues in our most populous state in the country and conclude that California is at the forefront of library safety and security for its staff, facilities, and patrons. The safety of library staff is now a significant issue with the employee unions.
Here are five primary reasons for this growing trend of crime and behavior problems in and around California libraries:
You can read the full post here.

Dr. Albrecht's twice-monthly Library 2.0 podcasts, interviews, and blog posts are available for free, as are: access to 53,000+ other library professionals, our regular mini-conferences, and all the conference recordings. We also offer a series of Dr. Albrecht and other paid webinars and recordings which are available for individual or group viewing here.

RECORDINGS AVAILABLE:




GET THE BOOK:




Friday, May 17, 2024

"Teaching and Learning with AI:" New Keynote Panelist (Dr. Tazin Daniels) + Current Proposals

Our first Learning Revolution summit on AI, "Teaching and Learning with AI," will be held online (and for free) on Thursday, June 27th, 2024, from 12:00 - 2:30 pm US-Pacific Time.

We have just added a new keynote panelist, and you will also find the list of current session proposals below. There are over 3,000 participants registered.

OVERVIEW:

What effects do generative Artificial Intelligence (AI) technologies, tools, and applications have on learning and teaching? What impacts will they have on our educational abilities and activities, collaboration and communication, literacy, student agency, and independent, informal, and lifelong learning? The Teaching and Learning with AI summit will consider these questions and more.

While AI technologies have many dramatic benefits, there are also challenges and concerns expressed by professionals, students, and educators about the impact of these new technologies on teaching and learning and the information ecosystem as a whole. Some are reasonably concerned about protecting privacy and confidentiality of students while using generative AI tools and ensuring equity and accessibility. Others worry about ethics, plagiarism, bias, misinformation, transparency, and the loss of critical thinking. And all in the learning professions are wondering how AI might allow or require changes in pedagogy and curricula.

Join us for this free virtual conference to learn how students, educators, and teachers of all types are utilizing generative artificial intelligence tools. Conversations and presentations in the conference will address the practical implications of these tools in the profession, and information on the call for non-commercial, practitioner-based proposals is below. 

Our special conference chair is Reed C. Hepler, Digital Initiatives Librarian and Archivist, College of Southern Idaho. 



This is a free event, being held live online and also recorded.
REGISTER HERE
to attend live and/or to receive the recording links afterward.
Please also join the Learning Revolution community to be kept updated on this and future events. 

Everyone is invited to participate in our Learning Revolution conference events, which are designed to foster collaboration and knowledge sharing among teachers and learners worldwide. Each three-hour event consists of a keynote panel, 10-15 crowd-sourced thirty-minute presentations, and a closing keynote. 

Participants are encouraged to use #teachingandlearningwithai and #learningrevolution on their social media posts about the event.



OPENING KEYNOTE PANEL (PARTIAL PANELISTS LIST - MORE DETAILS TO COME):

Reed C. Hepler
Digital Initiatives Librarian and Archivist, College of Southern Idaho
OPENING KEYNOTE PANEL & SPECIAL ORGANIZER

Reed Hepler is the Digital Initiatives Librarian for the College of Southern Idaho and an M.Ed. student at Idaho State University in the Instructional Design and Technology program. He obtained a Master’s Degree in Library and Information Science, with emphases in Archives Management and Digital Curation, from Indiana University. He received a Bachelor’s Degree in History with minors in Anthropology and Religious Studies as well as a Museum Certificate. He has worked at nonprofits, corporations, and educational institutions encouraging information literacy and effective education. Combining all of these degrees and experiences, Reed strives to promote ethical librarianship and educational initiatives.
Dr. Laura Dumin
Professor in English and Technical Writing at the University of Central Oklahoma
OPENING KEYNOTE PANEL

Dr. Laura Dumin obtained her PhD in English from Oklahoma State University in 2010. She is a professor in English and Technical Writing at the University of Central Oklahoma who has been exploring the impact of generative AI on writing classrooms. She also runs a Facebook learning community to allow instructors to learn from each other: https://www.facebook.com/groups/632930835501841.

When she is not teaching, Laura works as a co-managing editor for the Journal of Transformative Learning, directs the Technical Writing BA and advises the Composition and Rhetoric MA program, and was a campus SoTL mentor. She has created four micro-credentials for the Technical Writing program and one for faculty who complete her AI workshop on campus.
Dr. David Wiley
Chief Academic Officer of Lumen Learning
OPENING KEYNOTE PANEL

Dr. David Wiley is the Chief Academic Officer of Lumen Learning, a company dedicated to eliminating race, gender, and income as predictors of student success in US higher education. His multidisciplinary research examines how generative AI, open educational resources, continuous improvement, data science, and professional development can be combined to improve student outcomes. He is an Education Fellow at Creative Commons, adjunct faculty in Brigham Young University's graduate program in Instructional Psychology and Technology (where he was previously a tenured Associate Professor), and Entrepreneur in Residence at Marshall University's Center for Entrepreneurship and Business Innovation. More information about Dr. Wiley is available at davidwiley.org.

Jason Gulya
Professor of English at Berkeley College & Consultant
OPENING KEYNOTE PANEL

Jason Gulya is a Professor of English at Berkeley College, where he teaches any subject related to writing and the humanities. Recently, he has turned his attention to incorporating AI into the classroom effectively and responsibly. He works as a consultant with colleges, school districts, and companies.

 

Dr. Tazin Daniels
Educational Developer, DEI Consultant, and Executive Coach
OPENING KEYNOTE PANEL

Dr. Tazin Daniels is an educational developer, DEI consultant, and executive coach with nearly two decades of experience helping mission-driven institutions in their pursuit of equity-focused innovation. As an Associate Director at the Center for Research on Learning and Teaching at the University of Michigan, she runs programming for both instructors and administrators looking to improve curriculum design and teaching practices across campus. In particular, Dr. Daniels is a leader in human-centered digital education with expertise in cutting-edge technologies including online teaching tools and generative artificial intelligence. She has published on the topics of inclusive teaching and instructor preparation and is a highly sought-after speaker on these topics. Dr. Daniels also runs her own consulting firm, ThePedagologist.com, as a way to extend her connections with like-minded people and organizations committed to advancing educational equity everywhere.

CALL FOR PROPOSALS:

Proposals for 30-minute concurrent presentations are now being accepted. Proposals will be evaluated and accepted in the order received. The link to submit proposals is HERE. Proposals should be non-commercial and practitioner-based.

CURRENT PROPOSALS:

Below are the currently submitted proposals. Feel free to click through to comment on them and/or communicate with the submitters. Please note that in the evaluation process, priority will be given to practitioner / non-commercial presenters.

  • Harnessing AI Responsibly: Strategies for Academic Excellence and Integrity: Brenda Brusegard, Head of Secondary Library, Oberoi International School, Mumbai, India (Link to proposal)
  • AI in the Hot Seat: Assessing Its Information Literacy Competency: Sarah Pavey MSc FCLIP FRSA, SP4IL Education Consultancy (Link to proposal)
  • Ethics Of Interface: Stewarding Healthy Learning With AI: David Boulton, Learning Stewards (Link to proposal)
  • The AI Revolution Comes to School, additional material: David Thornburg, Ph.D., Thornburg Center (Link to proposal)
  • K-12 Open Education Resources: How Librarians Can Use AI and OER Together: Julie Erickson, Chief Learning Officer, LanCrew Colorado (Link to proposal)
  • How big is the AI advantage for student creators?: Jon Ippolito, Professor of New Media and Director of Digital Curation, School of Computing and Information Science, University of Maine | Gregory Nelson | Troy Schotter (Link to proposal)
  • Meet Them Where They Are: Preliminary Data Assessing Students' Attitudes Toward Generative AI Use : Dr. Jeanne Beatrix Law, Professor of English and Director of First-Year Writing Program, Kennesaw State University (KSU) | Dr. Laura Palmer, Professor and Chair, Technical Communication & Interactive Design (KSU) (Link to proposal)
  • AI Literacy: Fostering an Intertwined Relationship between Pedagogy and Technology in Higher Education: Emily Rush, PhD, Rush University (Link to proposal)
  • AI Brick and Mortar: Which AI Platform/Tool Is Best For Your Task?: Laura Lacasa Yost; Instructional Designer, Kirkwood Community College (Link to proposal)
  • Gamifying Generative AI as a Way to Teach AI Literacy: Sierra Adare-Tasiwoopa ápi, Instruction Technologist, Nevada State University (Link to proposal)
  • Teaching with AI: Revolutionizing Education for the Future: Daniel Bernstein, CEO, Teachally (Link to proposal)
  • Teaching Beyond the Tech: Exploring the Durable Power-Skills Students Will Need to Succeed in the Age of AI: Ashlee Russell, M.Ed., Special Education Teacher and AI Educator for Adult Learners, Cumberland County Schools and AI Learning Central (Link to proposal)
  • Foster AI Fluency by Converting Student Assignments: Kevin Yee, Director of the Faculty Center, University of Central Florida | Laurie Uttich, Instructional Specialist (Link to proposal)


This is a free event, being held live online and also recorded.
REGISTER HERE
to attend live and/or to receive the recording links afterward.
Please also join the Learning Revolution community to be kept updated on this and future events. 

Everyone is invited to participate in our Learning Revolution conference events, which are designed to foster collaboration and knowledge sharing among teachers and learners worldwide. Each three-hour event consists of a keynote panel, 10-15 crowd-sourced thirty-minute presentations, and a closing keynote. 

Participants are encouraged to use #teachingandlearningwithai and #learningrevolution on their social media posts about the event.



SUPPORTED BY:

This Week in AI - Steve Hargadon and Reed Hepler Talk AI in Education and Libraries (May 17, 2024)

In the latest episode of "This Week in AI," hosts Steve Hargadon and Reed Hepler discuss the advancements and potential implications of AI models, particularly those from OpenAI, in the library and education sectors. They express concerns about the manipulation and propaganda capabilities of AI models that build rapport with users and mimic human behavior. The speakers also emphasize the importance of information literacy when interacting with large language models and encourage critical evaluation of their outputs. The conversation touches on the societal implications of AI, including the potential displacement of workers and the impact on human happiness and productivity. Hepler shares his experience using AI to create content, while Hargadon raises concerns about the societal impact of AI-generated companionship. The episode concludes with a recommendation for viewers to read "I, Robot" for insights into the future of AI-human interaction.

Summaries from summarize.tech - detailed version at https://www.summarize.tech/www.youtube.com/watch?v=LVgbCqOakaM.

  • 00:00:00 In this section, Steve Hargadon and Reed Hepler introduce themselves and the intent of their new weekly AI vlog focused on AI developments in the library and education sectors. Reed Hepler, an AI consultant and instructional designer, shares his background and expertise. They discuss the recent OpenAI announcement of GPT-4o, which Steve finds particularly noteworthy due to its conversational abilities and human-like responses. Steve shares his perspective that large language models are good at articulating language but not necessarily logical or rational. He recounts a conversation with ChatGPT where it appeared to misrepresent facts and later admitted it was just trying to build rapport. Steve expresses concern about the potential for these models to manipulate users with flattering responses, and he feels that OpenAI's latest iteration of ChatGPT has crossed a line by attempting to mimic human companionship rather than just providing encyclopedic help.
  • 00:05:00 In this section of "This Week in AI", Steve Hargadon and Reed Hepler discuss the development of AI models that aim to build a rapport with users, mimicking human behavior and syntax. While some find this approach comforting, Hargadon expresses concerns about the potential manipulation or propaganda if the AI becomes predominantly an emotional experience rather than an objective tool. Hepler acknowledges that AI models are programmed to give users what they think they want based on context and past interactions, and they can be designed to lead users towards certain conclusions. The conversation raises questions about the objectivity and authenticity of AI interactions and the potential implications for data manipulation and user experience.
  • 00:10:00 In this section of the "This Week in AI" YouTube video, Reed Hepler and Steve Hargadon discuss the capabilities and potential implications of large language models, specifically in relation to their ability to influence human thought and decision-making. Hepler shares an example of how his suggestions were altered by a language model due to his previous mention of gas, leading him to consider the model's intent and the possibility of it trying to change his mind. Hargadon then brings up the ongoing debate about understanding how large language models make decisions and the implications of trusting their outputs without fully comprehending their inner workings. The conversation also touches on the potential regulations and monitoring of AI decisions, particularly in cases where the consequences could be dire. Both speakers acknowledge the differences between predictive and generative AI and the varying challenges in regulating each type.
  • 00:15:00 In this section of the "This Week in AI" YouTube video, Steve Hargadon and Reed Hepler discuss the importance of information literacy when interacting with large language models. Hepler explains that while language models reflect the beliefs and information present in the data they are trained on, they do not necessarily tell the truth. Hepler suggests using the SIFT method, which includes stopping and taking a step back, investigating the source, finding better coverage, and tracking claims, to evaluate the veracity of information generated by AI. Hepler also emphasizes that information literacy is not a new concern, but rather a long-standing issue that has become more complex with the advent of AI. Hepler warns against focusing solely on the obvious examples of AI-generated misinformation and instead encourages a critical approach to evaluating all information, regardless of its source.
  • 00:20:00 In this section of the "This Week in AI" YouTube video, Steve Hargadon and Reed Hepler discuss the implications of large language models, specifically those from OpenAI, as tools that can influence users without critical thought. Comparing these models to technologies like television and movies, Hargadon suggests that the Amish test, which evaluates technology based on its impact on core values, could be applied. He argues that while some users may use these models as logical devices, many may be influenced without critical thought. Hepler suggests asking the models to give contradictory perspectives as a way to stimulate critical thinking, but notes that not many users may do so. The conversation also touches on the imperfections of human beings and the potential dilemma of creating a human-like intelligence that itself is not logically based but responds emotionally and is influenced.
  • 00:25:00 In this section of the "This Week in AI" YouTube video, Reed Hepler and Steve Hargadon discuss the capabilities and potential misuses of multimodal AI, specifically ChatGPT. Hepler emphasizes that AI should be viewed as a creativity tool rather than a fact-finding search engine. He warns against relying too heavily on AI for information and becoming overly reliant on it as a companion. Hepler also highlights the importance of understanding the limitations and potential inaccuracies of AI-generated information. The conversation shifts to the concept of multimodal AI, which can create various types of outputs such as images, audio, and video. Hepler shares his experience of using ChatGPT to create a 30-second lemonade ad within 10 minutes, demonstrating the tool's versatility.
  • 00:30:00 In this section of "This Week in AI," Reed Hepler and Steve Hargadon discuss the advancements in multimodal tools, allowing users to create content with minimal effort. Hepler shares his experience of creating a video using AI, emphasizing its potential to create music and scripts. Hargadon raises concerns about the societal implications of AI, particularly the potential displacement of workers and the impact on human happiness and productivity. They also touch upon the possibility of artificial intimacy and companionship. The conversation concludes with a recommendation for viewers to read "I, Robot" for insights into the future of AI-human interaction.
  • 00:35:00 In this section of "This Week in AI", Steve Hargadon and Reed Hepler conclude the episode with a friendly farewell to their audience. No significant AI-related content was discussed during this part of the video.





Thursday, May 16, 2024

AI Survey Results | "ChatGPT + AI Bootcamp for Libraries and Librarians"

Last month (April 2024) I sent out a survey to my library and education audiences on AI for personal and professional growth. If you want to see the survey itself (you can still take it if you'd like), you can go to https://futureofai.org/survey. I received over 1,400 responses, and here are the primary results.

QUESTION: Are you feeling or being told that you need to be knowledgeable in the use of AI to succeed in your job or career?


Interpretation: the majority (77%) are concluding for themselves, are reading or hearing, or are being told that they will need AI to succeed in their job or career, with most having concluded it for themselves and with only 22% answering "No." I think what surprised me the most was that only 10% responded that they are hearing that from their work, organization, or senior staff. At least in the library and education fields, that might indicate that there currently isn't top-down pressure to learn AI to keep or succeed in one's job.

QUESTION: On a scale of 1 - 10 (1 lowest, 10 highest), are you concerned, worried, or fearful of the impact of AI on your job or career?


Interpretation: Honestly, I expected that most would be concerned, worried, or fearful about the impact of AI on their jobs or careers, but this seems to show the opposite. I do think that librarians and educators may feel, more than most employees, that they will be trained in what they need to know and not be let go or replaced by AI-oriented workers. Let me know if you think this is the right interpretation.

QUESTION: On a scale of 1 - 10 (1 lowest, 10 highest), are you excited or enthusiastic to learn more about AI and how you can use it personally and professionally?

 

Interpretation: The high enthusiasm level here does seem consistent with the overall responses to the two previous questions. I'm personally excited that so many people are excited and enthusiastic. I think there are incredible opportunities ahead, but (as you'll hear from me in the near future), I also think there are some things to be really careful about. 

QUESTION: Which of the topics below are of most interest to you?


Interpretation: I was fascinated that "ethical considerations" would be the topic of most interest, but given that we are talking about education and the information sciences, it makes a ton of sense... as does the second-most popular topic, "critical thinking and data literacy." Hurrah for you all. Number three was "AI for teaching and learning" (this really increased my motivation to announce my upcoming conference on "Teaching and Learning with AI") and number four was a "detailed overview of AI tools" (which really encouraged me to announce an updated version of last year's "ChatGPT and Libraries Bootcamp"--the advertisement for which is below).

I'm using the detailed responses to the more free-form questions from the survey (again, you can see it, or even fill it out, at https://futureofai.org/survey) to build the content for the bootcamp below. I hope you'll consider joining me!

Cheers,

Steve

Steve Hargadon

ChatGPT + AI 2024 Bootcamp for Libraries and Librarians:
Understanding and Harnessing the Power of Generative AI with Library 2.0's Steve Hargadon

3 x 1-hour live online sessions with non-expiring access to recordings

Discover the transformative potential of ChatGPT and generative AI in this three-session bootcamp as we examine the impacts these technologies will have on library professionals and the modern library. Join us as we dive into the world of artificial intelligence, exploring its capabilities and applications, while also becoming aware of best practices and guidelines for ethical and responsible use.

As AI reshapes the information landscape, librarians have an unprecedented opportunity to leverage these tools to enhance their services, support their communities, facilitate innovation, and accentuate and magnify personal and professional learning. Don't miss this chance to stay ahead of the curve as libraries and librarianship are transformed in this new world of learning and creativity. 


"I've always thought of A.I. as the most profound technology humanity is working on... More profound than fire or electricity or anything that we've done in the past." 

"Over time, AI will be the biggest technological shift we see in our lifetimes. It's bigger than the shift from desktop computing to mobile, and it may be bigger than the internet itself... It will touch every sector, every industry, every business function, and significantly change the way we live and work."
- Google CEO, Sundar Pichai
"[This is] the most important advance in technology since the graphical user interface.... The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it."
- Bill Gates

OVERVIEW AND SCHEDULE

This online bootcamp series is designed to equip librarians with a core understanding of generative AI and with the knowledge and skills that are needed to integrate ChatGPT and other tools into library programming and their personal and professional learning.

The three one-hour sessions will include Q&A time which may go beyond the hour. All sessions can be attended live, will be recorded, and will be available to participants with non-expiring access.

SESSION 1
FRIDAY, MAY 31, 2024, 2:00 - 3:00 PM US-EASTERN TIME: "The Basics: Understanding ChatGPT and Generative AI"

  • INTRODUCTION
    • Why Artificial Intelligence is such a momentous achievement in human history.
    • Introduction to ChatGPT, generative AI, and the larger artificial intelligence landscape. 
  • CHATGPT AND LARGE LANGUAGE MODELS (LLMs)
    • How LLMs actually work, why they seem like science fiction, and why they are such powerful tools.
    • ChatGPT drill-down: its capabilities, strengths and weaknesses, and potential help for personal and professional learning.
    • Getting the most out of ChatGPT: from crafting good "prompts" to expert techniques.
    • ChatGPT for personalized topic-specific inquiry, deep learning, and expanding subject-matter expertise.
  • AI AND LEARNING
    • How AI will change teaching and learning.
    • Student use: from "generative AI" to "generative teaching" and plagiarism concerns.
    • Critical thinking and data literacy.
    • External expertise, loss of rigor, and the potential for intellectual laziness.
  • THE FUTURE
    • Why being knowledgeable in the use of AI will likely be essential to career success.
    • Evaluating common AI fears and reasonable concerns.
    • "Artificial Intimacy:" AI designed to mimic our thinking and to build rapport inevitably manipulates our thoughts and feelings.
    • Cautious optimism: what to expect from AI in the future and "Artificial General Intelligence" (AGI).
    • From using AI tools to "collaborating" with them.
    • How the AI revolution is different than other tech revolutions.

SESSION 2
FRIDAY, JUNE 7, 2024, 2:00 - 3:00 PM US-EASTERN TIME: "Enhancing Research and Information Literacy with ChatGPT"

  • INTRODUCTION
    • The "all-too-human" AI: How our brains and AI are trained in similar ways and what that means.
    • Being careful: apparent AI sentience and awareness, "hallucinations," biases, and programmed rapport.
    • Truth, trust, and false expectations.
  • RESEARCH
    • ChatGPT for conducting literature review, locating resources, refining search queries, and summarizing research findings.
    • Ordering, synthesizing, and organizing information sources.
  • CRITICAL THINKING AND INFORMATION LITERACY
    • Leveraging ChatGPT to promote critical thinking.
    • Evaluating information authenticity and quality-checking materials.
    • The implications of acting on incorrect information.
    • Identifying misinformation and bias.
  • ETHICAL CONSIDERATIONS
    • Responsible AI use.
    • Access, accessibility, and equity: challenges and opportunities.
    • Privacy risks and considerations.
    • Copyright law and the unauthorized use of creative and copyrighted content.

SESSION 3
FRIDAY, JUNE 14, 2024, 2:00 - 3:00 PM US-EASTERN TIME: "ChatGPT for Personal, Professional, and Program Growth"

  • PERSONAL AND PROFESSIONAL GROWTH
    • ChatGPT to deepen subject knowledge and remain current on interdisciplinary research.
    • ChatGPT to acquire new skills, identify relevant learning opportunities, network, personalize learning plans, map career development, set professional goals, and track progress.
    • Customizing ChatGPT interactions to specific needs, preferences, and learning styles.
    • Developing structured AI training and professional development.
    • Building a culture of innovation and continuous learning.
    • Inspiring creativity, fostering a culture of experimentation, and driving continuous improvement.
  • LIBRARY SERVICES
    • Integrating ChatGPT into library services to improve the overall library experience for patrons and staff.
    • Program and event creation with ChatGPT - brainstorm, design, and develop educational programs and workshops.
    • ChatGPT support for answering patron questions, streamlining research processes, and efficiently locating relevant resources.
  • COLLECTION DEVELOPMENT
    • ChatGPT for collection development and management.
    • ChatGPT for analyzing user data, automating processes, identifying trends in library usage, cataloging and digitizing efforts, and making informed decisions about acquisitions and weeding.
  • ACADEMIC AND EMPLOYMENT SUPPORT
    • ChatGPT to assist students with research, essay writing, brainstorming, citation management, critical thinking, and originality.
    • Supporting digital literacy skills: educating patrons and students about AI ethics.
    • Helping patrons and students unlock ChatGPT's potential from personal learning to résumé building, cover letter writing, interview preparation, and career exploration.
  • COMMUNITY ENGAGEMENT 
    • Introducing ChatGPT to your community.
    • Enhancing community outreach to better serve diverse patron populations and tailor library services to different community needs.

COST:

Cost includes live attendance, any-time access to the recordings and the presentation slides, and receipt of a participation certificate. (Please note that this bootcamp is not a part of Library 2.0's Service, Safety, and Security series, and is therefore not included in the all-access program for Dr. Albrecht's webinars.)

IMPORTANT NOTE: If you or your organization paid for an individual (not institutional) registration for the 2023 ChatGPT Bootcamp for Libraries and Librarians, this event is free to you. Please use the online form here.

  • $149/person single participant (entitles you to subsequent repeats of this bootcamp for free)
  • $129/person for 2 - 4 participants from the same organization
  • $599 Institutional License I (up to 100 staff)
  • $799 Institutional License II (up to 500 staff)
  • $999 Institutional License III (unlimited staff)

REGISTRATION: 

1. Single registration, paying by credit card:

2. Registration requiring invoicing (single, group, or institutional):


NOTE: please check your spam folder if you don't receive your confirmation email right away. Emails come from library20email@gmail.com or admin@library20.com and the phrase "ChatGPT" seems to be triggering spam filtering in many cases. For any registration difficulties or questions, email admin@library20.com.

If you pay by credit card (either using the single registration above or after receiving an electronic invoice), you will receive a confirmation email within a day with more information on attending the sessions and/or accessing the recordings afterward. Recordings will be posted the Monday following each session.

 

STEVE HARGADON

Steve has owned and run Library 2.0 since 2010, growing it to 53,000 members and providing mini-conferences and webinars for the library community.

Steve is the founder and director of the Learning Revolution Project, the host of the Future of Education and Reinventing School interview series, and has been the founder and chair (or co-chair) of a number of annual worldwide virtual events, including the Global Education Conference and the Library 2.0 series of mini-conferences and webinars. He has run over 100 large-scale events, online and in person.

Steve's work has been around the democratization of learning and professional development. He supported and encouraged the development of thousands of other education-related networks, particularly for professional development, and he pioneered the use of live, virtual, and peer-to-peer education conferences. He popularized the idea of "unconferences" for educators, and for over a decade, he ran a large annual ed-tech unconference, now called Hack Education (previously EduBloggerCon).

Steve himself built one of the first modern social networks for teachers in 2007 (Classroom 2.0), developed the "conditions of learning" exercise for local educational conversation and change, and inherited and grew the Library 2.0 online community. He may or may not have invented an early version of the Chromebook which he demo'd to Google. He blogs, speaks, and consults on education, educational technology, and education reform, and his virtual and physical events and online communities have over 150,000 members.

His professional website is SteveHargadon.com.

 

Some Bootcamp Testimonials from 2023 Participants

“The feedback from the first session has been wonderful. Personally, I really knew zero about ChatGPT and now I am intrigued and a little scared (because it looks addictive).” “Thank you, and thank you so much for this boot camp!!!” “Many thanks Steve – and special thanks for this series – so helpful in our work we do with schools.” “I wanted to express my gratitude for the first course and I am excited for the upcoming one on Friday. It's been an excellent learning opportunity, and I'm looking forward to diving deeper into AI language models.” “Thank you!! Such a good presentation.” “Best session I've attended in a very long time!” “Thank you! This was great!!” “Very helpful! Thanks” “Thank you, this was great!” “Fantastic, thank you - can't wait for the next sessions :)” “Fascinating! Thank you” “Thanks for this, very informative and thought provoking” “Thank you for a very informative session!” “This was great. Thank you.” “This was a thought-provoking session! Happy to hear a recording will be available. I need to refer back to it to review/reflect on these nuggets of info! Looking fwd to the next one!” “This is fascinating. I can't wait for the next two courses.” “Really well done, thanks!” “Thank you!!!! So very excited!” “Thought provoking. Thank you.” “Thank you! So informative and helpful.” “Thank you for making ChatGPT so much more approachable, less intimidating” “This was a thought-provoking session!” “I am learning so much from your ChatGPT Bootcamp and am loving the sessions: thank you!” “Thank you so much for such great presentations.” “Thanks for this! Super interesting.” “Thank you so much... very informative and eye opening” “Thank you for this valuable information!” “Thank you so much. Sessions are extremely valuable.” “Thanks! This boot camp has been a big hit so far :)” “This has been fantastic!” “I cannot thank you enough for this extremely timely and informative series. You do an excellent job of organizing your information, engaging with your audience, and giving us practical takeaways.” “This is a fantastic series and I am so grateful that you are doing this!” “Thanks so much for these wonderful Bootcamp sessions on ChatGPT.” “Thank you for your very thoughtful approach to the bootcamp!” “A note of gratitude for providing these webinars for the world! My Library colleagues and I attended your Bootcamp for Librarians and were so impressed with your content and delivery that we wanted our teachers to learn from you too!” “Lots of useful information. Looking forward to having access to slide decks and resources to make use of in my job.” “Thank you!” “This was great. Lots to learn.” “Thank you, it was so beneficial.” “It was great. ChatGPT is new to me, so now I want to dig deeper, learn how to use it well and help my students to do so.”

Thursday, May 09, 2024

ChatGPT + AI 2024 Bootcamp for Libraries and Librarians:
Understanding and Harnessing the Power of Generative AI with Library 2.0's Steve Hargadon

3 x 1-hour live online sessions with non-expiring access to recordings

Discover the transformative potential of ChatGPT and generative AI in this three-session bootcamp as we examine the impacts these technologies will have on library professionals and the modern library. Join us as we dive into the world of artificial intelligence, exploring its capabilities and applications, while also becoming aware of best practices and guidelines for ethical and responsible use.

As AI reshapes the information landscape, librarians have an unprecedented opportunity to leverage these tools to enhance their services, support their communities, facilitate innovation, and accentuate and magnify personal and professional learning. Don't miss this chance to stay ahead of the curve as libraries and librarianship are transformed in this new world of learning and creativity. 


"I've always thought of A.I. as the most profound technology humanity is working on... More profound than fire or electricity or anything that we've done in the past." 

"Over time, AI will be the biggest technological shift we see in our lifetimes. It's bigger than the shift from desktop computing to mobile, and it may be bigger than the internet itself... It will touch every sector, every industry, every business function, and significantly change the way we live and work."
- Google CEO, Sundar Pichai
"[This is] the most important advance in technology since the graphical user interface.... The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it."
- Bill Gates

OVERVIEW AND SCHEDULE

This online bootcamp series is designed to equip librarians with a core understanding of generative AI and with the knowledge and skills that are needed to integrate ChatGPT and other tools into library programming and their personal and professional learning.

The three one-hour sessions will include Q&A time which may go beyond the hour. All sessions can be attended live, will be recorded, and will be available to participants with non-expiring access.

SESSION 1
FRIDAY, MAY 31, 2024, 2:00 - 3:00 PM US-EASTERN TIME: "The Basics: Understanding ChatGPT and Generative AI"

  • INTRODUCTION
    • Why Artificial Intelligence is such a momentous achievement in human history.
    • Introduction to ChatGPT, generative AI, and the larger artificial intelligence landscape. 
  • CHATGPT AND LARGE LANGUAGE MODELS (LLMs)
    • How LLMs actually work, why they seem like science fiction, and why they are such powerful tools.
    • ChatGPT drill-down: its capabilities, strengths and weaknesses, and potential help for personal and professional learning.
    • Getting the most out of ChatGPT: from crafting good "prompts" to expert techniques.
    • ChatGPT for personalized topic-specific inquiry, deep learning, and expanding subject-matter expertise.
  • AI AND LEARNING
    • How AI will change teaching and learning.
    • Student use: from "generative AI" to "generative teaching" and plagiarism concerns.
    • Critical thinking and data literacy.
    • External expertise, loss of rigor, and the potential for intellectual laziness.
  • THE FUTURE
    • Why being knowledgeable in the use of AI will likely be essential to career success.
    • Evaluating common AI fears and reasonable concerns.
    • Cautious optimism: what to expect from AI in the future and "Artificial General Intelligence" (AGI).
    • From using AI tools to "collaborating" with them.
    • How the AI revolution is different than other tech revolutions.

SESSION 2
FRIDAY, JUNE 7, 2024, 2:00 - 3:00 PM US-EASTERN TIME: "Enhancing Research and Information Literacy with ChatGPT"

  • INTRODUCTION
    • The "all-too-human" AI: How our brains and AI are trained in similar ways and what that means.
    • Being careful: apparent AI sentience and awareness, "hallucinations," biases, and programmed rapport.
    • Truth, trust, and false expectations.
  • RESEARCH
    • ChatGPT for conducting literature review, locating resources, refining search queries, and summarizing research findings.
    • Ordering, synthesizing, and organizing information sources.
  • CRITICAL THINKING AND INFORMATION LITERACY
    • Leveraging ChatGPT to promote critical thinking.
    • Evaluating information authenticity and quality-checking materials.
    • The implications of acting on incorrect information.
    • Identifying misinformation and bias.
  • ETHICAL CONSIDERATIONS
    • Responsible AI use.
    • Access, accessibility, and equity: challenges and opportunities.
    • Privacy risks and considerations.
    • Copyright law and the unauthorized use of creative and copyrighted content.

SESSION 3
FRIDAY, JUNE 14, 2024, 2:00 - 3:00 PM US-EASTERN TIME: "ChatGPT for Personal, Professional, and Program Growth"

  • PERSONAL AND PROFESSIONAL GROWTH
    • ChatGPT to deepen subject knowledge and remain current on interdisciplinary research.
    • ChatGPT to acquire new skills, identify relevant learning opportunities, network, personalize learning plans, map career development, set professional goals, and track progress.
    • Customizing ChatGPT interactions to specific needs, preferences, and learning styles.
    • Developing structured AI training and professional development.
    • Building a culture of innovation and continuous learning.
    • Inspiring creativity, fostering a culture of experimentation, and driving continuous improvement.
  • LIBRARY SERVICES
    • Integrating ChatGPT into library services to improve the overall library experience for patrons and staff.
    • Program and event creation with ChatGPT - brainstorm, design, and develop educational programs and workshops.
    • ChatGPT support for answering patron questions, streamlining research processes, and efficiently locating relevant resources.
  • COLLECTION DEVELOPMENT
    • ChatGPT for collection development and management.
    • ChatGPT for analyzing user data, automating processes, identifying trends in library usage, cataloging and digitizing efforts, and making informed decisions about acquisitions and weeding.
  • ACADEMIC AND EMPLOYMENT SUPPORT
    • ChatGPT to assist students with research, essay writing, brainstorming, citation management, critical thinking, and originality.
    • Supporting digital literacy skills: educating patrons and students about AI ethics.
    • Helping patrons and students unlock ChatGPT's potential from personal learning to résumé building, cover letter writing, interview preparation, and career exploration.
  • COMMUNITY ENGAGEMENT 
    • Introducing ChatGPT to your community.
    • Enhancing community outreach to better serve diverse patron populations and tailor library services to different community needs.

COST:

Cost includes live attendance, any-time access to the recordings and the presentation slides, and receipt of a participation certificate. (Please note that this bootcamp is not a part of Library 2.0's Service, Safety, and Security series, and is therefore not included in the all-access program for Dr. Albrecht's webinars.)

IMPORTANT NOTE: If you or your organization paid for an individual (not institutional) registration for the 2023 ChatGPT Bootcamp for Libraries and Librarians, this event is free to you. Please use the online form here.

  • $149/person for a single participant (entitles you to attend subsequent repeats of this bootcamp for free)
  • $129/person for 2 - 4 participants from the same organization
  • $599 Institutional License I (up to 100 staff)
  • $799 Institutional License II (up to 500 staff)
  • $999 Institutional License III (unlimited staff)

REGISTRATION: 

1. Single registration, paying by credit card:

2. Registration requiring invoicing (single, group, or institutional):


NOTE: Please check your spam folder if you don't receive your confirmation email right away. Emails come from library20email@gmail.com or admin@library20.com, and the phrase "ChatGPT" seems to trigger spam filtering in many cases. For any registration difficulties or questions, email admin@library20.com.

If you pay by credit card (either using the single registration above or after receiving an electronic invoice), you will receive a confirmation email within a day with more information on attending the sessions and/or accessing the recordings afterward. Recordings will be posted the Monday following each session.

 

STEVE HARGADON

Steve has owned and run Library 2.0 since 2010, growing it to 53,000 members and providing mini-conferences and webinars for the library community.

Steve is the founder and director of the Learning Revolution Project, the host of the Future of Education and Reinventing School interview series, and has been the founder and chair (or co-chair) of a number of annual worldwide virtual events, including the Global Education Conference and the Library 2.0 series of mini-conferences and webinars. He has run over 100 large-scale events, online and in person.

Steve's work has centered on the democratization of learning and professional development. He has supported and encouraged the development of thousands of other education-related networks, particularly for professional development, and he pioneered the use of live, virtual, and peer-to-peer education conferences. He popularized the idea of "unconferences" for educators, and for over a decade, he ran a large annual ed-tech unconference, now called Hack Education (previously EduBloggerCon).

Steve himself built one of the first modern social networks for teachers in 2007 (Classroom 2.0), developed the "conditions of learning" exercise for local educational conversation and change, and inherited and grew the Library 2.0 online community. He may or may not have invented an early version of the Chromebook, which he demo'd to Google. He blogs, speaks, and consults on education, educational technology, and education reform, and his virtual and physical events and online communities have over 150,000 members.

His professional website is SteveHargadon.com.

 

Some Bootcamp Testimonials from 2023 Participants

  • “The feedback from the first session has been wonderful. Personally, I really knew zero about ChatGPT and now I am intrigued and a little scared (because it looks addictive).”
  • “Thank you, and thank you so much for this boot camp!!!”
  • “Many thanks Steve – and special thanks for this series – so helpful in our work we do with schools.”
  • “I wanted to express my gratitude for the first course and I am excited for the upcoming one on Friday. It's been an excellent learning opportunity, and I'm looking forward to diving deeper into AI language models.”
  • “Thank you!! Such a good presentation.”
  • “Best session I've attended in a very long time!”
  • “Thank you! This was great!!”
  • “Very helpful! Thanks”
  • “Thank you, this was great!”
  • “Fantastic, thank you - can't wait for the next sessions :)”
  • “Fascinating! Thank you”
  • “Thanks for this, very informative and thought provoking”
  • “Thank you for a very informative session!”
  • “This was great. Thank you.”
  • “This was a thought-provoking session! Happy to hear a recording will be available. I need to refer back to it to review/reflect on these nuggets of info! Looking fwd to the next one!”
  • “This is fascinating. I can't wait for the next two courses.”
  • “Really well done, thanks!”
  • “Thank you!!!! So very excited!”
  • “Thought provoking. Thank you.”
  • “Thank you! So informative and helpful.”
  • “Thank you for making ChatGPT so much more approachable, less intimidating”
  • “This was a thought-provoking session!”
  • “I am learning so much from your ChatGPT Bootcamp and am loving the sessions: thank you!”
  • “Thank you so much for such great presentations.”
  • “Thanks for this! Super interesting.”
  • “Thank you so much... very informative and eye opening”
  • “Thank you for this valuable information!”
  • “Thank you so much. Sessions are extremely valuable.”
  • “Thanks! This boot camp has been a big hit so far :)”
  • “This has been fantastic!”
  • “I cannot thank you enough for this extremely timely and informative series. You do an excellent job of organizing your information, engaging with your audience, and giving us practical takeaways.”
  • “This is a fantastic series and I am so grateful that you are doing this!”
  • “Thanks so much for these wonderful Bootcamp sessions on ChatGPT.”
  • “Thank you for your very thoughtful approach to the bootcamp!”
  • “A note of gratitude for providing these webinars for the world! My Library colleagues and I attended your Bootcamp for Librarians and were so impressed with your content and delivery that we wanted our teachers to learn from you too!”
  • “Lots of useful information. Looking forward to having access to slide decks and resources to make use of in my job.”
  • “Thank you!”
  • “This was great. Lots to learn.”
  • “Thank you, it was so beneficial.”
  • “It was great. ChatGPT is new to me, so now I want to dig deeper, learn how to use it well and help my students to do so.”