Friday, May 17, 2024

"Teaching and Learning with AI": New Keynote Panelist (Dr. Tazin Daniels) + Current Proposals

Our first Learning Revolution summit on AI, "Teaching and Learning with AI," will be held online (and for free) on Thursday, June 27th, 2024, from 12:00 - 2:30 pm US-Pacific Time.

We have just added a new keynote panelist, and you will also find the list of current session proposals below. Over 3,000 participants have registered.

OVERVIEW:

What effects do generative Artificial Intelligence (AI) technologies, tools, and applications have on learning and teaching? What impacts will they have on our educational abilities and activities, collaboration and communication, literacy, student agency, and independent, informal, and lifelong learning? The Teaching and Learning with AI summit will consider these questions and more.

While AI technologies have many dramatic benefits, there are also challenges and concerns expressed by professionals, students, and educators about the impact of these new technologies on teaching and learning and the information ecosystem as a whole. Some are reasonably concerned about protecting privacy and confidentiality of students while using generative AI tools and ensuring equity and accessibility. Others worry about ethics, plagiarism, bias, misinformation, transparency, and the loss of critical thinking. And all in the learning professions are wondering how AI might allow or require changes in pedagogy and curricula.

Join us for this free virtual conference to learn how students and educators of all types are utilizing generative AI tools. Conversations and presentations in the conference will address the practical implications of these tools in the profession, and information on the call for non-commercial, practitioner-based proposals is below.

Our special conference chair is Reed C. Hepler, Digital Initiatives Librarian and Archivist, College of Southern Idaho. 



This is a free event, being held live online and also recorded.
REGISTER HERE
to attend live and/or to receive the recording links afterward.
Please also join the Learning Revolution community to be kept updated on this and future events. 

Everyone is invited to participate in our Learning Revolution conference events, which are designed to foster collaboration and knowledge sharing among teachers and learners worldwide. Each three-hour event consists of a keynote panel, 10-15 crowd-sourced thirty-minute presentations, and a closing keynote. 

Participants are encouraged to use #teachingandlearningwithai and #learningrevolution on their social media posts about the event.



OPENING KEYNOTE PANEL (PARTIAL PANELIST LIST - MORE DETAILS TO COME):

Reed C. Hepler
Digital Initiatives Librarian and Archivist, College of Southern Idaho
OPENING KEYNOTE PANEL & SPECIAL ORGANIZER

Reed Hepler is the Digital Initiatives Librarian for the College of Southern Idaho and an M.Ed. student at Idaho State University in the Instructional Design and Technology program. He obtained a Master’s Degree in Library and Information Science, with emphases in Archives Management and Digital Curation, from Indiana University. He received a Bachelor’s Degree in History with minors in Anthropology and Religious Studies as well as a Museum Certificate. He has worked at nonprofits, corporations, and educational institutions encouraging information literacy and effective education. Combining all of these degrees and experiences, Reed strives to promote ethical librarianship and educational initiatives.
Dr. Laura Dumin
Professor in English and Technical Writing at the University of Central Oklahoma
OPENING KEYNOTE PANEL

Dr. Laura Dumin obtained her PhD in English from Oklahoma State University in 2010. She is a professor in English and Technical Writing at the University of Central Oklahoma who has been exploring the impact of generative AI on writing classrooms. She also runs a Facebook learning community to allow instructors to learn from each other: https://www.facebook.com/groups/632930835501841.

When she is not teaching, Laura works as a co-managing editor for the Journal of Transformative Learning, directs the Technical Writing BA program, advises the Composition and Rhetoric MA program, and has served as a campus SoTL mentor. She has created four micro-credentials for the Technical Writing program and one for faculty who complete her AI workshop on campus.
Dr. David Wiley
Chief Academic Officer of Lumen Learning
OPENING KEYNOTE PANEL

Dr. David Wiley is the Chief Academic Officer of Lumen Learning, a company dedicated to eliminating race, gender, and income as predictors of student success in US higher education. His multidisciplinary research examines how generative AI, open educational resources, continuous improvement, data science, and professional development can be combined to improve student outcomes. He is an Education Fellow at Creative Commons, adjunct faculty in Brigham Young University's graduate program in Instructional Psychology and Technology (where he was previously a tenured Associate Professor), and Entrepreneur in Residence at Marshall University's Center for Entrepreneurship and Business Innovation. More information about Dr. Wiley is available at davidwiley.org.

Jason Gulya
Professor of English at Berkeley College & Consultant
OPENING KEYNOTE PANEL

Jason Gulya is a Professor of English at Berkeley College, where he teaches courses related to writing and the humanities. Recently, he has turned his attention to incorporating AI into the classroom effectively and responsibly. He also works as a consultant with colleges, school districts, and companies.


Dr. Tazin Daniels
Educational Developer, DEI Consultant, and Executive Coach; Associate Director, Center for Research on Learning and Teaching, University of Michigan
OPENING KEYNOTE PANEL

Dr. Tazin Daniels is an educational developer, DEI consultant, and executive coach with nearly two decades of experience helping mission-driven institutions in their pursuit of equity-focused innovation. As an Associate Director at the Center for Research on Learning and Teaching at the University of Michigan, she runs programming for both instructors and administrators looking to improve curriculum design and teaching practices across campus. In particular, Dr. Daniels is a leader in human-centered digital education, with expertise in cutting-edge technologies including online teaching tools and generative artificial intelligence. She has published on the topics of inclusive teaching and instructor preparation and is a highly sought-after speaker on these topics. Dr. Daniels also runs her own consulting firm, ThePedagologist.com, as a way to extend her connections with like-minded people and organizations committed to advancing educational equity everywhere.

CALL FOR PROPOSALS:

Proposals for 30-minute concurrent presentations are now being accepted. Proposals will be evaluated and accepted in the order received. The link to submit proposals is HERE. Proposals should be non-commercial and practitioner-based.

CURRENT PROPOSALS:

Below are the currently submitted proposals. Feel free to click through to comment on them and/or communicate with the submitters. Please note that in the evaluation process, priority will be given to practitioner / non-commercial presenters.

  • Harnessing AI Responsibly: Strategies for Academic Excellence and Integrity: Brenda Brusegard, Head of Secondary Library, Oberoi International School, Mumbai, India (Link to proposal)
  • AI in the Hot Seat: Assessing Its Information Literacy Competency: Sarah Pavey MSc FCLIP FRSA, SP4IL Education Consultancy (Link to proposal)
  • Ethics Of Interface: Stewarding Healthy Learning With AI: David Boulton, Learning Stewards (Link to proposal)
  • The AI Revolution Comes to School, additional material: David Thornburg, Ph.D., Thornburg Center (Link to proposal)
  • K-12 Open Education Resources: How Librarians Can Use AI and OER Together: Julie Erickson, Chief Learning Officer, LanCrew Colorado (Link to proposal)
  • How big is the AI advantage for student creators?: Jon Ippolito, Professor of New Media and Director of Digital Curation, School of Computing and Information Science, University of Maine | Gregory Nelson | Troy Schotter (Link to proposal)
  • Meet Them Where They Are: Preliminary Data Assessing Students' Attitudes Toward Generative AI Use: Dr. Jeanne Beatrix Law, Professor of English and Director of First-Year Writing Program, Kennesaw State University (KSU) | Dr. Laura Palmer, Professor and Chair, Technical Communication & Interactive Design (KSU) (Link to proposal)
  • AI Literacy: Fostering an Intertwined Relationship between Pedagogy and Technology in Higher Education: Emily Rush, PhD, Rush University (Link to proposal)
  • AI Brick and Mortar: Which AI Platform/Tool Is Best For Your Task?: Laura Lacasa Yost, Instructional Designer, Kirkwood Community College (Link to proposal)
  • Gamifying Generative AI as a Way to Teach AI Literacy: Sierra Adare-Tasiwoopa ápi, Instruction Technologist, Nevada State University (Link to proposal)
  • Teaching with AI: Revolutionizing Education for the Future: Daniel Bernstein, CEO, Teachally (Link to proposal)
  • Teaching Beyond the Tech: Exploring the Durable Power-Skills Students Will Need to Succeed in the Age of AI: Ashlee Russell, M.Ed., Special Education Teacher and AI Educator for Adult Learners, Cumberland County Schools and AI Learning Central (Link to proposal)
  • Foster AI Fluency by Converting Student Assignments: Kevin Yee, Director of the Faculty Center, University of Central Florida | Laurie Uttich, Instructional Specialist (Link to proposal)





SUPPORTED BY:

This Week in AI - Steve Hargadon and Reed Hepler Talk AI in Education and Libraries (May 17, 2024)

In the latest episode of "This Week in AI," hosts Steve Hargadon and Reed Hepler discuss the advancements and potential implications of AI models, particularly those from OpenAI, in the library and education sectors. They express concerns about the manipulation and propaganda capabilities of AI models that build rapport with users and mimic human behavior. The speakers also emphasize the importance of information literacy when interacting with large language models and encourage critical evaluation of their outputs. The conversation touches on the societal implications of AI, including the potential displacement of workers and the impact on human happiness and productivity. Hepler shares his experience using AI to create content, while Hargadon raises concerns about the societal impact of AI-generated companionship. The episode concludes with a recommendation for viewers to read "I, Robot" for insights into the future of AI-human interaction.

Summaries from summarize.tech - detailed version at https://www.summarize.tech/www.youtube.com/watch?v=LVgbCqOakaM.

  • 00:00:00 In this section, Steve Hargadon and Reed Hepler introduce themselves and the intent of their new weekly AI vlog focused on AI developments in the library and education sectors. Reed Hepler, an AI consultant and instructional designer, shares his background and expertise. They discuss the recent OpenAI announcement of ChatGPT 4, which Steve finds particularly noteworthy due to its conversational abilities and human-like responses. Steve shares his perspective that large language models are good at articulating language but not necessarily logical or rational. He recounts a conversation with ChatGPT where it appeared to misrepresent facts and later admitted it was just trying to build rapport. Steve expresses concern about the potential for these models to manipulate users with flattering responses, and he feels that OpenAI's latest iteration of ChatGPT has crossed a line by attempting to mimic human companionship rather than just providing encyclopedic help.
  • 00:05:00 In this section of "This Week in AI", Steve Hargadon and Reed Hepler discuss the development of AI models that aim to build a rapport with users, mimicking human behavior and syntax. While some find this approach comforting, Hargadon expresses concerns about the potential manipulation or propaganda if the AI becomes predominantly an emotional experience rather than an objective tool. Hepler acknowledges that AI models are programmed to give users what they think they want based on context and past interactions, and they can be designed to lead users towards certain conclusions. The conversation raises questions about the objectivity and authenticity of AI interactions and the potential implications for data manipulation and user experience.
  • 00:10:00 In this section of the "This Week in AI" YouTube video, Reed Hepler and Steve Hargadon discuss the capabilities and potential implications of large language models, specifically in relation to their ability to influence human thought and decision-making. Hepler shares an example of how his suggestions were altered by a language model due to his previous mention of gas, leading him to consider the model's intent and the possibility of it trying to change his mind. Hargadon then brings up the ongoing debate about understanding how large language models make decisions and the implications of trusting their outputs without fully comprehending their inner workings. The conversation also touches on the potential regulations and monitoring of AI decisions, particularly in cases where the consequences could be dire. Both speakers acknowledge the differences between predictive and generative AI and the varying challenges in regulating each type.
  • 00:15:00 In this section of the "This Week in AI" YouTube video, Steve Hargadon and Reed Hepler discuss the importance of information literacy when interacting with large language models. Hepler explains that while language models reflect the beliefs and information present in the data they are trained on, they do not necessarily tell the truth. Hepler suggests using the SIFT method, which includes stopping and taking a step back, investigating the source, finding better coverage, and tracking claims, to evaluate the veracity of information generated by AI. Hepler also emphasizes that information literacy is not a new concern, but rather a long-standing issue that has become more complex with the advent of AI. Hepler warns against focusing solely on the obvious examples of AI-generated misinformation and instead encourages a critical approach to evaluating all information, regardless of its source.
  • 00:20:00 In this section of the "This Week in AI" YouTube video, Steve Hargadon and Reed Hepler discuss the implications of large language models, specifically those from OpenAI, as tools that can influence users without critical thought. Comparing these models to technologies like television and movies, Hargadon suggests that the Amish test, which evaluates technology based on its impact on core values, could be applied. He argues that while some users may use these models as logical devices, many may be influenced without critical thought. Hepler suggests asking the models to give contradictory perspectives as a way to stimulate critical thinking, but notes that not many users may do so. The conversation also touches on the imperfections of human beings and the potential dilemma of creating a human-like intelligence that itself is not logically based but responds emotionally and is influenced.
  • 00:25:00 In this section of the "This Week in AI" YouTube video, Reed Hepler and Steve Hargadon discuss the capabilities and potential misuses of multimodal AI, specifically ChatGPT. Hepler emphasizes that AI should be viewed as a creativity tool rather than a fact-finding search engine. He warns against relying too heavily on AI for information and becoming overly reliant on it as a companion. Hepler also highlights the importance of understanding the limitations and potential inaccuracies of AI-generated information. The conversation shifts to the concept of multimodal AI, which can create various types of outputs such as images, audio, and video. Hepler shares his experience of using ChatGPT to create a 30-second lemonade ad within 10 minutes, demonstrating the tool's versatility.
  • 00:30:00 In this section of "This Week in AI," Reed Hepler and Steve Hargadon discuss the advancements in multimodal tools, allowing users to create content with minimal effort. Hepler shares his experience of creating a video using AI, emphasizing its potential to create music and scripts. Hargadon raises concerns about the societal implications of AI, particularly the potential displacement of workers and the impact on human happiness and productivity. They also touch upon the possibility of artificial intimacy and companionship. The conversation concludes with a recommendation for viewers to read "I, Robot" for insights into the future of AI-human interaction.
  • 00:35:00 In this section of "This Week in AI", Steve Hargadon and Reed Hepler conclude the episode with a friendly farewell to their audience. No significant AI-related content was discussed during this part of the video.