This is a remarkably insightful and timely piece — the kind of nuanced, forward-looking analysis universities desperately need as they scramble to respond to the generative AI wave. Rather than treating “AI” as a monolith, the author wisely breaks it down into three distinct categories — chatbots, APIs, and local LLMs (aka “local llamas”) — each with vastly different implications for research ethics, data governance, and academic freedom.
The observation that ChatGPT is merely the “entry-level tool” rings especially true. It’s the flashy front door everyone walks through, but serious researchers will soon migrate to APIs and, eventually, local models like Llama 2 — not because they’re trendier, but because they offer control, customization, and crucially, privacy. The comparison to uploading sensitive data to an API versus analyzing it locally — “with the same level of risk you’d have using Microsoft Excel” — is brilliant in its simplicity. That’s the kind of framing ethics boards and administrators need to hear.
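To make that contrast concrete, here is a minimal sketch of what "analyzing data locally" can look like in practice. It assumes the llama-cpp-python package and a quantized Llama 2 model file already downloaded to disk; the model path and prompt are placeholders of mine, not anything from the article.

```python
# Minimal sketch: local inference with llama-cpp-python (assumed installed via
# `pip install llama-cpp-python`). The sensitive text never leaves this machine.
from llama_cpp import Llama

# Placeholder path to a quantized Llama 2 chat model downloaded beforehand.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

# e.g. a confidential interview transcript loaded from local storage
interview_excerpt = "..."

response = llm(
    f"Summarize the main themes in the following text:\n\n{interview_excerpt}\n\nSummary:",
    max_tokens=256,
    temperature=0.2,  # low temperature for a more deterministic summary
)
print(response["choices"][0]["text"])
```

Whether that is truly Excel-level risk depends on the machine and the institution's data-handling rules, but the key point stands: nothing is uploaded anywhere.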
The copyright and REB (Research Ethics Board) questions raised here are thorny and unresolved — and universities pretending they can be solved with blanket bans are setting themselves up for irrelevance. As the author notes, the real challenge isn’t technical; it’s institutional. Do REB members understand temperature settings? Do administrators know the difference between GPT-4 via Azure and a fine-tuned local Llama? Probably not — and that knowledge gap is where policy fails.
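For anyone on an ethics board who has never seen one, a "temperature setting" is just a numeric sampling parameter on the request. A rough sketch using the OpenAI Python client, with a placeholder model name and prompt of my own choosing:

```python
# Rough sketch: the "temperature" an ethics board would need to understand is
# literally one numeric field in the API request (OpenAI Python SDK assumed).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; an institution might use a model served via Azure instead
    messages=[{"role": "user", "content": "Paraphrase this consent form in plain language."}],
    temperature=0.0,  # near-deterministic output; higher values give more varied wording
)
print(response.choices[0].message.content)
```

Low values make the output nearly deterministic; higher values make it more varied, which matters if reproducibility of an analysis is part of the ethics review.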
One minor quibble: while the article dives deep into tools for code, text, and data, it doesn’t touch on AI’s role in creative or cultural domains — like, say, generating culturally authentic Chinese names for anthropological research or historical fiction. Even in the humanities, scholars are beginning to explore how LLMs can assist with culturally nuanced outputs — another area where local, fine-tuned models could offer safer, more ethical alternatives to cloud-based chatbots.
Ultimately, this article doesn’t just diagnose the problem — it points toward a solution: granular, tool-specific policies built by people who actually understand the technology. That’s a tall order, but as Llama 2 and its successors become more accessible, universities have no excuse not to start building that expertise — fast.
Bravo. This should be required reading for every university provost, research dean, and IT policy committee.
Mark,
This is the clearest and most straightforward explanation of these issues and tools I have come across.
I am busily trying to educate myself on how our organization (a homeowners association) can use AI to make information easier to access for our Board and our Members. Initially we will deploy a chat service trained on our website contents and key documentation, so that our mostly elderly members can more easily find the information and guidance they want; eventually I want to expand it to provide insight into the decades of records we have accrued since 1990.
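In case it helps anyone attempting something similar, here is a rough sketch of the retrieval half of what I have in mind: index the documents, pull the passage most relevant to a member's question, and hand it to whatever model we end up using as context. The passages, file handling, and question below are all made up for illustration, and the example stops short of the actual model call.

```python
# Rough sketch of retrieval over HOA documents using TF-IDF similarity
# (scikit-learn assumed installed). Real systems usually use embedding models,
# but the shape of the pipeline is the same.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder passages; in practice, split website pages and PDFs into chunks.
passages = [
    "Pool hours are 8am to 8pm from May through September.",
    "Exterior paint colors must be approved by the Architectural Committee.",
    "Annual assessments are due on January 31 each year.",
]

vectorizer = TfidfVectorizer()
passage_vectors = vectorizer.fit_transform(passages)

question = "When do I have to pay my annual dues?"
question_vector = vectorizer.transform([question])

# Rank passages by similarity to the question and keep the best match.
scores = cosine_similarity(question_vector, passage_vectors)[0]
best = passages[scores.argmax()]

# The retrieved passage becomes context for whichever model we deploy
# (a hosted API or a local one), keeping answers grounded in our own records.
prompt = f"Answer using only this excerpt from our HOA documents:\n{best}\n\nQuestion: {question}"
print(prompt)
```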
Many thanks, and I look forward to more from you!