That’s my motto as the summer semester starts (orientations today).
There will be a million more of these articles:
The democratization of Artificial Intelligence and, specifically, the generative-models boom seems to have changed everything, including how we interact with machines. Conversational models such as ChatGPT or Bard and generative systems like Midjourney and DALL·E 2 are unpredictable and constantly learning. Consequently, to obtain quality answers, the questions we ask and how we ask them are increasingly crucial.
All this has led to the rise of a new job: the “prompt engineer.” This new professional is extremely well-remunerated and in high demand. On the one hand, these engineers are responsible for training AI with natural language and, on the other, thoroughly checking search results to create the perfect “prompts.” …
Andrej Karpathy, well-known scientist and co-founder of OpenAI, refers to these new professionals as “AI psychologists.” The idea behind this term is that psychology can play a crucial role in developing and applying this technology. Psychologists can provide insights on the human mind, cognition, behavior and interactions, something which may be fundamental to be able to design more effective, ethical and user-centric AI systems.
Since the earliest days of print journalism, illustration has been used to elucidate and add perspective to stories. Even with the advent of photography in the 19th century, hand-drawn illustrations continued to have their place, as a synthesis of the artist’s vision and the writer’s meaning. The illustrator’s art still speaks to something not just intimately connected to the news, but intrinsically human about story itself.
With the advent of generative-image AI technology, that unique interpretive and narrative confluence of art and text, of human writer and human illustrator, is at risk of extinction.
Based on text prompts, these generative tools can churn out polished, detailed simulacra of what previously would have been illustrations drawn by the human hand. They do so for a few pennies or for free, and they are faster than any human can ever be. Because no human illustrator can work quickly enough or cheaply enough to compete with these robot replacements, we know that if this technology is left unchecked, it will radically reshape the field of journalism. The result will be that only a tiny elite of artists can remain in business, their work selling as a kind of luxury status symbol.
AI-art generators are trained on enormous datasets, containing millions upon millions of copyrighted images, harvested without their creators’ knowledge, let alone compensation or consent. This is effectively the greatest art heist in history, perpetrated by respectable-seeming corporate entities backed by Silicon Valley venture capital. It’s daylight robbery.
If you think this sounds alarmist, consider that AI-generated work has already been used for book covers and as editorial illustration, displacing illustrators from their livelihood. As a result, artists and illustrators have already started suing certain creators of AI-art generators for copyright infringement.
Why, beyond the immediate effect on individual artists, does this matter? AI purports to have the capability to create art, but it will never be able to do so satisfactorily because its algorithms can only create variations of art that already exists. It creates only ersatz versions of illustrations having no actual insight, wit, or originality. Generative AI art is vampirical, feasting on past generations of artwork even as it sucks the lifeblood from living artists. Over time, this will impoverish our visual culture. Consumers will be trained to accept this art-looking art, but the ingenuity, the personal vision, the individual sensibility, the humanity will be missing.
This is also an economic choice for society. While illustrators’ careers are set to be decimated by generative-AI art, the companies developing the technology are making fortunes. Silicon Valley is betting against the wages of living, breathing artists through its investment in AI.
Generative-art AI is just beginning. If illustrators want to stay illustrators, the time to fight is now. Molly Crabapple and the Center for Artistic Inquiry and Reporting call on artists, publishers, journalists, editors, and journalism union leaders to take a pledge for human values against the use of generative-AI images to replace human-made art.
Media publishing takes intellectual property rights very seriously. Its business would not exist without upholding the laws and values that protect such rights. If newsrooms aim to resist corporate theft, they must commit to supporting editorial art made by people, not server farms.
Longtime NoContest.ca friend Chet Wisniewski has “Three Cybercrime Predictions in the Age of ChatGPT.” I don’t know anyone who writes more clearly and helpfully on these things.
Organizations (like my own) have trained their employees to recognize phishing and other types of scams. Such training, it seems, will likely be almost useless going forward. In this piece written for the Forbes Technology Council, Chet writes,
We’ve relied on end users to recognize potential phishing attacks and avoid questionable Wi-Fi—despite the fact that humans aren’t generally as good at recognizing fraud as we believe.
Still, employees have previously had some success in spotting fishy messages by recognizing “off-sounding” language. For example, humans can notice language irregularities or spelling and grammar errors that signal phishing attempts, like a supposed email from an American bank using British English spelling.
AI language and content generators, such as ChatGPT, will likely remove this final detectable element of scams, phishing attempts and other social engineering attacks. A supposed email from “your boss” could look more convincing than ever, and employees will undoubtedly have a harder time discerning fact from fiction. In the case of these scams, the risks of AI language tools aren’t technical. They’re social—and more alarming.
Developing programs to detect ChatGPT content and to warn users will run into this dilemma, though:
Many legitimate users are already using the tool to quickly create business or promotional content. But legitimate use of AI language tools will complicate security responses by making it more difficult to identify criminal instances.
For example, not all emails that include ChatGPT-generated text are malicious, so we can’t simply detect and block them as a blanket rule. This removes a level of certainty from our security response. Security vendors may develop “confidence scores” or other indicators that rate the likelihood that a message or email is AI-generated. Similarly, vendors may train AI models to detect AI-generated text and add a warning banner to user-facing systems. In certain cases, this technology could filter messages from an employee’s inbox.
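The triage logic Chet describes (score a message, add a warning banner above one threshold, filter it above a higher one, and never block on AI-generated text alone) can be sketched in a few lines. Everything here is illustrative: the detector below is a toy heuristic stand-in, not a real vendor model, and the function names and thresholds are my own invention.

```python
# A minimal sketch of the "confidence score" triage described above.
# detect_ai_likelihood is a hypothetical stand-in for a trained
# vendor classifier; real detectors do not work this way.

def detect_ai_likelihood(text: str) -> float:
    """Toy stand-in for an AI-text detector; returns a score in [0, 1]."""
    # Hypothetical heuristic: very uniform sentence lengths score higher.
    sentences = [s for s in text.split(".") if s.strip()]
    if len(sentences) < 2:
        return 0.5  # not enough signal; stay neutral
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return max(0.0, min(1.0, 1.0 - variance / 50.0))

def triage(message: str, warn_at: float = 0.6, filter_at: float = 0.9) -> str:
    """Tag rather than block outright, since legitimate AI-assisted
    mail exists -- the dilemma the quoted passage describes."""
    score = detect_ai_likelihood(message)
    if score >= filter_at:
        return "quarantine"
    if score >= warn_at:
        return "warn-banner"
    return "deliver"
```

The design point is the one from the article: because AI-generated text is not inherently malicious, the output is a graded disposition (deliver, banner, quarantine) rather than a binary block, which removes the "blanket rule" problem at the cost of some certainty.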
It’s a thrilling and unnerving time to be a business communications professor. I have a ton to learn and think about before my next term starts.
It’s worth it for me to remember that, because I am a bit of a formalist snob when it comes to evaluating published writing (prose or verse).
What books are on your nightstand?
I take it you mean the imaginary Doric column that supports a teetering pile of current and old books that the interviewee wants to bring to the reader’s attention. My actual nightstand is a small wood table with a box of Kleenex, a two-year-old Garnet Hill catalog and a cough drop on it. When I go to bed I bring with me the book I am reading during the day. Right now it is the British edition of Sally Rooney’s brilliant, enigmatic new novel, “Normal People.”
I own all of Malcolm’s books and will always mourn that I never met her, no matter how much she might have disapproved of me.
We have added two news-feeds, for Artificial Intelligence and Universal Design for Learning (UDL), to go with our feeds on Social Media Policy and Kwantlen Polytechnic University, and we’ve significantly expanded our resources list at the top of the page. We’re grateful to everybody for checking in as often as you do.
Our friend Jonathan Mayhew has been wondering whether the “balanced literacy” approach to teaching reading has neglected fundamental ways the brain apprehends and organizes sound itself, to the detriment of a generation or more of young would-be readers.
Professors of education are not neuroscientists, but perhaps they should be. … I’m thinking that language acquisition begins with prosody, and so little children are very good already at sound.
Mayhew’s post reminded me why I would not be watching – that is, listening to – tonight’s State of the Union address by President Biden. No spoken set-piece is less mellifluous than this thing, its aural rhythms undermined by round after round of applause as if at the point of a cattle prod. (I might have it on silent, though, to catch any actual “action.”)
We have discussed our friend Clarissa’s opinions on American academia and other topics in the past. She is a Hispanic Studies professor at a midwestern public university, and her blog is always vividly written (and contentious by design, I would say). I asked her if we could quote a truly startling post from today – in its dystopian entirety. Her title tops this post.
Today we received a document that describes the new procedure for creating an academic budget. It’s written in the most atrocious bureaucratese and lists 17 (seventeen) additional meetings on top of the ones in the already existing procedure. Every meeting is described not only in terms of the date, attendees and action items but also a list of feelings (yes, feelings) people should experience after each meeting.
Example. “February 16, 2023. We leave the meeting with a sense of confidence in our capacity to improve the budget and a sense of excitement regarding the new strategic budgeting process.”
There is a separate column for these feelings. Every sentence in it starts with “We leave the meeting with a sense of.” Please note that these meetings haven’t happened yet. These are future meetings. But the feelings they are supposed to inspire have already been pre-planned. And put down in writing by people who lack any sense of humor.
I know everybody is already tired of me bringing up the USSR but I’ll say it again. We weren’t this stupid in the USSR. The pre-planned feelings worked only until Stalin’s death. Once there were no mass executions, nobody took pre-planned feelings seriously.
This is a long, very detailed document. 5 pages, single-spaced, 10 pt font. Somebody got paid actual money to write this unreadable, moronic garbage. It was approved by the administration. What is wrong with us that we let this happen?
The “pre-planned feelings” document refers to people, and again, I quote, as “folks especially faculty.” This “folks” is so grating because it aims to create a folksy, conversational mood in a situation where the guiding idea of the project is to get rid of as many workers as possible.
The concluding section titled “Opportunities and Threats” ends with the following statement: “Identify which departments have more faculty than can be justified.” What’s going to happen with these unjustified professors – or “folks” – is never explained. …
We must lay the groundwork for a deeper and wider change in culture—one in which eventually all folks (faculty in particular) realize that their work in the classroom has some ‘economic’/fiscal/financial aspect/consequence.
After which “we will leave the meeting with a sense of” bla-bla.
The textbook definition of neoliberalism, by the way, is “markets in everything.” What does that mean? See above for the perfect example.
As a cartoonist of the early 21st century I am the last of the Mohicans, a direct heir of the first known artists: the Paleolithic people whose cave paintings of hunters were discovered by a French boy who tumbled through a hole in the ground in Nazi-occupied France. Drawing for a living under late capitalism is a challenge. Selling political drawings in an era when humor and satire have all but vanished from popular culture is even harder. (Charles Schulz, Rudy Ray Moore, Carol Burnett, Flip Wilson, Dave Barry, Art Buchwald, “Weird Al” Yankovic: None would find work if they were starting out today.) When I began drawing editorial cartoons for syndication three decades ago, there were hundreds of us. Today there are an even dozen. I am 59 years old and I am one of the younger ones.
The cruel gods of artificial intelligence have targeted me and my kind for termination. AI-based text-to-image generators are the latest technological leap that exploitative entrepreneurs are using to make a mockery of copyright and trademark, the fundamental legal protections of intellectual property in the United States. From a user standpoint, the interface is simple. You go to a website and enter some terms, say: “Abraham Lincoln painted by Picasso.” A few seconds later, if the data set is big enough and the algorithms smart enough, out pops a picture representing your request. It’s not exactly cool. But it’s interesting. …
Unless Congress acts quickly and decisively, creative people in every field you can think of will be unable to distinguish their work from computer-generated knockoffs, radically curtailing their ability to command payment for their labor — and to lift the human spirit.
Here via Twitter thread is his 2022 year-ending “complete statement about AI text-to-image generators”:
From the great Bryan Garner:
You can buy the new, 5th edition of Garner’s Modern English Usage here.