The Berkeley Innovation Group

We transform organizations using an experiential learning approach, infusing design-thinking capabilities throughout their culture.

Learn more about our online design-thinking course here: big-splash.theberkeleyinnovationgroup.com/brochure…

Our clients engage us because they say we make them feel “comfortable” with innovation. They describe our co-creation process as “collaborative,” “refreshing,” and “engaging,” while calling their outcomes “eye-opening,” “culture-changing,” and “an outsized return on investment.” Contact us to learn more!


The Berkeley Innovation Group

There’s a lot of discussion around AI these days, some of it optimistic, some of it alarmist.

In practice, most organizations and professionals are somewhere in between: curious, eager, but uncertain about how to integrate it effectively into their work.

The challenge isn’t technical skill. It’s understanding how AI can support real decisions, workflows, and creativity without undermining human judgment. I’ve observed teams spending hours experimenting with tools, only to end up more confused than when they started.

The real opportunity lies in practical application: identifying where AI can genuinely add value, testing it thoughtfully, and building confidence through repeated, guided experience.

AI is a partner. But to leverage it well, we need to be deliberate, reflective, and willing to engage with it realistically.

www.biginnovates.com

23 hours ago

The Berkeley Innovation Group

Leaders are often the last to pause.

We talk about vision, strategy, and execution, but rarely about energy, clarity, and presence.

Here’s the hard truth I’ve seen repeatedly:

» Skipping downtime does not make you productive; it makes you reactive.
» Endless availability does not signal commitment; it signals a lack of boundaries.
» Burning the midnight oil might get tasks done, but it erodes judgment.

Your team notices. Not in words, but in the subtle cues of fatigue, short tempers, and blurred priorities.

Leadership sets the rhythm. If you are running on empty, your organization feels it before anyone says a word.

Work-life balance is not about rigid schedules.
It’s about intentional pauses that protect your clarity, energy, and decision-making.

Taking time to think without interruptions sharpens strategy.
Walking away from the inbox sparks creativity.
Saying “no” creates space for what actually matters.

The most effective leaders I know don’t work more.
They work better.
They build systems that survive when they step back.

Your next leadership move might not be another meeting or report.
It might be protecting time for yourself and modeling it for your team.

www.biginnovates.com

2 days ago

The Berkeley Innovation Group

Every week there is a new headline about AI in education.

AI tutors outperforming students.
AI grading essays at scale.
AI “personalizing” learning paths.

The coverage makes it sound like classrooms are on the verge of automation.
What rarely gets discussed is what is actually happening on the ground.

Here is the biggest misconception I see right now:
The challenge in AI and education is not student misuse.
It is adult uncertainty.

Schools are reacting to tools faster than they are clarifying intent.
Teachers are told to “use AI responsibly” without a shared definition of responsibility.
Students are warned about cheating while watching adults quietly experiment without guidance.

Leaders invest in platforms before answering a basic question:
“𝘞𝘩𝘢𝘵 𝘬𝘪𝘯𝘥 𝘰𝘧 𝘵𝘩𝘪𝘯𝘬𝘪𝘯𝘨 𝘥𝘰 𝘸𝘦 𝘸𝘢𝘯𝘵 𝘵𝘰 𝘱𝘳𝘰𝘵𝘦𝘤𝘵?”

AI exposes a truth many systems were already avoiding:
We never fully aligned on what learning looks like when answers are abundant.
The real work sits upstream from policy and tools.

It looks like:

▪ Deciding which cognitive skills matter more because AI exists
▪ Teaching judgment, not compliance
▪ Helping educators feel safe saying “I am learning this too”

AI does not remove the need for teachers.
It raises the bar for leadership.

Right now, the loudest debates focus on control.
The quieter issue is capability.
And capability grows through clarity, not fear.


www.biginnovates.com

3 days ago

The Berkeley Innovation Group

𝐖𝐞 𝐚𝐫𝐞 𝐨𝐛𝐬𝐞𝐬𝐬𝐞𝐝 𝐰𝐢𝐭𝐡 𝐀𝐈 𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲, 𝐛𝐮𝐭 𝐰𝐞 𝐚𝐫𝐞 𝐢𝐠𝐧𝐨𝐫𝐢𝐧𝐠 𝐀𝐈 𝐚𝐭𝐫𝐨𝐩𝐡𝐲.

The loudest conversations in businesses today are about data privacy and hallucinations. While those are valid technical risks, they aren’t the ones that will erode your competitive advantage over the next decade.

The quieter, more insidious risk is the death of the "First Pass."

When AI becomes the default starting point for every task, teams begin to bypass the most critical cognitive work:

» 𝗛𝘆𝗽𝗼𝘁𝗵𝗲𝘀𝗶𝘀 𝗙𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻: Does this solve the right problem?
» 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 𝗙𝗿𝗮𝗺𝗶𝗻𝗴: Are we asking the right questions, or just the easiest ones?
» 𝗖𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻𝘁 𝗠𝗮𝗽𝗽𝗶𝗻𝗴: What must be true before we start?

As a result: 𝗔 𝘀𝗹𝗼𝘄 𝗲𝗿𝗼𝘀𝗶𝗼𝗻 𝗼𝗳 𝗷𝘂𝗱𝗴𝗺𝗲𝗻𝘁.

This is especially dangerous for junior talent. If a practitioner never learns to navigate the "messy middle" of a blank page, they never develop the intuition required to lead. We aren't just losing drafts; we’re losing the mental models that build experts.


Strong AI systems require Intentional Friction.
Ethical AI use isn't just about compliance or "not leaking data."
It’s about preserving human agency inside the workflow.

This three-step protocol keeps judgment sharp (a rough code sketch follows below):

1️⃣ 𝗗𝗿𝗮𝗳𝘁 𝗳𝗶𝗿𝘀𝘁, 𝘁𝗵𝗲𝗻 𝗔𝗜: Anchor your own perspective before the algorithm "smooths" it out.

2️⃣ 𝗗𝗲𝗰𝗶𝗱𝗲 𝗳𝗶𝗿𝘀𝘁, 𝘁𝗵𝗲𝗻 𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗲: Form a conviction, then use AI to stress-test your logic.

3️⃣ 𝗧𝗵𝗶𝗻𝗸 𝗮𝗹𝗼𝗻𝗲, 𝘁𝗵𝗲𝗻 𝘀𝘆𝗻𝘁𝗵𝗲𝘀𝗶𝘇𝗲: Protect your unique "spark" before merging it with the statistical average of an LLM.
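Here is one way that friction could look in a team’s tooling, as a minimal Python sketch. Everything in it is illustrative: `ask_model` is a hypothetical stand-in for whatever LLM call your team uses, and the guards simply encode steps 1 and 2 above.

```python
# A minimal sketch of "intentional friction" in an AI workflow.
# NOTE: ask_model is a hypothetical stand-in for whatever LLM API
# your team uses; only the guard logic is the point here.

def critique_with_ai(human_draft: str, ask_model) -> str:
    """Step 1 (Draft first, then AI): the model may only critique an
    existing human draft, never produce the first pass itself."""
    if not human_draft.strip():
        raise ValueError("Write your own first pass before invoking the model.")
    prompt = (
        "Critique this draft. Flag weak reasoning and gaps, "
        "but do not rewrite it:\n\n" + human_draft
    )
    return ask_model(prompt)

def stress_test_decision(decision: str, rationale: str, ask_model) -> str:
    """Step 2 (Decide first, then validate): the human states a conviction
    up front; the model is asked to argue against it, not to decide."""
    if not decision.strip() or not rationale.strip():
        raise ValueError("State your decision and rationale before validating.")
    prompt = (
        f"We have decided: {decision}\n"
        f"Our rationale: {rationale}\n"
        "List the strongest counterarguments and the evidence "
        "that would prove us wrong."
    )
    return ask_model(prompt)
```

The guards are deliberately trivial. The friction lives in the interface: the human contribution has to exist before the model sees anything.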

Remember:

Efficiency is a metric.
Judgment is an asset.
Don't sacrifice the latter for the former.

How is your team maintaining "intentional friction" in its AI workflows?

#JeffEyet #TheBerkeleyInnovationGroup #AI #HumanCenteredAI #Leadership #InnovationStrategy #FutureOfWork #CriticalThinking

1 week ago

The Berkeley Innovation Group

Many adults tell kids to “use AI responsibly” without being able to explain what that actually means.

Kids notice the gap immediately.

When guidance feels vague or performative, young people create their own rules.

Those rules usually optimize for speed, convenience, and getting the task done, not for thinking, learning, or integrity.

That is not a failure of kids.
That is a failure of adult leadership.

AI literacy is not tool mastery.
It is judgment under uncertainty.

It is knowing when to use assistance, when to struggle productively, when to question an output, and when accuracy, ethics, or originality matter more than efficiency.

That kind of judgment does not come from rules posted on a wall.
It comes from modeled behavior.

A real example:

A middle school student uses an AI tool to summarize a chapter for a history quiz. The summary is clean, confident, and wrong on two key facts. The student memorizes it anyway, fails the quiz, and shrugs: “The AI said it.”

No one ever showed them how to verify, question, or slow down. They were only told to “use it responsibly.”

Responsible use was never defined.
So speed became the default value.

The turning point starts when adults are willing to say:
“I am learning this too. Let’s figure it out together.”

That moment shifts the relationship.
It invites curiosity instead of compliance.
It creates space for judgment instead of shortcuts.
It builds trust instead of surveillance.

And trust shapes behavior far more effectively than any policy, filter, or rule ever will.

If we want young people to build healthy AI habits, adults have to model thoughtful use, not pretend mastery.

www.biginnovates.com

#JeffEyet #TheBerkeleyInnovationGroup #AI #ArtificialIntelligence #hcai

1 week ago

The Berkeley Innovation Group

Most learning breakthroughs come after confusion.

AI removes confusion fast.
That is both its strength and its risk.

When students bypass uncertainty too quickly, they miss the muscle-building part of learning: 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝘁𝗵𝗿𝗼𝘂𝗴𝗵 𝗮𝗺𝗯𝗶𝗴𝘂𝗶𝘁𝘆, 𝘁𝗲𝘀𝘁𝗶𝗻𝗴 𝗶𝗱𝗲𝗮𝘀, 𝗯𝗲𝗶𝗻𝗴 𝘄𝗿𝗼𝗻𝗴.

The question for education is no longer 𝗦𝗵𝗼𝘂𝗹𝗱 𝗸𝗶𝗱𝘀 𝘂𝘀𝗲 𝗔𝗜?
It is 𝗪𝗵𝗲𝗿𝗲 𝘀𝗵𝗼𝘂𝗹𝗱 𝗳𝗿𝗶𝗰𝘁𝗶𝗼𝗻 𝗿𝗲𝗺𝗮𝗶𝗻?

Remove all struggle, and you remove growth. That’s the design challenge educators and parents face today.

🟠 Let’s explore what “productive friction” looks like in your classroom or home.

DM us to start the conversation.

www.biginnovates.com

#JeffEyet #TheBerkeleyInnovationGroup #AI #ArtificialIntelligence #hcai

1 week ago (edited)

The Berkeley Innovation Group

I increasingly see teams using:
ChatGPT for thinking
Notion AI for documentation
Copilot for execution
A fourth tool for “experiments”

No shared norms.
No system of record.
No clarity on where decisions live.

The risk is not inefficiency.
It is fragmented truth.

When AI outputs differ across tools, teams quietly lose confidence in:
Data accuracy
Version control
Accountability

This is not a tooling problem.
It is a systems design problem.
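As a sketch of what a shared system of record could look like, here is a hypothetical decision-record structure in Python. The field names are assumptions for illustration, not a standard; the point is that every AI-assisted decision lands in one agreed-upon place.

```python
# A hypothetical decision record: one schema, one store, one answer to
# "where do decisions live?" Field names are illustrative only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    question: str                  # what was being decided
    decision: str                  # what the team chose
    owner: str                     # the human accountable for the outcome
    tools_used: list[str]          # e.g. ["ChatGPT", "Notion AI", "Copilot"]
    sources_consulted: list[str]   # links/IDs of AI outputs, not pasted text
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Usage: every record, whatever tool produced the analysis, is appended
# to the same log (a doc, a table, a repo).
log: list[DecisionRecord] = []
log.append(DecisionRecord(
    question="Which vendor for Q3 analytics?",
    decision="Vendor A, 12-month pilot",
    owner="ops-lead@example.com",
    tools_used=["ChatGPT", "Copilot"],
    sources_consulted=["doc://vendor-comparison-v3"],
))
```

Which fields matter will vary by team. What matters is that the schema, and the store behind it, are shared.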

🟠 If you’re exploring ways to align AI tools and maintain a clear system of record, feel free to start a conversation.

www.biginnovates.com

1 week ago

The Berkeley Innovation Group

“𝘐𝘯 𝘢 𝘸𝘰𝘳𝘭𝘥 𝘴𝘢𝘵𝘶𝘳𝘢𝘵𝘦𝘥 𝘸𝘪𝘵𝘩 𝘪𝘯𝘧𝘰𝘳𝘮𝘢𝘵𝘪𝘰𝘯, 𝘵𝘩𝘦 𝘳𝘢𝘳𝘦𝘴𝘵 𝘭𝘦𝘢𝘥𝘦𝘳𝘴𝘩𝘪𝘱 𝘲𝘶𝘢𝘭𝘪𝘵𝘺 𝘪𝘴 𝘵𝘩𝘦 𝘢𝘣𝘪𝘭𝘪𝘵𝘺 𝘵𝘰 𝘳𝘦𝘥𝘶𝘤𝘦 𝘤𝘰𝘮𝘱𝘭𝘦𝘹𝘪𝘵𝘺 𝘪𝘯𝘵𝘰 𝘢𝘤𝘵𝘪𝘰𝘯𝘢𝘣𝘭𝘦 𝘪𝘯𝘴𝘪𝘨𝘩𝘵.”

Clarity is not about oversimplification.
It is the disciplined distillation of competing priorities into coherent judgment.

Leaders who master clarity:

- Identify the signal amidst the noise
- Communicate with precision, leaving little ambiguity about intent
- Prioritize decisively, even when trade-offs are uncomfortable
- Align teams’ cognitive and operational energy toward meaningful outcomes

Focus is not control; it is the architecture of understanding that enables confident, autonomous action.

🟠 I work with leaders who seek to translate complex environments into clear, strategic action, building teams that operate with confidence and judgment.

www.biginnovates.com

3 weeks ago

The Berkeley Innovation Group

Most AI risk frameworks focus on models.
The real risk lives in decisions.

Organizations often ask:
“Is this AI accurate?”
“Is it compliant?”
“Is it secure?”

Those are necessary but insufficient.

The harder questions:
– Who owns the outcome when AI influences a decision?
– How do we detect slow, compounding errors?
– What happens when speed outpaces reflection?

AI risk is not just technical failure.
It’s organizational overconfidence.

The safest AI systems are embedded in cultures that:
– Encourage dissent
– Track decision lineage
– Revisit assumptions
– Slow down when stakes rise

If your organization is serious about AI governance beyond checklists, let’s design risk systems that match real-world complexity.

www.biginnovates.com

3 weeks ago

The Berkeley Innovation Group

“𝘓𝘦𝘢𝘥𝘦𝘳𝘴𝘩𝘪𝘱 𝘪𝘴 𝘮𝘦𝘢𝘴𝘶𝘳𝘦𝘥 𝘭𝘦𝘴𝘴 𝘣𝘺 𝘵𝘩𝘦 𝘢𝘣𝘴𝘦𝘯𝘤𝘦 𝘰𝘧 𝘧𝘢𝘪𝘭𝘶𝘳𝘦 𝘢𝘯𝘥 𝘮𝘰𝘳𝘦 𝘣𝘺 𝘵𝘩𝘦 𝘥𝘪𝘴𝘤𝘦𝘳𝘯𝘮𝘦𝘯𝘵 𝘵𝘰 𝘢𝘤𝘵 𝘥𝘦𝘤𝘪𝘴𝘪𝘷𝘦𝘭𝘺 𝘶𝘯𝘥𝘦𝘳 𝘶𝘯𝘤𝘦𝘳𝘵𝘢𝘪𝘯𝘵𝘺.”

Decisions that matter rarely present themselves clearly. The most consequential choices are ambiguous, complex, and uncomfortable.

True leaders cultivate:

✔ 𝗗𝗶𝘀𝗰𝗶𝗽𝗹𝗶𝗻𝗲𝗱 𝗰𝗼𝘂𝗿𝗮𝗴𝗲: acting when stakes are high, guided by principle rather than impulse

✔ 𝗢𝘄𝗻𝗲𝗿𝘀𝗵𝗶𝗽 𝗼𝗳 𝗼𝘂𝘁𝗰𝗼𝗺𝗲𝘀: acknowledging responsibility when decisions do not yield success

✔ 𝗥𝗲𝗳𝗹𝗲𝗰𝘁𝗶𝘃𝗲 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴: turning missteps into frameworks for better judgment

Courage, in leadership, is not flamboyant bravado.
It is intentional action informed by insight and accountability.

🟠 If you are navigating high-stakes ambiguity and want to refine a leadership practice grounded in judgment and disciplined risk-taking, I work with leaders to convert complexity into clarity.

➕ Follow Jeff Eyet 🔑✨ for practical strategies on AI and business growth.
🔗 Visit www.biginnovates.com to learn more.

3 weeks ago