Introducing HBS: The Higgins-Berger Scale of Generative AI Ethics
May 7, 2026
Doug Berger: [00:00:00] Welcome to the latest installment of Brand of Brothers. I'm Doug.
Johnny Diggz: And I'm Johnny. Today we're introducing the Higgins-Berger Scale of AI Ethics.
Doug Berger: All right, let's get to it. And we're back.

Johnny Diggz: So last time, Doug, we were talking about ethics and generative AI use, specifically in the creative agency space. Um, it really applies to anyone: a business owner, a CMO, anybody who works in a marketing department, or an agency. Even boards of directors, right? So-
Doug Berger: I, I mean, I think that it has numerous applications, but we primarily created it with agencies in mind and graphic designers in mind, that have questions about when it is ethically appropriate to [00:01:00] utilize generative AI.
Johnny Diggz: So we built a scale. Yeah, I like that you named it the Higgins-Berger Scale, uh, the HBS.
Doug Berger: It is the HBS. The HBS. And why did we name it that?
Johnny Diggz: Uh. Um, you named it after the God particle.
Doug Berger: That's right, the Higgs boson.

Johnny Diggz: The Higgs boson. Yes. There's actually an AI startup called Higgs Field, which is, uh, where the Higgs particles live. I don't even know what the Higgs field is, but it has something to do with the Higgs boson.
But, uh, that's for a different podcast.
Doug Berger: Yes, it is.
Johnny Diggz: Um- And
Doug Berger: probably not Brand of Brothers.
Johnny Diggz: Not Brand of Brothers. Um, so we put together this framework. I took the first stab, and we've gone back and forth. We even roped in a buddy of ours who works in a totally different field and got some input from him.

Mm-hmm. And after iterating on it a few times, we [00:02:00] did the first draft, and then we realized that we needed to address some other things. We talked about what to include in our scale, how-
Doug Berger: And what to exclude.
Johnny Diggz: Oh, what to exclude. For example, um, we made a conscious decision to not include, uh, like environmental impacts, right?
Right.
Doug Berger: Yeah. Listen, we know that there will always be ethical conundrums. Um, if we let certain ethics prevent us from progress, then... And I should be specific: obviously we don't want the environment to be destroyed, but we also understand that industry tends to degrade the environment, and then there are course corrections.

And hopefully we're witnessing that as we move forward. Plus, the computing power becomes more economical as time goes [00:03:00] on, but at the scale at which it's working, there is significant environmental impact.
Johnny Diggz: Right.
Doug Berger: Um, so-

Johnny Diggz: And just, not to gloss over that, because, you know, I get a lot of questions, because other people see me as a computer guy.
Um-
Doug Berger: Is it the glasses?
Johnny Diggz: It's probably the glasses and all of the posts I make about computer stuff. Yeah. So people ask about this environmental impact, which has made a lot of noise and news, both of those. Yes. But okay, there are two separate things.

There's the training, which does require a tremendous amount of power, training these models. Mm-hmm. But when you think of it in terms of the equivalent power of what it costs Amazon to run Amazon's website, or all of the hosting that [00:04:00] Microsoft Azure does, it is a piece of the overall IT infrastructure that runs the world these days.
Doug Berger: And then we have to delve into parts like, well, Amazon, for example, is investing in planting forests, right? As a way to offset their carbon footprint. So does that make it ethically okay to do what they're doing? Should we bother going down this rabbit hole?

Yeah. I don't think so. I think what's important are really the five categories that we landed on, with, in my personal opinion, the human in the loop being the most important. Sure. However, there's intent and so on that, you know, can get nefarious pretty quickly. But people who are acting maliciously, as I mentioned in our previous episode, they [00:05:00] don't care about the ethics.
Johnny Diggz: Right, right. So while we do have a red zone in our scale, where red is defined as, you know, ethically dubious, likely a scam, you're breaking laws and hurting people intentionally, right?
Doug Berger: Yeah.
Johnny Diggz: So, um, but those people aren't gonna use this scale.
Doug Berger: So really quickly, i- if we can dive into the five categories of the scale-
Johnny Diggz: Okay
Doug Berger: and the five potential outcomes. So, do you want me to dive in? Yeah, let's hear it. Okay. So I created a ridiculous acronym to help us remember.
Johnny Diggz: Oh, I'm ready to hear this one. What is it?
Doug Berger: Okay. So, all right. Forgive me, I'm putting myself on the spot. It is "DUG and DIGS Intend To Party."

Johnny Diggz: DUG and DIGS [00:06:00] Intend To Party. So it's D-D-I-T-P?
Doug Berger: Yeah.
Johnny Diggz: Okay.
Doug Berger: So the first D is data use.

Johnny Diggz: Okay.

Doug Berger: Data use and privacy. And so this is about the ethical sourcing of data, and also, you know, not violating people's privacy.
It delves into that, right? Mm-hmm. So then the second D is displacement, and it's specifically about displacement of humans. Are we replacing a person with AI, or are we augmenting their abilities with AI? Obviously, if we're augmenting their abilities with AI, that's gonna be a lower score, which is [00:07:00] good, and if it's intentionally displacing a person, then that's bad.
So speaking of intentional, that takes us to intent.
Johnny Diggz: Intent to party.
Doug Berger: And so with regard to intent, are we intending to inform and educate, or are we intending to mislead and dupe, right?
Johnny Diggz: Um, intent was one that we added, I think, after we did the initial four, if I remember the evolution-
Doug Berger: You're asking me to remember something? I don't even know what I had for lunch today.

Johnny Diggz: I think intent really was something... 'Cause I think it's important, [00:08:00] and people don't really think about it too much. You know, are you intentionally deceiving people? Are you intentionally trying to do good?
Like, if you're using AI to generate a flyer for your yard sale, okay?
Doug Berger: Mm-hmm.
Johnny Diggz: And maybe you're using an AI that wasn't trained on ethically sourced data. Maybe it prints it out in fonts that aren't licensed. Maybe it uses an image that looks like your neighbor, you know?

But your intent is just to advertise your garage sale. Right. So that should factor in. It wasn't like you were out there trying to deceive people, or- you didn't intend to include your neighbor's picture.
Doug Berger: You know, it's not like you're... Let's say you're just having a yard sale, selling old clothes and knick-knacks. Right. It's not [00:09:00] like you're putting diamonds and jewels and family heirlooms on the market.

Right. So I think it's a genuine effort. Yes, it's displacing a person. Is it bad? It's not great, but you're making a neighborhood flyer. Does that really require a professional's involvement?
Johnny Diggz: Right. And also there's a level of accountability there too, right?

If you can just say, "AI made this," it removes your accountability.
Doug Berger: Well, now you're getting into transparency, right? Sure. So there are the two Ds, we already talked about those, and then the I, and now we're on to the T, which is transparency. And so that's about disclosure.

And, you know, it really comes down to whether or not [00:10:00] you're telling people that you used AI to create what you're doing, which in many instances is not really important. There are people who believe that you need to hammer it home all the time, anytime you're using AI.
Johnny Diggz: Radical disclosure.
Doug Berger: Right.
Johnny Diggz: Yeah.
Doug Berger: Um, but the reality is that you don't have to tell people what tools you used to build the building.
Johnny Diggz: And I would imagine that, you know, you don't disclose that when you make a design for a client. You don't say, "We used Illustrator," or, "We used..." Like, you don't go through the process.

Excuse me. You typically just hand them the end result. Right. And so if AI is a tool in your-
Doug Berger: It's just like using InDesign or Illustrator or- Right ... Photoshop.
Johnny Diggz: And [00:11:00] many times it is actually using Photoshop-
Doug Berger: Right ...
Johnny Diggz: because the AI tool is inside of Photoshop.
Doug Berger: Right. So that takes us through transparency, and so then the, the last, uh, category is potential for harm.
So before we go into potential for harm, let's recap really quickly. So, DUG and DIGS Intend To Party, so that is data use, right?
Johnny Diggz: Yep.
Doug Berger: Um, and that is... help me out here. We're gonna skip that one. Intent, transparency- Displacement ... and displacement. So that then takes us to potential for harm.
Johnny Diggz: And why would somebody- use AI nefariously, Doug?
Doug Berger: So it's ordinarily to [00:12:00] mislead, misrepresent, and then to take money. And, you know, we've seen it personally, we've seen it anecdotally, where it is possible to pretend to be something that you are not, and you can utilize AI to do this incredibly well.
Johnny Diggz: So I found an interesting use case that I hadn't considered, because there's a lot of talk about deepfakes, and this falls clearly into the potential for harm.
Doug Berger: Mm-hmm.
Johnny Diggz: The threat that I had understood was that someone's gonna use a deepfake to pretend that they're, you know, the president, and something's gonna go viral where the president said something that he didn't actually say. And it could cause conflict. We've seen cases of this happening. We've even [00:13:00] seen our president share some of this AI-generated stuff. But that is one type of potential threat with AI.
Another one that I saw recently, that I didn't see coming, was taking YouTube personalities. These are people who, you know, post regularly, and sometimes they're political podcasters and talking heads, influencers, if you will. Yep. And nefarious people are using AI to regenerate content that looks like them, sounds like them, is talking about the same things as them. And right now, these fake AI personalities aren't really conflicting with what the podcaster says. They're actually mirroring it. But what they're doing [00:14:00] is diluting that person's ability to make money and be trusted, because all these fake versions of them are popping up.

And it would be very easy for whoever is doing this to slightly switch the messaging and tweak it in a different direction that either alienates their audience or attracts a new kind of audience that they don't want.
Doug Berger: I mean, I think it's safe to say that that's already coming.
Johnny Diggz: Yeah.
Doug Berger: Right?
Johnny Diggz: It's happening.
Doug Berger: It's happening.
Johnny Diggz: Yeah,
Doug Berger: yeah, yeah. And so it's a complicated area. Obviously, you know, providers that make this possible should probably begin to figure out how to prevent these situations from happening. Yeah. We already experienced-
Johnny Diggz: But you can install a lot of these models locally, and there's- Sure ... there's nothing, you know-

Doug Berger: The guardrails are gone.
Johnny Diggz: Yeah, yeah.
Doug Berger: Yeah. It's crazy. So [00:15:00] at the end of the day, we have this Higgins-Berger Scale, which helps us ascertain where we are on an ethical level, and that scale is on a 0 to 10-plus level, right? So 0 is blue. Blue means that everything that you're doing from an ethical perspective is flawless.

Right? It is immaculate.
Johnny Diggz: Beyond reproach.
Doug Berger: It's true. And when you use the HBS utility, you'll also notice that a number of the drop-downs have these extra components that offset from your responses.
Johnny Diggz: Oh, like a modifier?
Doug Berger: Like a modifier. Exactly a modifier.
Johnny Diggz: This was also an evolution. [00:16:00] Yep. We realized that there were some use cases... Well, in this particular case, you know, 'cause we had to put this thing through its paces and come up with use cases.
Doug Berger: Yeah, a great modifier would be: let's say you're creating an animation utilizing an AI platform, and you upload the first frame and the last frame, and it interpolates the rest and creates that animation.

But you don't know where that platform sourced that information, though you own both frames, the before and after [00:17:00] frames.
Johnny Diggz: Yeah.
Doug Berger: So that, in and of itself, because you own it or you licensed it, that's a modifier. Sure. So all of a sudden you go from a gray zone of ethical use, because you don't know how-
Johnny Diggz: How Kling or one of those tools, you know-
Doug Berger: Right.
Well, yeah, let's be careful about who we name. Yeah,
Johnny Diggz: yeah. But I do wanna name them, because, first of all, I don't know how Kling gets their data. Or
Doug Berger: Sora, right?
Johnny Diggz: Or Sora or Gemini, I mean- Yeah. The only one that I'm pretty sure of is Adobe.
Doug Berger: Right.
Johnny Diggz: Um, but with the rest of 'em, there's a little bit of a question mark about where they got the images to train their image models, or all the text. I mean, we talked about the lawsuits between The New York Times and OpenAI. Yep. And so those modifiers help mitigate what could be...

You know, really it's recognizing that this is a scale, and that there are little steps along the way that can change it. Oh, well, if you used an original [00:18:00] image that you do have a license to, or a photograph that you took, as the starting point-
Doug Berger: Yeah
Johnny Diggz: you know, it, the, you, that should count for something.
Doug Berger: And the same is true with mitigating displacement, human displacement, right? So, did you hire a person in this process? That can automatically bring your score down, and the goal, of course, is to be as close to zero as possible.

So that's the blue zone, a zero, when you're immaculate. Then there's the typical operating area, which is the green zone. The green zone is completely acceptable. It means that, you know, you're doing your best to make sure that you're not displacing a person. You're doing your best to make sure that you're using clean data.

You're making sure that the intent is clear and reasonable. And speaking of being clear, you're transparent [00:19:00] about your AI use where necessary, right? Like if you're educating people, and you're generating information from OpenAI, for example, and you're not disclosing that there could be flawed information, that's problematic, right? And then there's the human capital component, and then of course the actual party.

So when it comes to the scale, we wanna make sure that we're in that green zone. However, there are certain answers that begin to tilt the scale in a way where it's important for the individual using the tool to go, "Okay, based on my responses, we need to make these adjustments in order to be dialed back into the green [00:20:00] zone," which is really what it's all about.

Any moment in time that a designer or a creative utilizes the tool and goes, "Oh, crap, I just realized that in order to solve this problem, I need to make sure that the person who was doing the execution is now re-skilled to do the prompt engineering and learn how to become a creative director, for example, as opposed to the designer that they are today."
Johnny Diggz: Yeah, in audio too. If you're a musician and you're using these tools to make audio, like Suno or one of those, you're becoming more of a producer. Right. Right? You know, you may still be playing instruments to train the AI for the song you want or whatever, but you get back a fully [00:21:00] arranged composition.
Doug Berger: Right.
Johnny Diggz: And then it's up to you to deconstruct that, slice it up, and make it more your own. But someone who, for example, uses a tool like Suno or Udio, generates a song, and posts it on Spotify with no human interaction, that's literally the definition of slop.
Doug Berger: Yep. Right? That's your AI slop. Exactly, yeah. And the prolifera- ah, that's easy for me to say. The proliferation of AI slop is so incredible that, frankly, it's sharpening humans' ability to pinpoint when something is AI. You may not consciously be aware of it, but you are subconsciously aware that you're looking at garbage, or listening to garbage. And, you know, there is [00:22:00] some entertaining stuff out there. Sure. Look, I got roped in the other day when I saw something that I found rather entertaining, and then, when the video looped, I started noticing these strange aspects that were happening.

Mm. And I'm like, that was some entertaining slop. But I also took the initiative to make sure that my algorithm knew that I don't want that.
Johnny Diggz: Sure, sure. And I think you're right, a lot of humans are getting better at recognizing the difference. It's not necessarily conscious, but, you know, the telltale signs that were there a year ago, like multiple fingers... All right, the one that comes to mind: one of the early AI viral videos was Will Smith eating some spaghetti.
Doug Berger: Oh,
Johnny Diggz: yeah. [00:23:00] Right? So, um, they're getting better.
Doug Berger: Sure.
Johnny Diggz: And it's probably harder to make something like that Will Smith video today, because the tools don't want to make it look like that. They've added more guardrails. Um, you know, there's been a lot of controversy with Grok recently about being able to, you know, basically deepfake naked pictures of people.
Doug Berger: Mm-hmm.
Johnny Diggz: Um, you know, uploading clothed people and taking all their clothes off. And-
Doug Berger: where did it get that data?
Johnny Diggz: Right.

Doug Berger: You know? So I think it's pretty clear that, on the ethics scale, there are AI solutions that are more favorable than others. Mm-hmm. One of the things that I love about OpenAI specifically is that I can actually have it iterate from my own content [00:24:00] that I've created. And so it'll focus in on the media that I've originated, repurposing that instead of going out and trying to infuse it with outside resources that don't necessarily apply.
Johnny Diggz: Um, so we were at, uh, the yellow zone.
Doug Berger: Yeah, and then there's the orange zone and the red zone. Yeah, right. Listen, when it comes to the scale, it is a numeric scale, which really simplifies it because you're able to take something that's-

Johnny Diggz: Arguably oversimplifies it, but yeah.

Doug Berger: Well, it's taking a concept and quantifying it. Right?

Johnny Diggz: Attempting to quantify it, yeah.

Doug Berger: And I think it does a really great job, because before we quantify it, we qualify it, right? Right. We create these different criteria [00:25:00] baked into each one that's then able to assess a numeric value.
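(Editor's note: the qualify-then-quantify approach Doug describes can be sketched in code. The following is a hypothetical illustration only, not the actual HBS utility: the category ratings, modifier values, and zone thresholds are all invented for the example; only the five category names, the modifier concept, and the blue/green/yellow/orange/red zones come from the episode.)

```python
# Hypothetical sketch of an HBS-style score. NOT the actual HBS utility:
# all numeric values and thresholds below are invented for illustration.

# The five categories: "DUG and DIGS Intend To Party" (D-D-I-T-P).
CATEGORIES = ["data_use", "displacement", "intent", "transparency", "potential_for_harm"]

# Example modifiers mentioned in the episode (owning/licensing your source
# frames, hiring a person in the process). The -1 offsets are assumptions.
MODIFIERS = {"owns_source_material": -1, "hired_a_human": -1}

def hbs_score(ratings: dict, modifiers: list) -> int:
    """Sum the five category ratings (0 = best, 2 = worst, so the base runs
    0-10), then apply modifier offsets, clamping at zero."""
    base = sum(ratings[c] for c in CATEGORIES)
    offset = sum(MODIFIERS[m] for m in modifiers)
    return max(0, base + offset)

def zone(score: int) -> str:
    """Map a score onto the color zones from the episode (thresholds invented)."""
    if score == 0:
        return "blue"    # immaculate, beyond reproach
    if score <= 3:
        return "green"   # the typical, completely acceptable operating area
    if score <= 5:
        return "yellow"
    if score <= 7:
        return "orange"
    return "red"         # ethically dubious, likely a scam

# Example: unknown training data (data_use = 1) and some displacement,
# mitigated by owning/licensing the source frames.
ratings = {"data_use": 1, "displacement": 1, "intent": 0,
           "transparency": 0, "potential_for_harm": 0}
print(hbs_score(ratings, ["owns_source_material"]))  # 1, the green zone
```

The point of the sketch is the shape of the system: qualitative criteria become per-category numbers, modifiers offset the total, and the total maps to a zone.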
Johnny Diggz: So I think we should get into, um, the tool, a little bit deeper dive, and maybe save that for next time.
Doug Berger: Sure.
Johnny Diggz: And, um, and also I'd love to possibly keep this series going, talking about each of the, you know, the DUG and DIGS Intend To Party, uh-

Doug Berger: Categories. The HBS categories.

Johnny Diggz: Yes, the HBS categories.
And really going deeper on each of them. You know, one of the reasons that we did end up publishing this is because we want feedback, and we want challenges, and we want people to give us pushback: "You didn't think about this," "You didn't think about that." Because I envision this as something that can evolve, something that, potentially, you know, [00:26:00] business owners and boards of directors and-
ethics advisors, you know.
Doug Berger: And we'll make sure that in the show notes there is a link to the article about the paper, then a link to the paper itself, and then, of course, a link to the utility. So all of those will be in the show notes when we post this.
Johnny Diggz: Sounds good. Until next time, stay ethical.
Doug Berger: Thank you for tuning in to Brand of Brothers. Big thank you to our presenting sponsor, Remixed, the branding agency, along with production assistance from Johnny Diggz, Simon Jacobsohn, and me, Doug Berger. We can't forget music by PRO. Speaking of not forgetting, remember to do that like and subscribe thing, find us at BrandShowLive.com, and follow us on the socials at BrandShowLive.
Welcome back to Brand of Brothers with Doug Berger and Johnny Diggz, where branding, marketing, business, creativity, and emerging technology get unpacked with clarity, humor, and zero fluff. In this episode, we introduce the Higgins-Berger Scale of Generative AI Ethics (HBS), a practical framework designed to help agencies, marketers, creatives, business owners, and decision makers evaluate the ethical use of generative AI in real-world creative workflows.
As AI tools like ChatGPT, Midjourney, Sora, Adobe Firefly, Suno, Gemini, and other generative AI platforms continue reshaping the creative industry, businesses are struggling to answer one important question:
Just because AI can do something, does that mean it should?
🔥 In this episode:
• What the Higgins-Berger Scale (HBS) is and why it was created
• The five core categories used to evaluate ethical AI usage
• Why AI ethics should be measured on a spectrum instead of treated as binary
• The difference between ethical AI use and unethical AI exploitation
• Data sourcing, privacy concerns, and training model transparency
• Human displacement vs. human augmentation in AI-assisted workflows
• Why intent matters when evaluating generative AI content
• AI disclosure, transparency, and client communication
• The growing problem of AI-generated “slop” content
• Deepfakes, misinformation, synthetic influencers, and digital impersonation
• Ethical concerns surrounding image generation, voice cloning, and AI video
• How AI-generated content can dilute trust and authenticity online
• The role of prompt engineering versus true creative direction
• Why copyright, licensing, and ownership remain unresolved challenges
• The evolving role of designers, musicians, marketers, and creative professionals
• Why ethical frameworks matter even when bad actors ignore them
• How modifiers inside the HBS scoring system work
• The difference between malicious AI use and imperfect AI use
• Why the future of AI ethics requires ongoing evolution and public feedback
💡 Whether you are a marketing agency owner, graphic designer, content creator, entrepreneur, creative director, CMO, or simply trying to understand responsible AI adoption, this episode provides a practical framework for evaluating AI ethics in business, branding, marketing, and creative work.
🎧 Listen now to learn how to:
• Evaluate generative AI tools beyond hype and marketing claims
• Use AI responsibly in branding, marketing, and creative production
• Identify ethical risks in AI-generated content and automation
• Balance efficiency with originality, transparency, and creative integrity
• Build AI-assisted workflows without sacrificing human creativity
• Better understand the future of ethical AI in creative industries
🧠 Explore the Higgins-Berger Scale:
• Interactive HBS Utility: https://r3mx.com/hbs-interactive-utility/
• Full Higgins-Berger Scale Article: https://r3mx.com/the-higgins-berger-scale/
Presented by Remixed, the full-service branding agency helping companies craft, launch, and grow powerful brands.
🎶 Music by PRO
📍 Visit us at BrandShowLive.com
📱 Follow along at @BrandShowLive on all socials
👍 Like, subscribe, and share to support the show and keep the conversations going