
Don’t Listen to a Vendor about AI, Do the DevOps Redo

1 Comment
John Willis, one of the pioneers of the DevOps movement, talking about a DevOps redo

Don’t listen to a vendor about AI, says John Willis, a well-known technologist and author, in the latest episode of The New Stack Makers.

“They’re going to tell you to buy the one-size-fits-all,” Willis said. “It’s like going back 30 to 40 years ago and saying, ‘Oh, don’t learn how to code Java, you’re not going to need it — here, buy this product.’”

Willis said DevOps is an example of how human capital, not products, solves problems. The C-level crowd needs to learn how to manage the AI beast and then decide what to buy and not buy. They need a DevOps redo.

Willis, one of the pioneers of the DevOps movement, said now is the time for a “DevOps redo.” It’s time to experiment and collaborate as companies did at the beginning of the DevOps movement.

“If you look at the patterns of DevOps, like the ones who invested early, some of the phenomenal banks that came out unbelievably successful by using a DevOps methodology,” Willis said. “They invested very early in the human capital. They said let’s get everybody on the same page, let’s run internal DevOps days.”

Just don’t let it sort of happen on its own and start buying products, Willis said. The minute you start buying products is the minute you enter a minefield of startups that will be gone soon enough or will get bought up by large companies.

Instead, people will need to learn how to manage their data using techniques such as retrieval augmentation, which supplements a large language model with external context retrieved, for example, from a vector database.
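
To make the retrieval-augmentation idea concrete, here is a minimal, self-contained sketch of the pattern: embed documents, retrieve the closest ones for a question, and prepend them to the prompt. The embed() function is a deliberately toy stand-in for a real embedding model, and the documents, retrieve() and build_prompt() helpers are hypothetical illustrations rather than any vendor's API.

```python
# Minimal sketch of retrieval augmentation: embed documents, retrieve the
# closest ones for a question, and prepend them to the LLM prompt.
# embed() is a toy stand-in; a real system would use a trained embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1          # character-frequency "embedding"
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Our bank's settlement service reconciles trades overnight.",
    "DevOps days are internal events for sharing practices.",
]
index = np.stack([embed(d) for d in documents])   # tiny in-memory "vector database"

def retrieve(question: str, k: int = 1) -> list[str]:
    scores = index @ embed(question)              # cosine similarity (vectors are unit length)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When are trades reconciled?"))
```

The point is that the model only ever sees the retrieved context you choose to hand it, which is exactly where the governance questions below come in.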

It’s a cleansing process, Willis said. Organizations will need that cleansing to build robust data pipelines that keep LLMs from hallucinating and from surfacing code or data the company would never want an LLM to hand to someone. We’re talking about the code that makes a bank billions in revenue, or the contract of a superstar athlete.

When done right, with LLMs at scale and some form of retrieval augmentation, coding gets fun again for a company of any scale.

Getting it right means adding some governance to the retrieval augmentation model. “You know, some structuring: can you do content moderation? Are you red-teaming the data? So these are the things I think will get really interesting that you’re not going to hear vendors tell you about necessarily; vendors are going to say, ‘We’ll just pop your product in our vector database.’”
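
To illustrate the kind of governance Willis is pointing at, here is a small hedged sketch of one guardrail: screening retrieved chunks against blocked patterns before they ever reach the model. The BLOCKED_PATTERNS list and moderate() helper are illustrative placeholders, not a description of any vendor's moderation feature.

```python
# Sketch of a pre-prompt moderation step: drop retrieved chunks that match
# patterns the organization never wants an LLM to see. Patterns are illustrative.
import re

BLOCKED_PATTERNS = [
    r"BEGIN RSA PRIVATE KEY",        # credentials accidentally indexed
    r"\bconfidential\b",             # documents labelled confidential
    r"\b\d{3}-\d{2}-\d{4}\b",        # US SSN-like identifiers
]

def moderate(chunks: list[str]) -> list[str]:
    """Return only the chunks that pass the blocklist check."""
    safe = []
    for chunk in chunks:
        if any(re.search(p, chunk, re.IGNORECASE) for p in BLOCKED_PATTERNS):
            continue                 # a real pipeline would log this and route it to review
        safe.append(chunk)
    return safe

print(moderate(["Quarterly plan (confidential)", "Public API docs"]))
```

Red-teaming the data is then the exercise of deliberately trying to get material past checks like this one.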

The post Don’t Listen to a Vendor about AI, Do the DevOps Redo appeared first on The New Stack.

Comment from jonwreed (7 days ago): looks interesting

Video of my SaaStr 2023 Presentation: The Strategic Use and Abuse of SaaS Metrics

1 Comment

You're currently a free subscriber. Upgrade your subscription to get access to the rest of this post and other paid-subscriber-only content.

The post Video of my SaaStr 2023 Presentation: The Strategic Use and Abuse of SaaS Metrics appeared first on Kellblog.

Comment from jonwreed (7 days ago): this should be interesting

Appearance on Data Radicals: Frameworks and the Art of Simplification

1 Comment

This is a quick post to highlight my recent appearance on the Data Radicals podcast (Apple, Spotify), hosted by Alation founder and CEO, Satyen Sangani. I’ve worked with Alation for a long time in varied capacities — e.g., as an angel investor, advisor, director, interim executive, skit writer, and probably a few other ways I can’t remember. This is a company I know well. They’re in a space I’m passionate about — and one that I might argue is a logical second generation of the semantic-layer-based BI market where I spent nearly ten years as CMO of Business Objects.

Satyen is a founder for whom I have a ton of respect, not only because of what he’s created, but because of the emphasis on culture and values reflected in how he did it. Satyen also appreciates a good intellectual sparring match when making big decisions — something many founders pretend to enjoy, few actually do, and fewer still seek out.

This is an episode like no other I’ve done because of that history and because of the selection of topics that Satyen chose to cover as a result. This is not your standard Kellblog “do CAC on a cash basis,” “use pipeline expected value as a triangulation forecast,” or “align marketing with sales” podcast episode. Make no mistake, I love those too — but this is just notably different content from most of my other appearances.

Here, we talk about:

  • The history and evolution of the database and tools market
  • The modern data stack
  • Intelligent operational applications vs. analytic applications
  • Why I feel that data can often end up an abstraction contest (and what to do about that)
  • Why I think that in confusing markets the best mapmaker wins
  • Who benefits from confusion in markets — and who doesn’t
  • Frameworks, simplification, and reductionism
  • Strategy and distilling the essence of a problem
  • Layering marketing messaging using ternary trees
  • The people who most influenced my thinking and career
  • The evolution of the data intelligence category and its roots in data governance and data catalogs
  • How tech markets are like boxing matches — you win a round and your prize is to earn the chance to fight in the next one
  • Data culture as an ultimate benefit and data intelligence as a software category

I hope you can listen to the episode, also available on Apple podcasts and Spotify. Thanks to Satyen for having me and I wish Alation continuing fair winds and following seas.

The post Appearance on Data Radicals: Frameworks and the Art of Simplification appeared first on Kellblog.

Comment from jonwreed (44 days ago): good context here for AI pursuits re: data foundations

Video of the Balderton SaaS Metrics That Matter Webinar

1 Comment

Just a quick post to share the recording of the webinar we did yesterday, where my Balderton Capital colleague, Michael Lavner, and I discussed the SaaS Metrics That Matter.  You can find the slides here.

The video is available here.

Thanks to everyone who attended and for the great questions that kept it interactive.

 

The post Video of the Balderton SaaS Metrics That Matter Webinar appeared first on Kellblog.

Comment from jonwreed (92 days ago): no one knows this aspect of SaaS startups better than Dave...

The Dangers of Stochastic Parrots Like ChatGPT w/ Emily M. Bender

1 Comment

Paris Marx is joined by Emily M. Bender to discuss what it means to say that ChatGPT is a “stochastic parrot,” why Elon Musk is calling to pause AI development, and how the tech industry uses language to trick us into buying its narratives about technology.
 
 Emily M. Bender is a professor in the Department of Linguistics at the University of Washington and the Faculty Director of the Computational Linguistics Master’s Program. She’s also the director of the Computational Linguistics Laboratory. Follow Emily on Twitter at @emilymbender or on Mastodon at @emilymbender@dair-community.social.

Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon.

The podcast is produced by Eric Wickham and part of the Harbinger Media Network.
 
Also mentioned in this episode:

Support the show



Download audio: https://www.buzzsprout.com/1004689/12640803-the-dangers-of-stochastic-parrots-like-chatgpt-w-emily-m-bender.mp3
Comment from jonwreed (154 days ago): stochastic parrots indeed

Haptics, Hallucinations, Retrieval-Augmentation and a multi-model LLM future

1 Comment

We all know what the industry’s main character is right now – ChatGPT. But natural language processing (NLP) is in many respects a project as old as tech itself. A ton of companies are working on this stuff, some even before the current round of hype, with the attendant Great Pivot from Web3 to LLM. One such company is deepset.

Founded in 2018 in Berlin by Milos Rusic, Malte Pietsch, and Timo Möller, deepset maintains the open source haystack project, which is designed to make it easier to use Transformers and large language models (LLMs) in your applications. Transformers are a concept introduced by Google in 2017 in the seminal paper Attention Is All You Need – a neural network architecture that has dramatically accelerated the state of the art in AI/ML. deepset wants to make this kind of technology usable and useful for the enterprise, with both on-prem and cloud products. Because for all the excitement about LLMs and related technologies, there is also a lot of fear, uncertainty and doubt. Whoever owns the models likely owns the moats. Enterprises and governments are concerned about ownership and business sustainability. Samsung recently had a leak of source code and trade secrets after engineering teams used ChatGPT in a planning meeting. ChatGPT has been banned in Italy because of privacy concerns. So much for data protection – it’s not clear whether the type of crawling and learning approaches pioneered by OpenAI are even compatible with EU law, in the shape of the General Data Protection Regulation (GDPR). Ant Stanley covers a lot of this in a great post on his new blog, Ask for forgiveness, Not permission.

Anyway, when an area is so hot it’s always interesting to talk to folks that are steeped in it. I was lucky enough to catch up with Pietsch recently, for a RedMonk Conversation video. It was funny that we both have stories about moms using ChatGPT. While I am not a fan of the “even my mom can do it” framing, it’s definitely worth paying attention when a technology is crossing over so fast to mainstream adoption. Conversational AI based on LLMs is “haptic” – the feedback loops are just very immediate. Insert Mythic Quest reference here.

Mainstream adoption creates all kinds of challenges for the kind of innovation unleashed by OpenAI and ChatGPT. That’s where data and model sovereignty, compliance, and the avoidance of AI-driven hallucinations in content, code and decision-making come in. Those are the kinds of areas where deepset is focusing its attention. What multicloud was to the last 10 years, multi-model probably will be to the next ten. We’ve already seen AWS start positioning itself accordingly. Multi sounds good when you’re not the market leader.
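
As a rough illustration of what a multi-model posture could look like in code (by analogy with multicloud), here is a hedged sketch of an application that codes against one completion interface and swaps model providers behind it. The provider names and the call_openai()/call_local() functions are hypothetical stand-ins, not real client libraries.

```python
# Sketch of a "multi-model" abstraction: one interface, interchangeable back ends.
# Both provider functions are placeholders for real API or self-hosted model calls.
from typing import Callable, Dict

def call_openai(prompt: str) -> str:
    return f"[hosted model] answer to: {prompt}"

def call_local(prompt: str) -> str:
    return f"[self-hosted model] answer to: {prompt}"

PROVIDERS: Dict[str, Callable[[str], str]] = {
    "hosted": call_openai,
    "local": call_local,
}

def complete(prompt: str, provider: str = "local") -> str:
    # Routing policy lives here: e.g. sensitive prompts stay on the local model.
    return PROVIDERS[provider](prompt)

print(complete("Summarise our quarterly results"))
```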

OpenAI will be a winner, but not the only one.

A concept you’ll be hearing a lot more about is retrieval augmentation as a way of improving what models return. Again, we cover that in the conversation. So dive in!

So watch the video and tell me what you think, here or on YouTube. In the meantime, I will leave you with a story from deepset about a gentleman in his 80s who runs a legal publishing firm in Germany. He called deepset just before Christmas last year to insist on a meeting before the end of the year to discuss ChatGPT’s potential implications for his business, and how he could do something similar without giving his own information away. ChatGPT only launched on November 30th, 2022. That’s the scale of the challenge, and the opportunity.

disclosure: AWS, Google and Microsoft are all clients. deepset sponsored this video.

Comment from jonwreed (155 days ago): could be worth a look