Intechnica

Explained: Machine Learning Code Generation (Emerging Tech Series: Part 2)

Key takeaways:

  • Gartner and others predict that machine learning and AI-powered development and testing will be much more popular by 2025
  • Tools such as ChatGPT and GitHub Copilot already exist and can provide useful support in some scenarios
  • Businesses must be mindful of the limitations of the current generation of tools, such as their inability to cite sources for their knowledge or to express a level of confidence in their answers
  • Generative AI models are unlikely to replace humans in software development, but can significantly reduce trivial, repetitive and junior tasks

Lots of people, from senior executives to ordinary consumers, are talking about generative artificial intelligence at the moment, with ChatGPT and GitHub Copilot very much in the public spotlight. It’s not often that such tools capture mainstream attention.

Gartner defines Generative AI as AI that “learns from existing content artefacts to generate new, realistic artefacts that reflect the characteristics of the training data, but do not repeat it.” Alongside this definition, Gartner are also making bold claims:

By 2025 30% of enterprises will have implemented an AI-Augmented development and testing strategy, up from 5% in 2021.

Gartner, 2022

This technology could have long-term impacts on the technology and software industry as a whole.

Excitement about this technology is everywhere. It took Facebook ten months to reach one million users. ChatGPT achieved that feat in five days.

Both ChatGPT and GitHub Copilot are tools based upon perhaps the world’s most famous generative artificial intelligence (AI) model, GPT-3.

What is GPT?

GPT was created by OpenAI, a company founded by Sam Altman, Elon Musk and other investors in December 2015. OpenAI have created a number of AI products, but the most impactful are in the Generative AI space. The company made headlines in 2018 with the launch of AI system GPT, its “Generative Pre-trained Transformer”.

This technology could perform reading and comprehension tasks in a similar manner to a human being.

A diagram to explain how ChatGPT learns (openai.com)

To do this, it uses a ‘deep neural network’ approach to machine learning: it applies a mathematical model that estimates which word is most likely to come next in a sequence.

As a result, it can understand the context and relationship between words. Users can give GPT and its derivatives a prompt as written text, and GPT can generate a response based on its understanding. Its responses are almost indistinguishable from human-written text.
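To make this concrete, next-word prediction can be sketched in a few lines of Python. This is a toy illustration only: the candidate words and probabilities below are invented, and real models like GPT score enormous vocabularies using billions of learned parameters rather than a hand-written table.

```python
# Toy illustration of next-word prediction: the model assigns a
# probability to each candidate word given the words so far, then
# picks the most likely one. These probabilities are invented purely
# for illustration.

# Hypothetical probabilities for the word following "the cat sat on the"
candidates = {"mat": 0.62, "sofa": 0.21, "roof": 0.09, "piano": 0.08}

def next_word(probabilities: dict) -> str:
    # Greedy decoding: choose the highest-probability candidate
    return max(probabilities, key=probabilities.get)

print(next_word(candidates))
```

Repeating this step, appending each chosen word to the sequence and predicting again, is (in greatly simplified form) how such models generate whole passages of text.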

Chatbot technology has existed for a while, but most chatbots are programmed for specific scenarios, such as booking a hotel or making an insurance claim. This means they fail when asked questions that don’t match the exact criteria they were built for.
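A minimal sketch shows why such rule-based bots are brittle. The triggers and responses below are hypothetical examples, not taken from any real product: the bot only recognises the exact intents it was scripted for, and anything else falls through to a canned apology.

```python
# A minimal sketch of a traditional rule-based chatbot. It only
# recognises the intents it was programmed for; the rules here are
# hypothetical examples.

RULES = {
    "book a hotel": "Sure - which city and what dates?",
    "make a claim": "I can help with that. What is your policy number?",
}

def rule_based_reply(message: str) -> str:
    for trigger, response in RULES.items():
        if trigger in message.lower():
            return response
    # Anything outside the scripted scenarios falls through
    return "Sorry, I don't understand."

print(rule_based_reply("I'd like to book a hotel"))    # scripted path
print(rule_based_reply("What's your refund policy?"))  # falls through
```

A generative model, by contrast, is not limited to a fixed trigger list: it produces a response from its learned understanding of language, which is what makes GPT feel so different in practice.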


OpenAI’s GPT represents a significant leap forward.

The AI is pre-trained on a massive amount of text from the internet, which allows it to generate more accurate responses when given a prompt. The latest iteration, GPT-3, can respond to very specific instructions about almost anything. It can summarise text, solve riddles and even create programming code.

In this example, ChatGPT replies to a text-based prompt with JavaScript. (openai.com)

What can ChatGPT do?

OpenAI unveiled ChatGPT in November 2022. It uses a slightly smaller variant of the GPT-3 language model and has a conversational interface. This makes it very easy for non-technical people to access, which has contributed to its success and hype.

Most importantly, ChatGPT has been configured to answer questions and provide information: more specific direction than its predecessors.

ChatGPT explains how it can write programming code. You can see how it can base its response on previous answers. (openai.com)

By visiting ChatGPT’s website, anybody can ask it questions and receive a response. By design, it is extraordinarily capable of presenting material that looks very credible.

For example, if requested to do so, ChatGPT will write a blog post. Its ability to create readable, convincing, authoritative-sounding output is easily ChatGPT’s biggest strength.

It is perhaps also its greatest weakness.

Understanding ChatGPT’s limitations

The original version of this blog was written using ChatGPT. However, we chose not to publish the output as ChatGPT wouldn’t reveal the sources of its information.

Its lack of attribution in its answers is perhaps GPT-3’s, and by extension ChatGPT’s, biggest challenge to widespread adoption and professional use.

This is easy to demonstrate. Ask ChatGPT an easily verifiable question, and it may well give the correct answer. But ask it for a source, and things fall down.

This example shows ChatGPT giving a correct answer, but refusing to provide a source for its ‘knowledge’. (openai.com)

This is less of an issue when ChatGPT provides the correct answer, but it can be very confidently wrong in some quite humorous ways.

Here, one can see ChatGPT is unable to ‘count’ to provide an accurate answer. (openai.com)

ChatGPT provides incorrect answers with the same authoritative, believable tone as correct ones. The same is true for the programming code it writes. This means ChatGPT still requires knowledgeable subject matter experts to validate its output.

The limitations of Generative AI for code generation

There is already very public controversy surrounding generative artificial intelligence, centred on GitHub Copilot. GitHub Copilot is a tool that uses the same underlying technology as ChatGPT (GPT-3) to assist software developers by generating code.


GitHub Copilot’s developers taught it to program by giving it millions of examples of human-written code (GitHub source code repositories) as its reference material. In theory, GPT-based tools will never reproduce, like for like, the material on which they were trained. But did the people who wrote the code that Copilot ‘learned from’ give their permission?

There are many questions around GitHub Copilot’s approach to copyright, and even some legal challenges. On initial investigation, FOSSA (open source risk management experts) published an article stating that it did not believe GitHub were committing copyright infringement. However, the lawsuit is ongoing and should be considered before teams embrace GitHub Copilot extensively to create core IP.

Will Machine Learning Generated Code take over?

Perhaps speculatively, Gartner believe “By 2026, Generative AI will create 50% of new website and mobile app code using machine learning models”.


Generative AI tools such as GitHub Copilot are pioneering state-of-the-art technology. Perhaps they even represent a generational leap forward. And product and software development teams and processes are already being impacted.


For most developers, a large part of the job involves creating boilerplate code (simple, repetitive code that does not contain core business logic). Generative AI has the potential to accelerate the creation of that code, allowing developers to focus on business requirements and more difficult problems.
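As a hypothetical illustration of the kind of boilerplate such a tool might draft from a one-line prompt, consider a simple validated data class. The class, field names and validation rules below are invented for this example and are not the output of any specific tool.

```python
# Hypothetical example of boilerplate a generative tool might draft
# from a short comment prompt, e.g. "a customer record with basic
# validation". Field names and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Customer:
    """A customer record with basic validation."""
    name: str
    email: str

    def __post_init__(self):
        if not self.name:
            raise ValueError("name must not be empty")
        if "@" not in self.email:
            raise ValueError("email must contain '@'")
```

None of this code is intellectually interesting, which is exactly the point: it is the repetitive scaffolding that a developer would otherwise write by hand.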


For product teams, the ability to reduce the time taken for boilerplate tasks (writing customer responses based on support tickets, for example) allows for increased effort on more value-creating activities for the business.

Another area that Generative AI can fuel is more realistic (and capable) chatbots on the web. For example, we see the potential for businesses to reduce spend on front-line service desks and initial customer contacts by using Generative AI as the first line of interaction.


With regard to chatbots, we may well look back on the ease with which non-technical people have been able to interact with conversational Generative AI, such as ChatGPT, as a watershed moment.


Tools like ChatGPT and GitHub Copilot are remarkable, but we hold that it is unlikely that Generative AI will ever replace real people. When amplifying or augmenting existing skillsets, Generative AI could be enormously powerful.

What leaders need to know about Generative AI

As bleeding-edge technology, Generative AI is quite literally unprecedented. Courts will need to decide how to govern these tools, and until this happens, widespread adoption will be limited.

However, forward-looking product and development teams should begin to look at how they can integrate Generative AI technologies into their products. The technology is still emerging, but we recommend individual investigation into how it can support software development lifecycles.

Teams should educate themselves in these technologies, and trial different tools or approaches to see if they can be beneficial. This might include using AI to generate unit test coverage or boilerplate code.
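For instance, given a simple function, a team could ask an AI assistant to draft unit tests like the following. This is a hypothetical sketch: both the function and the tests are invented for illustration, not output from any specific tool.

```python
# A hypothetical sketch of the kind of unit tests an AI assistant
# could draft for a simple function. Function and tests are invented
# for illustration.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run the suite programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Even here a knowledgeable reviewer is still needed: the generated tests must be checked for coverage gaps and wrong assumptions, in line with the validation caveat above.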

Thanks to Mark Hastry (Technical Architect), David Bamber (Chief Technical Architect) and Jorge Migueis (Technical Consultant) for their work towards this blog.
