GitHub has announced the general availability of Copilot Enterprise, an enterprise edition of its code completion tool and developer-focused chat assistant, priced at $39 per month for large organizations.
This tier builds on the Business plan's offerings, including intellectual property indemnity, and adds features aimed at large engineering teams.
The standout feature is Copilot's ability to reference an organization's private code and knowledge bases. Copilot also gains integration with Microsoft's Bing search engine (currently in beta) and, soon, the ability to fine-tune its underlying model on a team's codebase.
This feature allows team newcomers to query Copilot about specific tasks, such as deploying a container image to the cloud, and receive instructions customized to their organization’s established procedures.
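To make that concrete, here is a hypothetical illustration of the kind of org-specific procedure a newcomer might ask Copilot to walk them through: building a container image and pushing it to the team's registry before rolling out a deployment. The registry URL, image tag, and deployment name below are made-up placeholders, not GitHub's or any particular organization's actual workflow.

```python
# Hypothetical deploy script of the sort a newcomer might ask Copilot about.
# Registry, image tag, and deployment name are placeholders.
import subprocess

REGISTRY = "registry.example.com/platform"   # placeholder internal registry
IMAGE = f"{REGISTRY}/billing-api:1.4.2"      # placeholder image tag

def run(cmd: list[str]) -> None:
    """Run a shell command and fail loudly if it exits non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run(["docker", "build", "-t", IMAGE, "."])   # build from the local Dockerfile
    run(["docker", "push", IMAGE])               # push to the internal registry
    # Roll out the new image to the (placeholder) Kubernetes deployment.
    run(["kubectl", "set", "image", "deployment/billing-api", f"billing-api={IMAGE}"])
```

The point of the Enterprise feature is that Copilot's answer would reflect the organization's own registries, pipelines, and conventions rather than a generic recipe like the one above.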
For many developers, the biggest hurdle when joining a new company is not so much learning the codebase as mastering its unique operational processes, though Copilot can of course help with navigating the code itself as well.
The Bing integration, however, is only available to Enterprise subscribers, and it remains unclear whether it will come to other Copilot tiers in the future.
One feature that will remain exclusive to Enterprise, because of its associated costs, is fine-tuning: companies can pick a set of repositories in their GitHub organization and have the underlying model fine-tuned on that code.
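GitHub has not published the internals of this pipeline; it is offered as a managed service. As a rough conceptual sketch only, fine-tuning a causal language model on source files gathered from selected repositories might look like the following, where the repository paths, the stand-in model (`gpt2` in place of an actual code model), and the hyperparameters are all assumptions for illustration.

```python
# Minimal sketch: fine-tune a small causal LM on source files collected from
# selected repository checkouts. Paths, model, and hyperparameters are
# placeholders; GitHub's actual fine-tuning pipeline is not public.
from pathlib import Path

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

REPO_DIRS = [Path("repos/payments-service"), Path("repos/deploy-tooling")]  # hypothetical checkouts
MODEL_NAME = "gpt2"  # stand-in for a small causal code model

def collect_sources(dirs, suffixes=(".py", ".go", ".ts")):
    """Yield raw text records for every matching source file in the chosen repos."""
    for d in dirs:
        for path in d.rglob("*"):
            if path.is_file() and path.suffix in suffixes:
                yield {"text": path.read_text(encoding="utf-8", errors="ignore")}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

dataset = Dataset.from_list(list(collect_sources(REPO_DIRS)))
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="copilot-ft", num_train_epochs=1, per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()
```

The expense GitHub cites is easy to see from a sketch like this: training runs over an organization's entire selected code corpus, which is far costlier to operate per customer than serving a shared base model.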
GitHub Copilot currently runs on OpenAI's GPT-3.5 Turbo and has not shifted to GPT-4 because of Copilot's latency requirements. However, the GitHub team notes that the Copilot model has been updated "more than half a dozen times" since the launch of Copilot Business.
For now, GitHub doesn't appear to be planning different pricing tiers based on the size of the model being used, as Google does. GitHub CEO Thomas Dohmke said: "Different use cases require different models. Different optimizations — latency, accuracy, quality of the outcome, responsible AI — for each model version play a big role in making sure that the output is ethical, compliant, and secure and doesn't generate a lower-quality code than what our customers expect. We will continue going down that path of using the best models for the different pieces of the Copilot experience."