Now that my work related to quantum computing has calmed down a bit, I've finally gotten around to experimenting with GPT-4's Code Interpreter. ChatGPT will execute Python for me.
https://chat.openai.com/?model=gpt-4-code-interpreter
You'll need to enable the Code Interpreter from the Beta Features in the Settings menu on the left (as of July 2023). After that, simply select Code Interpreter under GPT-4 and you're ready to go. It's quite a shocking feature for anyone who uses Python, isn't it?
Until now, ChatGPT had no execution environment, so you could only review proposed code, not actual results. With the Code Interpreter, it actually runs the code, errors included, and displays the computed results. Let's start using it right away. All you need to do is ask for a program that seems runnable in Python and have it execute.
Could you prepare about 50 points with random XY coordinates and display them in a scatter plot?
After a little while, it will execute the task and display the computed results for you.
It even added a title for me. From the 'Show Work' dropdown above the computed results, you can obtain the executed Python code.
import matplotlib.pyplot as plt
import numpy as np
# Randomly generate 50 points for X and Y
np.random.seed(0)
X = np.random.rand(50)
Y = np.random.rand(50)
plt.figure(figsize=(8,6))
plt.scatter(X, Y, color='blue')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Scatter plot of 50 random points')
plt.grid(True)
plt.show()
Let's try classifying the points mentioned above.
Please classify the above points into two groups using the k-means method.
I have a feeling there might be many people whose jobs could be replaced by ChatGPT...
from sklearn.cluster import KMeans
# Prepare the data
data = np.array(list(zip(X, Y)))
# Apply k-means clustering
kmeans = KMeans(n_clusters=2, random_state=0).fit(data)
# Get the cluster labels and centroids
labels = kmeans.labels_
centroids = kmeans.cluster_centers_
# Plot the points with different colors for each cluster
plt.figure(figsize=(8,6))
plt.scatter(data[:, 0], data[:, 1], c=labels, cmap='viridis')
# Plot the centroids
plt.scatter(centroids[:, 0], centroids[:, 1], color='red', marker='X', s=300)
plt.xlabel('X')
plt.ylabel('Y')
plt.title('K-means clustering with 2 clusters')
plt.grid(True)
plt.show()
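For anyone who wants to check the clustering outside ChatGPT, here is a minimal self-contained sketch of the same step (the seed and cluster count match the chat above; `n_init=10` is set explicitly so older and newer scikit-learn versions behave the same):

```python
import numpy as np
from sklearn.cluster import KMeans

# Reproduce the same 50 random points as in the chat
np.random.seed(0)
X = np.random.rand(50)
Y = np.random.rand(50)
data = np.column_stack((X, Y))

# Two clusters, fixed seed for reproducibility
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(data)

# Inspect the result: one label per point, one centroid per cluster
print(np.bincount(kmeans.labels_))  # points per cluster
print(kmeans.cluster_centers_)      # 2x2 array of centroids
```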
Since the points now have labels, I wanted to use them for prediction, so I gave it a try.
Using the two labels from the classification above, could you perform linear regression for linear classification?
(I asked for linear regression, but ChatGPT sensibly chose logistic regression, which is the standard linear model for classification.)
from sklearn.linear_model import LogisticRegression
# Apply logistic regression for linear classification
log_reg = LogisticRegression(random_state=0).fit(data, labels)
# Create a mesh grid for plotting decision boundary
h = .02 # step size in the mesh
x_min, x_max = X.min() - .5, X.max() + .5
y_min, y_max = Y.min() - .5, Y.max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the labels for each point on the grid
Z = log_reg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8,6))
plt.pcolormesh(xx, yy, Z, cmap='viridis', alpha=0.2)
# Plot the original points
plt.scatter(data[:, 0], data[:, 1], c=labels, edgecolors='k', cmap='viridis')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Linear classification using logistic regression')
plt.grid(True)
plt.show()
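As a sanity check (not part of the original chat), one can score how well the logistic-regression boundary reproduces the k-means labels. Since the boundary between two k-means clusters is itself a straight line (the perpendicular bisector of the two centroids), a linear classifier should fit it almost perfectly:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Same data and clustering as above
np.random.seed(0)
X = np.random.rand(50)
Y = np.random.rand(50)
data = np.column_stack((X, Y))
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(data)

# Fit the linear classifier on the k-means labels and measure agreement
log_reg = LogisticRegression(random_state=0).fit(data, labels)
accuracy = log_reg.score(data, labels)
print(f"training accuracy: {accuracy:.2f}")
```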
It's amazing. The only image that comes to mind is a lot of people losing their jobs.