New Features in Python 3.12

Python 3.12 brings several new features that can improve your code's clarity, performance, and functionality. One of these is the type parameter syntax (PEP 695), which lets you declare generic functions, classes, and type aliases using square brackets `[]` directly in the definition, instead of creating `TypeVar` objects by hand. This is particularly useful when writing reusable code that works with containers such as lists, dictionaries, or sets whose element types vary from call to call.
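
At a glance, the new syntax looks like this (a minimal sketch; `first` is an illustrative name, and the worked example below builds up to the feature step by step):

# A generic function in Python 3.12: the type parameter T is declared
# inline in square brackets, with no TypeVar or typing import
def first[T](items: list[T]) -> T:
    return items[0]

print(first([1, 2, 3]))        # 1
print(first(["a", "b", "c"]))  # a
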
For example, let’s say we have a function that takes a list as input and returns the sum of all its elements:

# Define a function called calculate_sum that takes in a list as input
def calculate_sum(lst):
    # Initialize a variable called total and set it to 0
    total = 0
    # Create a for loop that iterates through each element in the list
    for num in lst:
        # Add the current element to the total variable
        total += num
    # Return the final total
    return total

This works fine if our list contains only integers, but what if we pass floats or strings? Type hints (PEP 484) let us document what the function expects and returns, although they do not change what it accepts at runtime:

# The same function, now with type hints describing its input and output
def calculate_sum(lst: list) -> int | float: # the input should be a list; the result is a number
    total = 0 # Initialize total to 0
    for num in lst: # Loop through each element in the list
        total += num # Add the current element to the total
    return total # Return the final total

# Example usage
int_list = [1, 2, 3, 4, 5] # Create a list of integers
float_list = [1.5, 2.5, 3.5, 4.5, 5.5] # Create a list of floats
str_list = ["Hello", "World"] # Create a list of strings

print(calculate_sum(int_list)) # Output: 15
print(calculate_sum(float_list)) # Output: 17.5
print(calculate_sum(str_list)) # Raises TypeError: unsupported operand type(s) for +=: 'int' and 'str'
# The hints are not enforced at runtime, so the error only appears once a string reaches +=
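
Type hints pay off when combined with a static type checker. Below is a minimal sketch, not part of the original example: with a more precise annotation such as `list[int | float]`, a checker like mypy flags the bad call before the program ever runs (the name `calculate_sum_checked` is just for illustration).

# A more precise annotation lets a static checker reject the string list up front
def calculate_sum_checked(lst: list[int | float]) -> int | float:
    total: int | float = 0
    for num in lst:
        total += num
    return total

calculate_sum_checked([1, 2, 3])          # accepted
calculate_sum_checked(["Hello", "World"]) # flagged by mypy; still a runtime TypeError if executed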

Python 3.12's type parameter syntax (PEP 695) lets us write a genuinely generic version without importing TypeVar at all: the type parameter is declared in square brackets right in the function header. Like all annotations, it is not enforced at runtime, so if we also want a clear TypeError for bad elements we still add an explicit isinstance check:

# Python 3.12 type parameter syntax (PEP 695): T is declared inline in square
# brackets and constrained to int or float; no TypeVar or typing import is needed
def calculate_sum[T: (int, float)](lst: list[T]) -> int | float:
    total = 0
    for num in lst:
        # Type parameters are erased at runtime, so an explicit check is still
        # needed if we want to reject bad elements with a clear error
        if not isinstance(num, (int, float)):
            raise TypeError(f"Input list contains a non-numeric element: {num!r}")
        total += num
    return total

In this example, the type parameter T is declared directly in the function header and constrained to int or float, so a static type checker knows exactly which lists calculate_sum accepts and can flag a call with a list of strings before the code runs. Because annotations are erased at runtime, the explicit isinstance check is what actually raises the TypeError for bad elements. Compared with the older TypeVar approach, the PEP 695 syntax keeps the generic declaration right next to the function it belongs to.
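
The same square-bracket syntax also works for classes and for type aliases via the new `type` statement. A minimal sketch (Stack and Pair are illustrative names, not part of the example above):

# Generic class: the type parameter is declared on the class statement itself
class Stack[T]:
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

# Generic type alias using the new `type` statement (also part of PEP 695)
type Pair[T] = tuple[T, T]

stack = Stack[int]()
stack.push(1)
point: Pair[float] = (1.5, 2.5)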

Another new feature in Python 3.12 is low impact monitoring for CPython (PEP 669), exposed as the `sys.monitoring` namespace. It gives tools such as debuggers, profilers, and coverage tools a lightweight way to register callbacks for specific interpreter events (function starts and returns, line execution, raised exceptions, and so on), with very little overhead for events that are not enabled. This is particularly useful for long-running scripts or resource-intensive applications that need fine-grained monitoring for debugging or profiling.
For example, let’s say we have a script that processes a large CSV file and outputs the results to another file:

# Import the necessary libraries
import csv  # read and write CSV files

# Define a function to process the CSV file
def process_csv(input_file, output_file):
    # Open both files together so they are closed properly when done
    with open(input_file, newline="") as f_in, open(output_file, "w", newline="") as f_out:
        reader = csv.reader(f_in)   # reads rows from the input file
        writer = csv.writer(f_out)  # writes rows to the output file
        for row in reader:          # loop through each row in the CSV file
            # do some processing here...
            ...
            writer.writerow(row)    # write the (processed) row to the output file
    print("Done!")  # indicate that the processing is complete

# Call the function and pass in the input and output file names
process_csv("input.csv", "output.csv")

PEP 669 is exposed through the `sys.monitoring` namespace: a tool reserves a tool id, registers callbacks for the events it cares about, and enables only those events, so everything it has not asked about runs at essentially full speed. To count how many Python functions the script calls, and to log the execution time and memory usage alongside that count, we can extend the script like this (the tool name "csv_monitor" below is arbitrary):

# Import the necessary libraries
import csv                     # read and write CSV files
import sys                     # sys.monitoring is the PEP 669 API
from datetime import datetime  # track wall-clock execution time

import psutil                  # third-party library used to sample memory usage

# Reserve a tool id for our monitor; PROFILER_ID is one of the ids
# PEP 669 pre-defines for performance tools
TOOL = sys.monitoring.PROFILER_ID
sys.monitoring.use_tool_id(TOOL, "csv_monitor")

calls = 0  # number of Python function calls observed

def on_py_start(code, instruction_offset):
    # Called each time a Python function starts executing
    global calls
    calls += 1

# Register the callback and enable only the PY_START event; events that are
# not enabled cost essentially nothing, which is the "low impact" part
sys.monitoring.register_callback(TOOL, sys.monitoring.events.PY_START, on_py_start)
sys.monitoring.set_events(TOOL, sys.monitoring.events.PY_START)

def process_csv(input_file, output_file):
    with open(input_file, newline="") as f_in, open(output_file, "w", newline="") as f_out:
        reader = csv.reader(f_in)   # reads rows from the input file
        writer = csv.writer(f_out)  # writes rows to the output file
        for row in reader:
            # do some processing here...
            ...
            writer.writerow(row)    # write the (processed) row to the output file
    print("Done!")

start = datetime.now()                  # record the start time
process_csv("input.csv", "output.csv")  # run the script while monitoring is active
end = datetime.now()                    # record the end time

# Turn monitoring off and release the tool id
sys.monitoring.set_events(TOOL, 0)
sys.monitoring.free_tool_id(TOOL)

# Append the results to the monitoring log file
with open("monitoring.log", mode="a") as log:
    duration = (end - start).total_seconds()        # execution time in seconds
    memory_usage = psutil.virtual_memory().percent  # system memory usage in percent
    print(f"Execution time: {duration:.2f} seconds", file=log)
    print(f"Python functions called: {calls}", file=log)
    print(f"Memory usage: {memory_usage:.1f}%", file=log)

In this example, PEP 669's sys.monitoring namespace does the monitoring: we reserve a tool id, register a callback for the PY_START event, and enable only that event, so the interpreter counts every Python function call made while the CSV file is processed without paying for any other instrumentation. Around the monitored run we also record wall-clock time with datetime and sample overall memory usage with the third-party psutil library, then append all three figures to monitoring.log once processing is complete.

Note that psutil is a third-party package, not part of the standard library; you can install it with `pip install psutil`.
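
Because PEP 669 monitoring is configured per tool and per event, it can also be scoped to a single code object. Here is a minimal sketch, assuming the process_csv function above is already defined (the tool name "line_counter" and the line_hits dictionary are just for illustration): enabling the LINE event locally counts how often each line of that one function runs. Line events do add overhead to the monitored function itself, but nothing outside it is instrumented.

import sys

TOOL = sys.monitoring.PROFILER_ID
sys.monitoring.use_tool_id(TOOL, "line_counter")

line_hits: dict[int, int] = {}  # line number -> number of times it ran

def on_line(code, line_number):
    # Called for every line executed in the code objects we enabled LINE for
    line_hits[line_number] = line_hits.get(line_number, 0) + 1

sys.monitoring.register_callback(TOOL, sys.monitoring.events.LINE, on_line)
# Enable LINE events only for process_csv's code object, not the whole program
sys.monitoring.set_local_events(TOOL, process_csv.__code__, sys.monitoring.events.LINE)

process_csv("input.csv", "output.csv")

sys.monitoring.set_local_events(TOOL, process_csv.__code__, 0)  # turn the local events off again
sys.monitoring.free_tool_id(TOOL)
print(line_hits)  # per-line hit counts for process_csv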
