Michael Driscoll's Blog, page 5
May 29, 2024
Episode 42 – Harlequin – The SQL IDE for Your Terminal
This episode focuses on the Harlequin application, a Python SQL IDE for your terminal written using the amazing Textual package.
I was honored to have Ted Conbeer, the creator of Harlequin, on the show to discuss his creation and the other things he does with Python.
Specifically, we focused on the following topics:
- Favorite Python packages
- Origins of Harlequin
- Why program for the terminal versus a GUI
- Lessons learned in creating the tool
- Asyncio
- and more!

Links

- Harlequin
- Textual
- BBC article on asyncio

The post Episode 42 – Harlequin – The SQL IDE for Your Terminal appeared first on Mouse Vs Python.
May 23, 2024
Episode 41 – Python Packaging and FOSS with Armin Ronacher
In this episode, I chatted with Armin Ronacher about his many amazing Python packages, such as pygments, flask, Jinja, Rye, and Click!
Specifically, we talked about the following:
- How Flask came about
- Favorite Python packages
- Python packaging
- and much more!

Links

- Sentry
- Rye
- Flask
- pygments
- Jinja
- Click
- uv

The post Episode 41 – Python Packaging and FOSS with Armin Ronacher appeared first on Mouse Vs Python.
May 15, 2024
An Intro to Logging with Python and Loguru
Python’s logging module isn’t the only way to create logs. There are several third-party packages you can use, too. One of the most popular is Loguru. Loguru intends to remove all the boilerplate you get with the Python logging API.
You will find that Loguru greatly simplifies creating logs in Python.
This chapter has the following sections:
- Installation
- Logging made simple
- Handlers and formatting
- Catching exceptions
- Terminal logging with color
- Easy log rotation

Let’s find out how much easier Loguru makes logging in Python!
Installation

Before you can start with Loguru, you will need to install it. After all, the Loguru package doesn’t come with Python.
Fortunately, installing Loguru is easy with pip. Open up your terminal and run the following command:
python -m pip install loguru

Pip will install Loguru and any dependencies it might have for you. You will have a working package installed if you see no errors.
Now let’s start logging!
Logging Made Simple

Logging with Loguru can be done in two lines of code. Loguru is really that simple!
Don’t believe it? Then open up your Python IDE or REPL and add the following code:
# hello.py
from loguru import logger
logger.debug("Hello from loguru!")
logger.info("Informed from loguru!")
One import is all you need. Then, you can immediately start logging! By default, the log will go to stdout.
Here’s what the output looks like in the terminal:
2024-05-07 14:34:28.663 | DEBUG | __main__:<module>:5 - Hello from loguru!
2024-05-07 14:34:28.664 | INFO | __main__:<module>:6 - Informed from loguru!
Pretty neat! Now, let’s find out how to change the handler and add formatting to your output.
Handlers and Formatting

Loguru doesn’t think of handlers the way the Python logging module does. Instead, you use the concept of sinks. The sink tells Loguru how to handle an incoming log message and write it somewhere.
Sinks can take lots of different forms:
- A file-like object, such as sys.stderr or a file handle
- A file path as a string or pathlib.Path
- A callable, such as a simple function
- An asynchronous coroutine function that you define using async def
- A built-in logging.Handler. If you use these, the Loguru records convert to logging records automatically

To see how this works, create a new file called file_formatting.py in your Python IDE. Then add the following code:
# file_formatting.py
from loguru import logger
fmt = "{time} - {name} - {level} - {message}"
logger.add("formatted.log", format=fmt, level="INFO")
logger.debug("This is a debug message")
logger.info("This is an informational message")
If you want to change where the logs go, use the add() method. Note that this adds a new sink, which, in this case, is a file. The logger will still log to stdout, too, as that is the default, and you are adding to the handler list. If you want to remove the default sink, add logger.remove() before you call add().
When you call add(), you can pass in several different arguments:
- sink – Where to send the log messages
- level – The logging level
- format – How to format the log messages
- filter – A logging filter

There are several more, but those are the ones you would use the most. If you want to know more about add(), you should check out the documentation.
You might have noticed that the formatting of the log records is a little different than what you saw in Python’s own logging module.
Here is a listing of the formatting directives you can use for Loguru:
- elapsed – The time elapsed since the app started
- exception – The formatted exception, if there was one
- extra – The dict of attributes that the user bound
- file – The name of the file where the logging call came from
- function – The function where the logging call came from
- level – The logging level
- line – The line number in the source code
- message – The unformatted logged message
- module – The module that the logging call was made from
- name – The __name__ where the logging call came from
- process – The process in which the logging call was made
- thread – The thread in which the logging call was made
- time – The aware local time when the logging call was made

You can also change the time formatting in the logs. In this case, you would use a subset of the formatting from the Pendulum package. For example, if you wanted to make the time exclude the date, you would use {time:HH:mm:ss} rather than simply {time}, which you see in the code example above.
See the documentation for details on formatting time and messages.
When you run the code example, you will see something similar to the following in your log file:
2024-05-07T14:35:06.553342-0500 - __main__ - INFO - This is an informational message

You will also see log messages sent to your terminal in the same format as you saw in the first code example.
Now, you’re ready to move on and learn about catching exceptions with Loguru.
Catching Exceptions

Catching exceptions with Loguru is done by using a decorator. You may remember that when you use Python’s own logging module, you use logger.exception in the except portion of a try/except statement to record the exception’s traceback to your log file.
When you use Loguru, you use the @logger.catch decorator on the function that contains code that may raise an exception.
Open up your Python IDE and create a new file named catching_exceptions.py. Then enter the following code:
# catching_exceptions.py
from loguru import logger


@logger.catch
def silly_function(x, y, z):
    return 1 / (x + y + z)


def main():
    fmt = "{time:HH:mm:ss} - {name} - {level} - {message}"
    logger.add("exception.log", format=fmt, level="INFO")
    logger.info("Application starting")
    silly_function(0, 0, 0)
    logger.info("Finished!")


if __name__ == "__main__":
    main()
According to Loguru’s documentation, the @logger.catch decorator will catch regular exceptions and also works with applications that have multiple threads. For this example, you add another file handler on top of the stream handler and then start logging.
Then you call silly_function() with a bunch of zeroes, which causes a ZeroDivisionError exception.
Here’s the output from the terminal:
If you open up exception.log, you will see that the contents are a little different: the timestamp uses your custom format, and the decorated lines that show which arguments were passed to silly_function() don’t translate to a plain log file quite as well:
14:38:30 - __main__ - INFO - Application starting
14:38:30 - __main__ - ERROR - An error has been caught in function 'main', process 'MainProcess' (8920), thread 'MainThread' (22316):
Traceback (most recent call last):
  File "C:\books\11_loguru\catching_exceptions.py", line 17, in <module>
    main()
    └
> File "C:\books\11_loguru\catching_exceptions.py", line 13, in main
    silly_function(0, 0, 0)
    └
  File "C:\books\11_loguru\catching_exceptions.py", line 7, in silly_function
    return 1 / (x + y + z)
                │   │   └ 0
                │   └ 0
                └ 0
ZeroDivisionError: division by zero
14:38:30 - __main__ - INFO - Finished!
On the whole, using the @logger.catch decorator is a nice way to catch exceptions.
Now, you’re ready to move on and learn about changing the color of your logs in the terminal.
Terminal Logging with Color

Loguru will print out logs in color in the terminal by default if the terminal supports color. Colorful logs can make reading through the logs easier as you can highlight warnings and exceptions with unique colors.
You can use markup tags to add specific colors to any formatter string. You can also apply bold and underline to the tags.
Open up your Python IDE and create a new file called terminal_formatting.py. After saving the file, enter the following code into it:
# terminal_formatting.py
import sys

from loguru import logger

fmt = ("<red>{time}</red> - "
       "<yellow>{name}</yellow> - "
       "{level} - {message}")
logger.add(sys.stdout, format=fmt, level="DEBUG")
logger.debug("This is a debug message")
logger.info("This is an informational message")
You create a special format that sets the “time” portion to red and the “name” to yellow. Then, you add() that format to the logger. You will now have two sinks: the default sink, which logs to stderr, and the new sink, which logs to stdout. You add the formatting so that you can compare the default colors to your custom ones.
Go ahead and run the code. You should see something like this:
Neat! You should now spend a few moments studying the documentation and trying out some of the other colors. For example, you can use hex and RGB colors as well as a handful of named colors.
The last section you will look at is how to do log rotation with Loguru!
Easy Log Rotation

Loguru makes log rotation easy. You don’t need to import any special handlers. Instead, you only need to specify the rotation argument when you call add().
Here are a few examples:
logger.add("file.log", rotation="100 MB")
logger.add("file.log", rotation="12:00")
logger.add("file.log", rotation="1 week")

These demonstrate that you can rotate when the log reaches 100 megabytes, at noon daily, or once a week.
Open up your Python IDE so you can create a full-fledged example. Name the file log_rotation.py and add the following code:
# log_rotation.py
from loguru import logger

fmt = "{time} - {name} - {level} - {message}"

logger.add("rotated.log",
           format=fmt,
           level="DEBUG",
           rotation="50 B")
logger.debug("This is a debug message")
logger.info("This is an informational message")
Here, you set up a log format, set the level to DEBUG, and set the rotation to every 50 bytes. When you run this code, you will get a couple of log files. Loguru will add a timestamp to the file’s name when it rotates the log.
What if you want to add compression? You don’t need to override the rotator like you did with Python’s logging module. Instead, you can turn on compression using the compression argument.
Create a new Python script called log_rotation_compression.py and add this code for a fully working example:
# log_rotation_compression.py
from loguru import logger

fmt = "{time} - {name} - {level} - {message}"

logger.add("compressed.log",
           format=fmt,
           level="DEBUG",
           rotation="50 B",
           compression="zip")
logger.debug("This is a debug message")
logger.info("This is an informational message")
for i in range(10):
    logger.info(f"Log message {i}")
The new file is automatically compressed in the zip format when the log rotates. There is also a retention argument that you can use with add() to tell Loguru to clean the logs after so many days:
logger.add("file.log",
           rotation="100 MB",
           retention="5 days")
If you were to add this code, the logs that were more than five days old would get cleaned up automatically by Loguru!
Wrapping Up

The Loguru package makes logging much easier than Python’s logging library. It removes the boilerplate needed to create and format logs.
In this chapter, you learned about the following:
- Installation
- Logging made simple
- Handlers and formatting
- Catching exceptions
- Terminal logging with color
- Easy log rotation

Loguru can do much more than what is covered here, though. You can serialize your logs to JSON or contextualize your logger messages. Loguru also allows you to add lazy evaluation to your logs to prevent them from affecting performance in production, and it makes adding custom log levels very easy. For full details about all the things Loguru can do, you should consult Loguru’s website.
The post An Intro to Logging with Python and Loguru appeared first on Mouse Vs Python.
May 13, 2024
How to Annotate a Graph with Matplotlib and Python
The Matplotlib package is great for visualizing data. One of its many features is the ability to annotate points on your graph. You can use annotations to explain why a particular data point is significant or interesting.
If you haven’t used Matplotlib before, you should check out my introductory article, Matplotlib – An Intro to Creating Graphs with Python or read the official documentation.
Let’s get started!
Installing Matplotlib

If you don’t have Matplotlib on your computer, you must install it. Fortunately, you can use pip, the Python package manager utility that comes with Python.
Open up your terminal or command prompt and run the following command:
python -m pip install matplotlib

Pip will now install Matplotlib and any dependencies that Matplotlib needs to work properly. Assuming that Matplotlib installs successfully, you are good to go!
Annotating Points on a Graph

Matplotlib comes with a handy annotate() method that you can use. As with most of Matplotlib’s methods, annotate() can take quite a few different parameters.
For this example, you will be using the following parameters:
- text – The label for the annotation
- xy – The x/y coordinate of the point of interest
- arrowprops – A dictionary of arrow properties
- xytext – Where to place the text for the annotation

Now that you know what you’re doing, open up your favorite Python IDE or text editor and create a new Python file. Then enter the following code:
import matplotlib.pyplot as plt
import numpy as np


def annotated():
    fig = plt.figure(figsize=(8, 6))
    numbers = list(range(10))
    plt.plot(numbers, np.exp(numbers))
    plt.title("Annotating an Exponential Plot using plt.annotate()")
    plt.xlabel("x-axis")
    plt.ylabel("y-axis")
    plt.annotate("Point 1", xy=(6, 400),
                 arrowprops=dict(arrowstyle="->"),
                 xytext=(4, 600))
    plt.annotate("Point 2", xy=(7, 1150),
                 arrowprops=dict(arrowstyle="->",
                                 connectionstyle="arc3,rad=-.2"),
                 xytext=(4.5, 2000))
    plt.annotate("Point 3", xy=(8, 3000),
                 arrowprops=dict(arrowstyle="->",
                                 connectionstyle="angle,angleA=90,angleB=0"),
                 xytext=(8.5, 2200))
    plt.show()


if __name__ == "__main__":
    annotated()
Here, you are creating a simple line graph. You want to annotate three points on the graph. The arrowprops define the arrowstyle and, in the latter two points, the connectionstyle. These properties tell Matplotlib what type of arrow to use and whether it should be connected to the text as a straight line, an arc, or a 90-degree turn.
When you run this code, you will see the following graph:
You can see how the different points are located and how the arrowprops lines are changed. You should check out the full documentation to learn all the details about the arrows and annotations.
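If you ever want to check annotations programmatically, for example in a test suite, they show up in the Axes’ texts list, since Annotation is a subclass of Text. Here is a small sketch using the non-interactive Agg backend (the "peak" label and the data are made up for this example):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no window needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
ax.annotate("peak", xy=(2, 4), xytext=(1, 3),
            arrowprops=dict(arrowstyle="->"))

# every annotation is also a Text object attached to the Axes
labels = [t.get_text() for t in ax.texts]
print(labels)
```

This object-oriented style (ax.annotate rather than plt.annotate) also makes it easier to work with multiple subplots.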
Wrapping Up

Annotating your graph is a great way to make your plots more informative. Matplotlib allows you to add many different labels to your plots, and annotating the interesting data points is quite nice.
You should spend some time experimenting with annotations and learning all the different parameters annotate() takes to fully understand this useful feature.
The post How to Annotate a Graph with Matplotlib and Python appeared first on Mouse Vs Python.
May 11, 2024
Ruff – The Fastest Python Linter and Formatter Just Got Faster!
I’m a little late in reporting on this topic, but Ruff put out an update in April 2024 that includes a hand-written recursive descent parser. This update is in version 0.4.0 and newer.
Ruff’s new parser is >2x faster, translating to a 20-40% speedup for all linting and formatting invocations. Ruff’s announcement includes some statistics to show improvements that are worth checking out.
What’s This New Parser?

I’ve never tried writing a code parser, so I’ll have to rely on Ruff’s announcement to explain this. Basically, when you are doing static analysis, you turn the source code into an Abstract Syntax Tree (AST), which you can then analyze. Python has an ast module built in for this purpose. Ruff is written in Rust, though, so its AST analyzer is also written in Rust.
The original parser was a generated parser, built with LALRPOP. A parser generator requires the grammar to be defined in a Domain Specific Language (DSL), which the generator then converts into executable code.
Ruff’s new hand-written parser is a recursive descent parser. Follow that link to Wikipedia to learn all the nitty gritty details.
Their team created a hand-written parser to give them more control and flexibility over the parsing process, making it easier to work on the many weird edge cases they need to support. They also created a new parser to make Ruff faster and provide better error messages and error resilience.
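To make the idea concrete, here is a toy hand-written recursive descent parser in Python for arithmetic expressions. It is nothing like Ruff’s Rust implementation, just a sketch of the technique: one method per grammar rule, each calling "down" into the next rule:

```python
import re

def tokenize(src):
    # split the source into numbers, operators, and parentheses
    return re.findall(r"\d+|[-+*/()]", src)

class Parser:
    """Grammar: expr   -> term (('+'|'-') term)*
                term   -> factor (('*'|'/') factor)*
                factor -> NUMBER | '(' expr ')'"""

    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def next(self):
        token = self.peek()
        self.pos += 1
        return token

    def expr(self):
        value = self.term()
        while self.peek() in ("+", "-"):
            if self.next() == "+":
                value += self.term()
            else:
                value -= self.term()
        return value

    def term(self):
        value = self.factor()
        while self.peek() in ("*", "/"):
            if self.next() == "*":
                value *= self.factor()
            else:
                value /= self.factor()
        return value

    def factor(self):
        token = self.next()
        if token == "(":
            value = self.expr()
            self.next()  # consume the closing ')'
            return value
        return int(token)

def evaluate(src):
    return Parser(tokenize(src)).expr()

print(evaluate("1 + 2 * (3 + 4)"))  # 15
```

Because each rule is an ordinary function, a hand-written parser like this can insert custom error messages or recovery logic at any point, which is exactly the flexibility the Ruff team was after.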
Wrapping Up

Ruff is great and makes linting and formatting your Python code so much faster. You can learn much more about Ruff in my other articles on this topic:
An Intro to Ruff – An Extremely Fast Python Linter
The Ruff Formatter – Python’s Fastest Formatter!
Episode 23 – The Ruff Formatter with Charlie Marsh
The post Ruff – The Fastest Python Linter and Formatter Just Got Faster! appeared first on Mouse Vs Python.
May 10, 2024
One Week Left for Python Logging Book / Course Kickstarter
My latest Python book campaign is ending in less than a week. This book is about Python’s logging module. I also include two chapters that discuss structlog and loguru.
Support on Kickstarter

Why Back A Kickstarter?

The reason to back the Kickstarter is that I have exclusive perks there that you cannot get outside of it. Here are some examples:
- Signed paperback copy of the book
- Early access to the video course lessons
- T-shirt with the cover art
- Exclusive price for Teach Me Python, which includes ALL my self-published books and courses
- Exclusive price for all my self-published books

Support on Kickstarter

What You’ll Learn

In this book, you will learn about the following:
- Logger objects
- Log levels
- Log handlers
- Formatting your logs
- Log configuration
- Logging decorators
- Rotating logs
- Logging and concurrency
- and more!

Book formats

The finished book will be made available in the following formats:
- paperback (at the appropriate reward level)
- PDF
- epub

The paperback is a 6″ x 9″ book and is approximately 150 pages long.
Support on Kickstarter

The post One Week Left for Python Logging Book / Course Kickstarter appeared first on Mouse Vs Python.
May 9, 2024
Episode 40 – Open Source Development with Antonio Cuni
In this episode, we discuss working on several different open-source Python packages. Antonio Cuni is our guest, and he chats about his work on PyScript, pdb++, pypy, HPy, and SPy.
Listen in as we chat about Python, packages, open source, and so much more!
Show Links

Here are some of the projects we talked about in the show:
- The Invent Framework
- PyScript
- pdb++ – A drop-in replacement for pdb
- pypy – The fast, compliant, alternative Python implementation
- HPy – A better C API for Python
- SPy – Static Python

The post Episode 40 – Open Source Development with Antonio Cuni appeared first on Mouse Vs Python.
May 6, 2024
How to Read and Write Parquet Files with Python
Apache Parquet files are a popular columnar storage format used by data scientists and anyone using the Hadoop ecosystem. It was developed to be very efficient in terms of compression and encoding. Check out their documentation if you want to know all the details about how Parquet files work.
You can read and write Parquet files with Python using the pyarrow package.
Let’s learn how that works now!
Installing pyarrow

The first step is to make sure you have everything you need. In addition to the Python programming language, you will also need pyarrow and the pandas package. You will use pandas because it is another Python package that uses columns as a data format and works well with Parquet files.
You can use pip to install both of these packages. Open up your terminal and run the following command:
python -m pip install pyarrow pandas

If you use Anaconda, you’ll want to install pyarrow using this command instead:

conda install -c conda-forge pyarrow

Anaconda should already include pandas, but if not, you can use the same command above by replacing pyarrow with pandas.
Now that you have pyarrow and pandas installed, you can use them to read and write Parquet files!
Writing Parquet Files with Python

Writing Parquet files with Python is pretty straightforward. The code to turn a pandas DataFrame into a Parquet file is about ten lines.
Open up your favorite Python IDE or text editor and create a new file. You can name it something like parquet_file_writer.py or use some other descriptive name. Then enter the following code:
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq


def write_parquet(df: pd.DataFrame, filename: str) -> None:
    table = pa.Table.from_pandas(df)
    pq.write_table(table, filename)


if __name__ == "__main__":
    data = {"Languages": ["Python", "Ruby", "C++"],
            "Users": [10000, 5000, 8000],
            "Dynamic": [True, True, False],
            }
    df = pd.DataFrame(data=data, index=list(range(1, 4)))
    write_parquet(df, "languages.parquet")
For this example, you have three imports:
- One for pandas, so you can create a DataFrame
- One for pyarrow, to create a special pyarrow.Table object
- One for pyarrow.parquet to transform the table object into a Parquet file

The write_parquet() function takes in a pandas DataFrame and the file name or path to save the Parquet file to. Then, you transform the DataFrame into a pyarrow Table object before converting that into a Parquet file using the write_table() method, which writes it to disk.
Now you are ready to read that file you just created!
Reading Parquet Files with Python

Reading the Parquet file you created earlier with Python is even easier. You’ll need about half as many lines of code!
You can put the following code into a new file called something like parquet_file_reader.py if you want to:
import pyarrow.parquet as pq


def read_parquet(filename: str) -> None:
    table = pq.read_table(filename)
    df = table.to_pandas()
    print(df)


if __name__ == "__main__":
    read_parquet("languages.parquet")
In this example, you read the Parquet file into a pyarrow Table format and then convert it to a pandas DataFrame using the Table’s to_pandas() method.
When you print out the contents of the DataFrame, you will see the following:
  Languages  Users  Dynamic
1    Python  10000     True
2      Ruby   5000     True
3       C++   8000    False
You can see from the output above that the DataFrame contains all data you saved.
One of the strengths of using a Parquet file is that you can read just parts of the file instead of the whole thing. For example, you can read in just some of the columns rather than the whole file!
Here’s an example of how that works:
import pyarrow.parquet as pq


def read_columns(filename: str, columns: list[str]) -> None:
    table = pq.read_table(filename, columns=columns)
    print(table)


if __name__ == "__main__":
    read_columns("languages.parquet", columns=["Languages", "Users"])
To read in just the “Languages” and “Users” columns from the Parquet file, you pass a list containing those column names to read_table() via its columns parameter.
Here’s the output when you run this code:
pyarrow.Table
Languages: string
Users: int64
----
Languages: [["Python","Ruby","C++"]]
Users: [[10000,5000,8000]]
This outputs the pyarrow Table format, which differs slightly from a pandas DataFrame. It tells you information about the different columns; for example, Languages are strings, and Users are of type int64.
If you prefer to work only with pandas DataFrames, the pyarrow package allows that too. As long as you know the Parquet file contains pandas DataFrames, you can use read_pandas() instead of read_table().
Here’s a code example:
import pyarrow.parquet as pq


def read_columns_pandas(filename: str, columns: list[str]) -> None:
    table = pq.read_pandas(filename, columns=columns)
    df = table.to_pandas()
    print(df)


if __name__ == "__main__":
    read_columns_pandas("languages.parquet", columns=["Languages", "Users"])
When you run this example, the output is a DataFrame that contains just the columns you asked for:
  Languages  Users
1    Python  10000
2      Ruby   5000
3       C++   8000
One advantage of using the read_pandas() and to_pandas() methods is that they will maintain any additional index column data in the DataFrame, while the pyarrow Table may not.
Reading Parquet File Metadata

You can also get the metadata from a Parquet file using Python. Getting the metadata can be useful when you need to inspect an unfamiliar Parquet file to see what type(s) of data it contains.
Here’s a small code snippet that will read the Parquet file’s metadata and schema:
import pyarrow.parquet as pq


def read_metadata(filename: str) -> None:
    parquet_file = pq.ParquetFile(filename)
    metadata = parquet_file.metadata
    print(metadata)
    print(f"Parquet file: {filename} Schema")
    print(parquet_file.schema)


if __name__ == "__main__":
    read_metadata("languages.parquet")
There are two ways to get the Parquet file’s metadata:
- Use pq.ParquetFile to read the file and then access the metadata property
- Use pq.read_metadata(filename) instead

The benefit of the former method is that you can also access the schema property of the ParquetFile object.
When you run this code, you will see this output:
created_by: parquet-cpp-arrow version 15.0.2
num_columns: 4
num_rows: 3
num_row_groups: 1
format_version: 2.6
serialized_size: 2682
Parquet file: languages.parquet Schema
required group field_id=-1 schema {
optional binary field_id=-1 Languages (String);
optional int64 field_id=-1 Users;
optional boolean field_id=-1 Dynamic;
optional int64 field_id=-1 __index_level_0__;
}
Nice! You can read the output above to learn the number of rows and columns of data and the size of the data. The schema tells you what the field types are.
Wrapping Up

Parquet files are becoming more popular in big data and data science-related fields. Python’s pyarrow package makes working with Parquet files easy. You should spend some time experimenting with the code in this tutorial and using it for some of your own Parquet files.
When you want to learn more, check out the Parquet documentation.
The post How to Read and Write Parquet Files with Python appeared first on Mouse Vs Python.
May 2, 2024
The Python Show Podcast Ep 39 – Buttondown – A Python SaaS with Justin Duke
In this episode, our guest is the founder of Buttondown, a Python-based Software as a Service (SaaS) application for creating and managing newsletters.
Mike Driscoll, the host of the show, chats with Justin about the following topics:
- Why he created a SaaS with Python
- Favorite Python packages or modules
- Python web frameworks
- Entrepreneurship
- AI and programming
- and more!
The post The Python Show Podcast Ep 39 – Buttondown – A Python SaaS with Justin Duke appeared first on Mouse Vs Python.
April 30, 2024
How to Watermark a Graph with Matplotlib
Matplotlib is one of the most popular data visualization packages for the Python programming language. It allows you to create many different charts and graphs. This tutorial focuses on adding a “watermark” to your graph. If you need to learn the basics, you might want to check out Matplotlib—An Intro to Creating Graphs with Python.
Let’s get started!
Installing Matplotlib

If you don’t have Matplotlib on your computer, you must install it. Fortunately, you can use pip, the Python package manager utility that comes with Python.
Open up your terminal or command prompt and run the following command:
python -m pip install matplotlib

Pip will now install Matplotlib and any dependencies that Matplotlib needs to work properly. Assuming that Matplotlib installs successfully, you are good to go!
Watermarking Your Graph

Adding a watermark to a graph is a fun way to learn how to use Matplotlib. For this example, you will create a simple bar chart and then add some text. The text will be added at an angle across the graph as a watermark.
Open up your favorite Python IDE or text editor and create a new Python file. Then add the following code:
import matplotlib.pyplot as plt


def bar_chart(numbers, labels, pos):
    fig = plt.figure(figsize=(5, 8))
    plt.bar(pos, numbers, color="red")
    # add a watermark
    fig.text(1, 0.15, "Mouse vs Python",
             fontsize=45, color="blue",
             ha="right", va="bottom", alpha=0.4,
             rotation=25)
    plt.xticks(ticks=pos, labels=labels)
    plt.show()


if __name__ == "__main__":
    numbers = [2, 1, 4, 6]
    labels = ["Electric", "Solar", "Diesel", "Unleaded"]
    pos = list(range(4))
    bar_chart(numbers, labels, pos)
Your bar_chart() function takes in some numbers, labels, and a list of positions for where the bars should be placed. You then create a figure to put your plot into. Then you create the bar chart using the list of bar positions and the numbers. You also tell the chart that you want the bars to be colored “red”.
The next step is to add a watermark. To do that, you call fig.text() which lets you add text on top of your plot. Here is a quick listing of the arguments that you need to pass in:
- x, y – The first two arguments are the x/y coordinates for the text
- fontsize – The size of the font
- color – The color of the text
- ha – Horizontal alignment
- va – Vertical alignment
- alpha – How transparent the text should be
- rotation – How many degrees to rotate the text

The last bit of code in bar_chart() adds the ticks and labels to the bottom of the plot.
When you run this code, you will see something like this:
Isn’t that neat? You now have a simple plot, and you know how to add semi-transparent text to it, too!
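A common variation is to anchor the watermark to the Axes rather than the figure, using transform=ax.transAxes so the text stays centered no matter what the data limits are. Here is a sketch using the headless Agg backend (the bar data and styling here are illustrative, not from the example above):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripts and tests
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar([0, 1, 2, 3], [2, 1, 4, 6], color="red")

# (0.5, 0.5) in axes coordinates is always the center of the plot,
# regardless of the data plotted
ax.text(0.5, 0.5, "Mouse vs Python",
        transform=ax.transAxes,
        fontsize=30, color="gray", alpha=0.4,
        ha="center", va="center", rotation=25)

labels = [t.get_text() for t in ax.texts]
print(labels)
```

With fig.text(), the watermark is positioned relative to the whole figure; with transform=ax.transAxes, it follows the plotting area itself, which is usually what you want for multi-panel figures.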
Wrapping Up

Proper attribution is important in academics and business. Knowing how to add a watermark to your data visualization can help you do that. You now have that knowledge when using Matplotlib.
The Matplotlib package can do many other types of plots and provides much more customization than what is covered here. Check out its documentation to learn more!
The post How to Watermark a Graph with Matplotlib appeared first on Mouse Vs Python.