Kindle Notes & Highlights
Inheritance of interface creates a subtype, implying an “is-a” relationship. This is best done with ABCs.
Inheritance of implementation avoids code duplication by reuse. Mixins can help with this.
Inheritance for code reuse is an implementation detail, and it can often be replaced by composition and delegation. On the other hand, interface inheritance is the backbone of a framework. Interface inheritance should use only ABCs as base classes, if possible.
In modern Python, if a class is intended to define an interface, it should be an explicit ABC or a typing.Protocol subclass. An ABC should subclass only abc.ABC or other ABCs. Multiple inheritance of ABCs is not problematic.
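A minimal sketch of the two ways to define an explicit interface mentioned above; the class names are illustrative, not from the book:

    from abc import ABC, abstractmethod
    from typing import Protocol

    class Repository(ABC):
        # Nominal interface: implementers must subclass it explicitly.
        @abstractmethod
        def get(self, key: str) -> str: ...

    class SupportsGet(Protocol):
        # Structural interface (static duck typing): any class with a
        # matching get() method conforms, no inheritance required.
        def get(self, key: str) -> str: ...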
If a class is designed to provide method implementations for reuse by multiple unrelated subclasses, without implying an “is-a” relationship, it should be an explicit mixin class. Conceptually, a mixin does not define a new type; it merely bundles methods for reuse. A mixin should never be instantiated, and concrete classes should not inherit only from a mixin. Each mixin should provide a single specific behavior, implementing few and very closely related methods. Mixins should avoid keeping any internal state; i.e., a mixin class should not have instance attributes.
A class that is constructed primarily by inheriting from mixins and does not add its own structure or behavior is called an aggregate class.
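A small sketch of these guidelines, with illustrative names: the mixin keeps no state and bundles one behavior, and the final class aggregates it with a concrete base without adding structure of its own.

    class UpperCaseMixin:
        # Bundles a single behavior for reuse; never instantiated on its own
        # and holds no instance attributes of its own.
        def shout(self) -> str:
            return self.text.upper()  # relies on the host class providing .text

    class Message:
        def __init__(self, text: str):
            self.text = text

    class LoudMessage(UpperCaseMixin, Message):
        # Aggregate-style class: composed from a mixin and a concrete base,
        # adding no structure or behavior of its own.
        pass

    print(LoudMessage("hello").shout())  # HELLO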
Subclassing any complex class and overriding its methods is error-prone because the superclass methods may ignore the subclass overrides in unexpected ways. As much as possible, avoid overriding methods, or at least restrict yourself to subclassing classes that are designed to be easily extended, and only in the ways in which they were designed to be extended.
PEP 591 introduces a @final decorator that can be applied to classes or individual methods, so that IDEs or type checkers can report misguided attempts to subclass those classes or override those methods.
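The decorator lives in the typing module since Python 3.8; a short sketch:

    from typing import final

    @final
    class Settings:
        # Type checkers report any attempt to subclass this class.
        pass

    class Base:
        @final
        def compute(self) -> int:
            # Type checkers report any override of this method in a subclass.
            return 42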
Subclassing concrete classes is more dangerous than subclassing ABCs and mixins, because instances of concrete classes usually have internal state that can easily be corrupted when you override methods that depend on that state.
“all non-leaf classes should be abstract.” In other words, Meyer recommends that only abstract classes should be subclassed.
If you must use subclassing for code reuse, then the code intended for reuse should be in mixin methods of ABCs or in explicitly named mixin classes.
One of the most successful languages created in the 21st century is Go. It doesn’t have a construct called “class,” but you can build types that are structs of encapsulated fields and you can attach methods to those structs. Go allows the definition of interfaces that are checked by the compiler using structural typing, a.k.a. static duck typing—very similar to what we now have with protocol types since Python 3.8. Go has special syntax for building types and interfaces by composition, but it does not support inheritance—not even among interfaces.
So perhaps the best advice about inheritance is: avoid it if you can. But often, we don’t have a choice: the frameworks we use impose their own design choices.
When it comes to reading clarity, properly-done composition is superior to inheritance. Since code is much more often read than written, avoid subclassing in general, but especially don’t mix the various types of inheritance, and don’t use subclassing for code sharing.
[We] started to push on the inheritance idea as a way to let novices build on frameworks that could only be designed by experts.
I learned a painful lesson that for small programs, dynamic typing is great. For large programs you need a more disciplined approach. And it helps if the language gives you that discipline rather than telling you “Well, you can do whatever you want”.
Aiming for 100% of annotated code may lead to type hints that add lots of noise but little value. Refactoring to simplify type hinting can lead to cumbersome APIs. Sometimes it’s better to be pragmatic and leave a piece of code without type hints.
TypedDict provides two things: class-like syntax to annotate a dict with type hints for the value of each “field,” and a constructor that tells the type checker to expect a dict with the keys and values as specified.
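A minimal sketch; the field names here are illustrative:

    from typing import TypedDict

    class BookDict(TypedDict):
        isbn: str
        title: str
        authors: list[str]
        pagecount: int

    # Class-like syntax for the type checker; at runtime BookDict is a plain dict.
    book: BookDict = {
        'isbn': '0134757599',
        'title': 'Refactoring, 2e',
        'authors': ['Martin Fowler', 'Kent Beck'],
        'pagecount': 478,
    }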
Static type checking is unable to prevent errors with code that is inherently dynamic, such as json.loads(), which builds Python objects of different types at runtime.
when handling data with a dynamic structure, such as JSON or XML, TypedDict is absolutely not a replacement for data validation at runtime. For that, use pydantic.
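A hedged sketch of what runtime validation looks like with pydantic; the Book model below is a hypothetical illustration, not an example from the book.

    import json
    from pydantic import BaseModel, ValidationError

    class Book(BaseModel):
        isbn: str
        title: str
        pagecount: int

    raw = '{"isbn": "0134757599", "title": "Refactoring, 2e", "pagecount": "oops"}'
    try:
        book = Book(**json.loads(raw))  # fields are validated at runtime
    except ValidationError as exc:
        print(exc)  # reports that pagecount is not a valid integer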
The typing.cast() special function provides one way to handle type checking malfunctions or incorrect type hints in code we can’t fix.
Casts are used to silence spurious type checker warnings and give the type checker a little help when it can’t quite understand what is going on.
Don’t get too comfortable using cast to silence Mypy, because Mypy is usually right when it reports an error. If you are using cast very often, that’s a code smell. Your team may be misusing type hints, or you may have low-quality dependencies in your codebase.
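A typical use, loosely based on the kind of helper the book discusses; cast() has no runtime effect, it only informs the checker.

    from typing import cast

    def find_first_str(items: list[object]) -> str:
        index = next(i for i, x in enumerate(items) if isinstance(x, str))
        # We know items[index] is a str at this point, but the checker still
        # sees object, so cast() silences the spurious error without any
        # runtime check or cost.
        return cast(str, items[index])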
At import time, Python reads the type hints in functions, classes, and modules, and stores them in attributes named __annotations__.
The increased use of type hints raised two problems: importing modules uses more CPU and memory when many type hints are used, and referring to types not yet defined requires using strings instead of actual types.
Companies using Python at a very large scale want the benefits of static typing, but they don’t want to pay the price for the evaluation of the type hints at import time. Static checking happens at developer workstations and dedicated CI servers, but loading modules happens at a much higher frequency and volume in the production containers, and this cost is not negligible at scale.
Avoid reading __annotations__ directly; instead, use inspect.get_annotations (from Python 3.10) or typing.get_type_hints (since Python 3.5).
Write a custom function of your own as a thin wrapper around inspect.get_annotations or typing.get_type_hints, and have the rest of your codebase call that custom function, so that future changes are localized to a single function.
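A minimal sketch of such a wrapper, assuming you want to prefer inspect.get_annotations when it is available; the function name is illustrative.

    import sys

    def annotations_of(obj) -> dict:
        # Single choke point for annotation access, so future changes to the
        # recommended API affect only this function.
        if sys.version_info >= (3, 10):
            from inspect import get_annotations
            return get_annotations(obj, eval_str=True)  # resolves string annotations
        from typing import get_type_hints
        return get_type_hints(obj)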
BeverageDispenser(Generic[T]) is invariant when BeverageDispenser[OrangeJuice] is not compatible with BeverageDispenser[Juice]—despite the fact that OrangeJuice is a subtype-of Juice.
A generic type L is invariant when there is no supertype or subtype relationship between two parameterized types, regardless of the relationship that may exist between the actual parameters. In other words, if L is invariant, then L[A] is not a supertype or a subtype of L[B]. They are inconsistent in both ways.
If a formal type parameter defines a type for data that comes out of the object, it can be covariant.
If a formal type parameter defines a type for data that goes into the object after its initial construction, it can be contravariant.
If a formal type parameter defines a type for data that comes out of the object and the same parameter defines a type for data that goes into the object, it must be invariant.
To err on the safe side, make formal type parameters invariant.
Covariance or contravariance is not a property of a type variable, but a property of a generic class defined using this variable.
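A sketch of how these rules look when declaring type variables; the class names loosely echo the book’s beverage examples but are illustrative here.

    from typing import Generic, TypeVar

    class Beverage: ...
    class Juice(Beverage): ...
    class OrangeJuice(Juice): ...

    T_co = TypeVar('T_co', covariant=True)               # data only comes out
    T_contra = TypeVar('T_contra', contravariant=True)   # data only goes in

    class Dispenser(Generic[T_co]):
        # Covariant: Dispenser[OrangeJuice] is acceptable where
        # Dispenser[Juice] is expected.
        def dispense(self) -> T_co: ...

    class TrashCan(Generic[T_contra]):
        # Contravariant: TrashCan[Beverage] is acceptable where
        # TrashCan[Juice] is expected.
        def put(self, item: T_contra) -> None: ...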
Operators that appear between operands, like 1 + rate, are infix operators.
It’s clear that infix operators make formulas more readable. Operator overloading is necessary to support infix operator notation with user-defined or extension types, such as NumPy arrays. Having operator overloading in a high-level, easy-to-use language was probably a key reason for the huge success of Python in data science, including financial and scientific applications.
Operator overloading allows user-defined objects to interoperate with infix operators such as + and |, or unary operators like - and ~.
We cannot change the meaning of the operators for the built-in types.
We cannot create new operators, only overload existing ones.
A few operators can’t be overloaded: is, and, or, not (but the bitwise &, |, ~ can).
A general rule of operators: always return a new object. In other words, do not modify the receiver (self), but create and return a new instance of a suitable type.
For unary +, if the receiver is immutable you should return self; otherwise, return a copy of self.
Special methods implementing unary or infix operators should never change the value of the operands. Expressions with such operators are expected to produce results by creating new objects. Only augmented assignment operators may change the first operand (self),
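A sketch of the rule with an illustrative vector-like class: __add__ builds a new object, while only __iadd__ mutates self in place.

    class Vec2:
        def __init__(self, x: float, y: float):
            self.x, self.y = x, y

        def __add__(self, other: 'Vec2') -> 'Vec2':
            # Infix + must not touch self or other; it returns a new instance.
            return Vec2(self.x + other.x, self.y + other.y)

        def __iadd__(self, other: 'Vec2') -> 'Vec2':
            # Augmented assignment (+=) is the only place where mutating the
            # first operand in place is acceptable, for mutable types.
            self.x += other.x
            self.y += other.y
            return self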
If a has __add__, call a.__add__(b) and return result unless it’s NotImplemented.
If a doesn’t have __add__, or calling it returns NotImplemented, check if b has __radd__, then call b.__radd__(a) and return result unless it’s NotImplemented.
If b doesn’t have __radd__, or calling it returns NotImplemented, raise TypeError with an unsupported operand types message.
The __radd__ method is called the “reflected” or “reversed” version of __add__. I prefer to call them “reversed” special methods.
Do not confuse NotImplemented with NotImplementedError. The first, NotImplemented, is a special singleton value that an infix operator special method should return to tell the interpreter it cannot handle a given operand. In contrast, NotImplementedError is an exception that stub methods in abstract classes may raise to warn that subclasses must implement them.
if an operator special method cannot return a valid result because of type incompatibility, it should return NotImplemented and not raise TypeError. By returning NotImplemented, you leave the door open for the implementer of the other operand type to perform the operation when Python tries the reversed method call.
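A sketch of the dispatch rules above with an illustrative Money class (not from the book): returning NotImplemented from __add__ lets Python fall back to the other operand’s __radd__.

    class Money:
        def __init__(self, amount: int):
            self.amount = amount

        def __add__(self, other):
            if isinstance(other, Money):
                return Money(self.amount + other.amount)
            if isinstance(other, int):
                return Money(self.amount + other)
            return NotImplemented  # let Python try other.__radd__(self)

        def __radd__(self, other):
            # Handles e.g. 3 + Money(2) after int.__add__ returns NotImplemented,
            # and the implicit 0 + Money(...) at the start of sum().
            return self + other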