Upload of pre-existing files

This commit is contained in:
Marcello Lamonaca 2021-01-31 11:05:37 +01:00
commit 4c21152830
150 changed files with 730703 additions and 0 deletions

# Jupyter Notebooks Cheat Sheet
## MAGIC COMMANDS
`%quickref` Display the IPython Quick Reference Card
`%magic` Display detailed documentation for all of the available magic commands
`%debug` Enter the interactive debugger at the bottom of the last exception traceback
`%hist` Print command input (and optionally output) history
`%pdb` Automatically enter debugger after any exception
`%paste` Execute pre-formatted Python code from clipboard
`%cpaste` Open a special prompt for manually pasting Python code to be executed
`%reset` Delete all variables / names defined in interactive namespace
`%page OBJECT` Pretty print the object and display it through a pager
`%run script.py` Run a Python script inside IPython
`%prun statement` Execute statement with cProfile and report the profiler output
`%time statement` Report the execution time of a single statement
`%timeit statement` Run a statement multiple times to compute an ensemble average execution time. Useful for timing code with very short execution time
`%who`, `%who_ls`, `%whos` Display variables defined in interactive namespace, with varying levels of information / verbosity
`%xdel variable` Delete a variable and attempt to clear any references to the object in the IPython internals
## INTERACTING WITH THE OPERATING SYSTEM
`!cmd` Execute cmd in the system shell
`output = !cmd args` Run cmd and store the stdout in output
`%alias alias_name cmd` Define an alias for a system (shell) command
`%bookmark` Utilize IPython's directory bookmarking system
`%cd directory` Change system working directory to passed directory
`%pwd` Return the current system working directory
`%pushd directory` Place current directory on stack and change to target directory
`%popd` Change to directory popped off the top of the stack
`%dirs` Return a list containing the current directory stack
`%dhist` Print the history of visited directories
`%env` Return the system environment variables as a dict
Input variables are stored in variables named like `iX`, where `X` is the input line number
IPython is capable of logging the entire console session including input and output
Logging is turned on by typing `%logstart`
Starting a line in IPython with an exclamation point `!`, or bang, tells IPython to execute everything after the bang in the system shell
The console output of a shell command can be stored in a variable by assigning the !-escaped expression to a variable
## TIMING CODE
`%time` runs a statement once, reporting the total execution time
`%timeit` runs an arbitrary statement multiple times (using a heuristic for the number of runs) to produce a fairly accurate average runtime
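For example, in an IPython or Jupyter cell (the timed expression is just an illustrative placeholder):
```py
# illustrative IPython cell
%time sum(range(1_000_000))    # single run: reports wall clock and CPU time
%timeit sum(range(1_000_000))  # many runs: reports an averaged per-loop time
```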

# [Click](https://click.palletsprojects.com) Lib
## Command Creation
```py
import click

# the decorator converts the function into a Command which then can be invoked
@click.command()
def hello():
    click.echo('Hello World!')

if __name__ == '__main__':
    hello()
```
### Nesting Commands
Commands can be attached to other commands of type `Group`. This allows arbitrary nesting of scripts. As an example here is a script that implements two commands for managing databases:
```py
@click.group()
def cli():
    pass

@click.command()
def initdb():
    click.echo('Initialized the database')

@click.command()
def dropdb():
    click.echo('Dropped the database')

cli.add_command(initdb)
cli.add_command(dropdb)
```
The `group()` decorator works like the `command()` decorator, but creates a Group object instead which can be given multiple subcommands that can be attached with `Group.add_command()`.
For simple scripts, it's also possible to automatically attach and create a command by using the `Group.command()` decorator instead.
The above script can instead be written like this:
```py
@click.group()
def cli():
    pass

@cli.command()
def initdb():
    click.echo('Initialized the database')

@cli.command()
def dropdb():
    click.echo('Dropped the database')
```
You would then invoke the Group in your setuptools entry points or other invocations:
```py
if __name__ == '__main__':
cli()
```
### Adding Parameters
To add parameters, use the `option()` and `argument()` decorators:
```py
@click.command()
@click.option('--count', default=1, help='number of greetings')
@click.argument('name')
def hello(count, name):
    for x in range(count):
        click.echo(f'Hello {name}!')
```
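As a quick sketch of how the parameters behave, the command can be exercised in-process with click's test runner (this assumes the `hello` command defined just above; the arguments are illustrative):
```py
from click.testing import CliRunner

runner = CliRunner()
# invoke the hello command with an option and an argument
result = runner.invoke(hello, ['--count', '2', 'World'])
print(result.output)  # "Hello World!" printed twice
```
From a shell, the equivalent invocation would be `python hello.py --count 2 World`, assuming the script is saved as `hello.py`.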

# [SQLAlchemy](https://www.sqlalchemy.org/) Lib

# Tkinter Module/Library Cheat Sheet
## Standard Imports
```py
from tkinter import * # import Python Tk Binding
from tkinter import ttk # import Themed Widgets
```
## GEOMETRY MANAGEMENT
Putting widgets on screen.
master widget --> toplevel window, frame
slave widget --> widgets contained in master widget
geometry managers determine the size, position, and drawing order of widgets
## EVENT HANDLING
event loop receives events from the OS
customizable events provide a callback as a widget configuration
```py
widget.bind('event', function) # method to capture any event and then execute an arbitrary piece of code (generally a function or lambda)
```
VIRTUAL EVENT --> high-level event generated by a widget (listed in widget docs)
## WIDGETS
"idgets are objects and all things on screen. All widgets are children of a window.
```py
widget_name = tk_object(parent_window) # widget is inserted into widget hierarchy
```
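A minimal runnable sketch tying the ideas above together (one window, a frame as master, two slave widgets, and the event loop); the label and button text are illustrative:
```py
from tkinter import *
from tkinter import ttk

root = Tk()                          # toplevel window (master)
frame = ttk.Frame(root, padding=10)  # frame widget, container for other widgets
frame.grid()                         # geometry manager places the frame in the window
ttk.Label(frame, text="Hello Tk").grid(column=0, row=0)
ttk.Button(frame, text="Quit", command=root.destroy).grid(column=1, row=0)
root.mainloop()                      # start the event loop
```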
## FRAME WIDGET
Displays a single rectangle, used as container for other widgets
```py
frame = ttk.Frame(parent, width=None, height=None, borderwidth=num:int)
# BORDERWIDTH: sets frame border width (default: 0)
# width, height MUST be specified if frame is empty, otherwise determined by parent geometry manager
```
### FRAME PADDING
Extra space inside widget (margin).
```py
frame['padding'] = num # same padding for every border
frame['padding'] = (horizontal, vertical) # set horizontal THEN vertical padding
frame['padding'] = (left, top, right, bottom) # set left, top, right, bottom padding
# RELIEF: set border style, [flat (default), raised, sunken, solid, ridge, groove]
frame['relief'] = border_style
```
## LABEL WIDGET
Display text or image without interactivity.
```py
label = ttk.Label(parent, text='label text')
```
### DEFINING AND UPDATING LABEL TEXT
```py
var = StringVar() # variable containing text, watches for changes. Use get, set methods to interact with the value
label['textvariable'] = var # attach var to label (only of type StringVar)
var.set("new text label") # change label text
```
### DISPLAY IMAGES (2 steps)
```py
image = PhotoImage(file='filename') # create image object
label['image'] = image # use image config
```
### DISPLAY IMAGE AND-OR TEXT
```py
label['compound'] = value
```
Compound value:
- none (img if present, text otherwise)
- text (text only)
- image (image only)
- center (text in center of image)
- top (image above text), left, bottom, right
## LAYOUT
Specifies the edge or corner to which the label is attached.
```py
label['anchor'] = compass_direction #compass_direction: n, ne, e, se, s, sw, w, nw, center
```
### MULTI-LINE TEXT WRAP
```py
# use \n for multi line text
label['wraplength'] = size # max line length
```
### CONTROL TEXT JUSTIFICATION
```py
label['justify'] = value #value: left, center, right
label['relief'] = label_style
label['foreground'] = color # color passed by name or HEX RGB code
label['background'] = color # color passed by name or HEX RGB code
```
### FONT STYLE (use with caution)
```py
# used outside style option
label['font'] = font
```
Fonts:
- TkDefaultFont -- default for all GUI items
- TkTextFont -- used for entry widgets, listboxes, etc
- TkFixedFont -- standard fixed-width font
- TkMenuFont -- used for menu items
- TkHeadingFont -- for column headings in lists and tables
- TkCaptionFont -- for window and dialog caption bars
- TkSmallCaptionFont -- smaller caption for subwindows or tool dialogs
- TkIconFont -- for icon caption
- TkTooltipFont -- for tooltips
## BUTTON
Press to perform some action
```py
button = ttk.Button(parent, text='button_text', command=action_performed)
```
### TEXT or IMAGE
```py
button['text/textvariable'], button['image'], button['compound']
```
### BUTTON INVOCATION
```py
button.invoke() # button activation in the program
```
### BUTTON STATE
Activate (interactive) or deactivate (not interactive) the widget.
```py
button.state(['disabled']) # set the disabled flag, disabling the button
button.state(['!disabled']) # clear the disabled flag
button.instate(['disabled']) # return true if the button is disabled, else false
button.instate(['!disabled']) # return true if the button is not disabled, else false
button.instate(['!disabled'], cmd) # execute 'cmd' if the button is not disabled
# WIDGET STATE FLAGS: active, disabled, focus, pressed, selected, background, readonly, alternate, invalid
```
## CHECKBUTTON
Button with a binary value of some kind (e.g. a toggle) that also invokes a command callback
```py
checkbutton_var = TkVarType
check = ttk.Checkbutton(parent, text='button text', command=action_performed, variable=checkbutton_var, onvalue=value_on, offvalue=value_off)
```
### WIDGET VALUE
Variable option holds value of button, updated by widget toggle.
DEFAULT: 1 (while checked), 0 (while unchecked)
`onvalue`, `offvalue` are used to personalize the checked and unchecked values
if the linked variable is empty or different from onvalue/offvalue, the state flag is set to alternate
checkbutton won't set the linked variable (MUST be done in the program)
### CONFIG OPTIONS
```py
check['text/textvariable']
check['image']
check['compound']
check.state(['flag'])
check.instate(['flag'])
```
## RADIOBUTTON
Multiple-choice selection (good if options are few).
```py
#RADIOBUTTON CREATION (usually as a set)
radio_var = TkVarType
radio_1 = ttk.Radiobutton(parent, text='button text', variable=radio_var, value=button_1_value)
radio_2 = ttk.Radiobutton(parent, text='button text', variable=radio_var, value=button_2_value)
radio_3 = ttk.Radiobutton(parent, text='button text', variable=radio_var, value=button_3_value)
# if linked value does not exist flag state is alternate
# CONFIG OPTIONS
radio['text/textvariable']
radio['image']
radio['compound']
radio.state(['flag'])
radio.instate(['flag'])
```
## ENTRY
Single line text field accepting a string.
```py
entry_var = StringVar()
entry = ttk.Entry(parent, textvariable=entry_var, width=char_num, show=symbol)
# SHOW: replaces the entry text with symbol, used for passwords
# entries don't have an associated label, a separate Label widget is needed
```
### CHANGE ENTRY VALUE
```py
entry.get() # returns entry value
entry.delete(start, 'end') # delete between two indices, 0-based
entry.insert(index, 'text value') # insert new text at a given index
```
### ENTRY CONFIG OPTIONS
```py
entry.state(['flag'])
entry.instate(['flag'])
```
## COMBOBOX
Drop-down list of available options.
```py
combobox_var = StringVar()
combobox = ttk.Combobox(parent, textvariable=combobox_var)
combobox.get() # return combobox current value
combobox.set(value) # set combobox new value
combobox.current() # returns which item in the predefined values list is selected (0-based index of the provided list, -1 if not in the list)
# combobox will generate a bindable <<ComboboxSelected>> virtual event whenever the value changes
combobox.bind('<<ComboboxSelected>>', function)
```
### PREDEFINED VALUES
```py
combobox['values'] = (value_1, value_2, ...) # provides a list of choose-able values
combobox.state(['readonly']) # restricts choose-able values to those provided with 'values' config option
# SUGGESTION: call selection clear method on value change (on ComboboxSelected event) to avoid visual oddities
```
## LISTBOX (Tk Classic)
Display list of single-line items, allows browsing and multiple selection (part of Tk classic, missing in themed Tk widgets).
```py
lstbx = Listbox(parent, height=num, listvariable=item_list:list)
# listvariable links a variable (MUST BE a list) to the listbox, each element is an item of the listbox
# manipulation of the list changes the listbox
```
### SELECTING ITEMS
```py
lstbx['selectmode'] = mode # MODE: browse (single selection), extended (multiple selection)
lstbx.curselection() # returns list of indices of selected items
# on selection change: generate event <ListboxSelect>
# often each string in the program is associated with some other data item
# keep a second list, parallel to the list of strings displayed in the listbox, which will hold the associated objects
# (association by index with .curselection() or with a dict).
```
## SCROLLBAR
```py
scroll = ttk.Scrollbar(parent, orient=direction, command=widget.view)
# ORIENT: VERTICAL, HORIZONTAL
# WIDGET.VIEW: .xview, .yview
# NEEDS ASSOCIATED WIDGET SCROLL CONFIG
widget.configure(xscrollcommand=scroll.set)
widget.configure(yscrollcommand=scroll.set)
```
## SIZEGRIP
Box in the bottom-right corner of the window, allows resizing.
```py
ttk.Sizegrip(parent).grid(column=999, row=999, sticky=(S, E))
```
## TEXT (Tk Classic)
Area accepting multiple lines of text.
```py
txt = Text(parent, width=num:int, height=num:int, wrap=flag) # width is character num, height is row num
# FLAG: none (no wrapping), char (wrap at every character), word (wrap at word boundaries)
txt['state'] = flag # FLAG: disabled, normal
# accepts xscrollcommand and yscrollcommand options and the xview, yview methods
txt.see(line_num.char_num) # ensure that given line is visible (line is 1-based, char is 0-based)
txt.insert(index, 'text value') # insert string at position index (index = line.char), 'end' is shortcut for end of text
txt.get(start, end) # return the text between two indices
txt.delete(start, end) # delete range of text
```
## PROGRESSBAR
Feedback about progress of a lengthy operation.
```py
progbar = ttk.Progressbar(parent, orient=direction, length=num:int, value=num, maximum=num:float, mode=mode)
# DIRECTION: VERTICAL, HORIZONTAL
# MODE: determinate (relative progress of completion), indeterminate (no estimate of completion)
# LENGTH: dimension in pixels
# VALUE: sets the progress, updates the bar as it changes
# MAXIMUM: total number of steps (DEFAULT: 100)
```
### DETERMINATE PROGRESS
```py
progbar.step(amount) # increment value of given amount (DEFAULT: 1.0)
```
### INDETERMINATE PROGRESS
```py
progbar.start() # starts progressbar
progbar.stop() # stops progressbar
```
## SCALE
Provide a numeric value through direct manipulation.
```py
scale = ttk.Scale(parent, orient=DIR, length=num:int, from_=num:float, to=num:float, command=cmd)
# COMMAND: calls cmd at every scale change, appends current value to func call
scale['value'] # set or read current value
scale.set(value) # set current value
scale.get() # get current value
```
## SPINBOX
Choose numbers. The spinbox chooses items from a list, arrows permit cycling through list items.
```py
spinval = StringVar()
spin = Spinbox(parent, from_=num, to=num, textvariable=spinval, increment=num, value=lst, wrap=boolean)
# INCREMENT: specifies increment/decrement applied by the arrow buttons
# VALUE: list of items associated with the spinbox
# WRAP: boolean value determining if the value should wrap around when beyond the start/end value
```
## GRID GEOMETRY MANAGER
Widgets are assigned a "column" number and a "row" number, which indicates their relative position to each other.
Column and row numbers must be integers, with the first column and row starting at 0.
Gaps in column and row numbers are handy to add more widgets in the middle of the user interface at a later time.
The width of each column (or height of each row) depends on the width or height of the widgets contained within the column or row.
Widgets can take up more than a single cell in the grid ("columnspan" and "rowspan" options).
### LAYOUT WITHIN CELL
By default, if a cell is larger than the widget contained in it, the widget will be centered within it,
both horizontally and vertically, with the master's background showing in the empty space around it.
The "sticky" option can be used to change this default behavior.
The value of the "sticky" option is a string of 0 or more of the compass directions "nsew", specifying which edges of the cell the widget should be "stuck" to.
Specifying two opposite edges means that the widget will be stretched so it is stuck to both.
Specifying "nsew" it will stick to every side.
### HANDLING RESIZE
Every column and row has a "weight" grid option associated with it, which tells it how much it should grow if there is extra room in the master to fill.
By default, the weight of each column or row is 0, meaning don't expand to fill space.
This is done using the "columnconfigure" and "rowconfigure" methods of grid.
Both "columnconfigure" and "rowconfigure" also take a "minsize" grid option, which specifies a minimum size.
### PADDING
Normally, each column or row will be directly adjacent to the next, so that widgets will be right next to each other.
"padx" puts a bit of extra space to the left and right of the widget, while "pady" adds extra space top and bottom.
A single value for the option puts the same padding on both left and right (or top and bottom),
while a two-value list lets you put different amounts on left and right (or top and bottom).
To add padding around an entire row or column, the "columnconfigure" and "rowconfigure" methods accept a "pad" option.
```py
widget.grid(column=num, row=num, columnspan=num, rowspan=num, sticky=(), padx=num, pady=num) # sticky: N, S, E, W
widget.columnconfigure(pad=num, weight=num)
widget.rowconfigure(pad=num, weight=num)
widget.grid_slaves() # returns map, list of widgets inside a master
widget.grid_info() # returns list of grid options
widget.grid_configure() # change one or more option
widget.grid_forget(slaves) # takes a list of slaves, removes slaves from grid (forgets slaves options)
widget.grid_remove(slaves) # takes a list of slaves, removes slaves from grid (remembers slaves options)
```
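A short sketch of a resizable layout using the grid options above (widget names, weights, and texts are illustrative):
```py
from tkinter import *
from tkinter import ttk

root = Tk()
content = ttk.Frame(root, padding=5)
content.grid(column=0, row=0, sticky=(N, S, E, W))   # frame stretches with the window

ttk.Label(content, text="Name").grid(column=0, row=0, sticky=W, padx=5, pady=5)
ttk.Entry(content).grid(column=1, row=0, sticky=(E, W), padx=5, pady=5)

# weights let the frame and the entry column absorb extra space on resize
root.columnconfigure(0, weight=1)
root.rowconfigure(0, weight=1)
content.columnconfigure(1, weight=1)

root.mainloop()
```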
## WINDOWS AND DIALOGS
### CREATING TOPLEVEL WINDOW
```py
tlw = Toplevel(parent) # child of the root window, no need to grid it
window.destroy()
# can destroy every widget
# destroying a parent also destroys its children
```
### CHANGING BEHAVIOR AND STYLE
```py
# WINDOW TITLE
window.title() # returns title of the window
window.title('new title') # sets title
# SIZE AND LOCATION
window.geometry(geo_specs)
'''full geometry specification: width * height +- x +- y (actual coordinates of screen)
+x --> x pixels from left edge
-x --> x pixels from right edge
+y --> y pixels from top edge
-y --> y pixels from bottom edge'''
# STACKING ORDER
# current stacking order (list from lowest to highest) --- NOT CLEANLY EXPOSED THROUGH TK API
root.tk.eval('wm stackorder ' + str(window))
# check if window is above or below
is_above = root.tk.eval('wm stackorder ' + str(window) + ' isabove ' + str(otherwindow)) == '1'
is_below = root.tk.eval('wm stackorder ' + str(window) + ' isbelow ' + str(otherwindow)) == '1'
# raise or lower windows
window.lift() # absolute position
window.lift(otherwin) # relative to other window
window.lower() # absolute position
window.lower(otherwin) # relative to other window
# RESIZE BEHAVIOR
window.resizable(boolean, boolean) # sets if resizable in width (1st param) and height (2nd param)
window.minsize(num, num) # sets min width and height
window.maxsize(num, num) # sets max width and height
# ICONIFYING AND WITHDRAWING
# WINDOW STATE: normal, iconic (iconified window), withdrawn, icon, zoomed
window.state() # returns current window state
window.state('state') # sets window state
window.iconify() # iconifies window
window.deiconify() # deiconifies window
```
### STANDARD DIALOGS
```py
# SELECTING FILES AND DIRECTORIES
# on Windows and Mac invokes underlying OS dialogs directly
from tkinter import filedialog
filename = filedialog.askopenfilename()
filename = filedialog.asksaveasfilename()
dirname = filedialog.askdirectory()
'''All of these commands produce modal dialogs, which means that the commands (and hence the program) will not continue running until the user submits the dialog.
The commands return the full pathname of the file or directory the user has chosen, or return an empty string if the user cancels out of the dialog.'''
# SELECTING COLORS
from tkinter import colorchooser
# returns HEX color code, INITIALCOLOR: existing color, presumably to replace
colorchooser.askcolor(initialcolor=hex_color_code)
# ALERT AND CONFIRMATION DIALOGS
from tkinter import messagebox
messagebox.showinfo(title="title", message='text') # simple box with message and OK button
messagebox.showerror(title="title", message='text')
messagebox.showwarning(title="title", message='text')
messagebox.askyesno(title="title", message='text', detail='secondary text', icon='icon')
messagebox.askokcancel(message='text', icon='icon', title='title', detail='secondary text', default=button) # DEFAULT: default button, ok or cancel
messagebox.askquestion(title="title", message='text', detail='secondary text', icon='icon')
messagebox.askretrycancel(title="title", message='text', detail='secondary text', icon='icon')
messagebox.askyesnocancel(title="title", message='text', detail='secondary text', icon='icon')
# ICON: info (default), error, question, warning
```
POSSIBLE ALERT/CONFIRMATION RETURN VALUES:
- `ok (default)` -- "ok"
- `okcancel` -- "ok" or "cancel"
- `yesno` -- "yes" or "no"
- `yesnocancel` -- "yes", "no" or "cancel"
- `retrycancel` -- "retry" or "cancel"
## SEPARATOR
```py
# horizontal or vertical line between groups of widgets
separator = ttk.Separator(parent, orient=direction)
# DIRECTION: horizontal, vertical
'''LABEL FRAME'''
# labelled frame, used to group widgets
lf = ttk.LabelFrame(parent, text='label')
'''PANED WINDOWS'''
# stack multiple resizable widgets
# panes are adjustable (drag sash between panes)
pw = ttk.PanedWindow(parent, orient=direction)
# DIRECTION: horizontal, vertical
lf1 = ttk.LabelFrame(...)
lf2 = ttk.LabelFrame(...)
pw.add(lf1) # add widget to paned window
pw.add(lf2)
pw.insert(position, subwindow) # insert widget at given position in list of panes (0, ..., n-1)
pw.forget(subwindow) # remove widget from pane
pw.forget(position) # remove widget from pane
```
### NOTEBOOK
Allows switching between multiple pages
```py
nb = ttk.Notebook(parent)
f1 = ttk.Frame(parent, ...) # child of notebook
f2 = ttk.Frame(parent, ...)
nb.add(subwindow, text='page title', state=flag)
# TEXT: name of page, STATE: normal, disabled (not selectable), hidden
nb.insert(position, subwindow, option=value)
nb.forget(subwindow)
nb.forget(position)
nb.tabs() # retrieve all tabs
nb.select() # return current tab
nb.select(position/subwindow) # change current tab
nb.tab(tabid, option) # retrieve tab (TABID: position or subwindow) option
nb.tab(tabid, option=value) # change tab option
```
## FONTS, COLORS, IMAGES
### NAMED FONTS
Creation of personalized fonts
```py
from tkinter import font
font_name = font.Font(family='font_family', size=num, weight='bold/normal', slant='roman/italic', underline=boolean, overstrike=boolean)
# FAMILY: Courier, Times, Helvetica (support guaranteed)
font.families() # all available font families
```
### COLORS
Specified w/ HEX RGB codes.
### IMAGES
`imgobj = PhotoImage(file='filename')`
`label['image'] = imgobj`
### IMAGES W/ Pillow
```py
from PIL import ImageTk, Image
myimg = ImageTk.PhotoImage(Image.open('filename'))
```

# OpenCV Lib
## Basics
### Read Image & Video
```py
import cv2 as cv
img = cv.imread("filename") # read and save the image as matrix of pixels
cv.imshow("Window Name", img) # show an image in a named window (takes name and pixel matrix)
```
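A window drawn with `imshow` only stays responsive while events are processed, so a minimal sketch pairs it with `waitKey`; reading video (per the section title) follows the same pattern frame by frame. File names below are placeholders:
```py
import cv2 as cv

img = cv.imread("photo.jpg")            # hypothetical image file
cv.imshow("Image", img)
cv.waitKey(0)                           # wait for a key press, keeps the window open

capture = cv.VideoCapture("video.mp4")  # hypothetical video file
while True:
    ok, frame = capture.read()          # read the next frame
    if not ok:                          # end of stream or read error
        break
    cv.imshow("Video", frame)
    if cv.waitKey(20) & 0xFF == ord('q'):  # quit early on 'q'
        break
capture.release()
cv.destroyAllWindows()
```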

# Pillow Library Cheat Sheet
## Standard Imports
```py
from PIL import Image
```
## OPENING IMAGE FILE
Raises `IOError` if the file cannot be opened.
```py
image = Image.open(filepath, mode) # open image file (returns Image object)
# FILEPATH: filename (string) or file object (must implement read, seek, tell methods)
image.format # image file extension
image.size # 2-tuple (width, height) in pixels
image.mode # defines number and name of bands in image, pixel type and depth
```
## SAVING IMAGE FILE
```py
image.save(filepath, fmt)
# FMT: optional format override
```
## IMAGE CROPPING
```py
box = (left, top, right, bottom) # position in pixels
cropped = image.crop(box)
```
## IMAGE PASTE
```py
# region dimension MUST be same as box
image.paste(region, box)
```
## SPLITTING AND MERGING BANDS
`image.mode` should be RGB
```py
r, g, b = image.split()
img = Image.merge("RGB", (r, g, b))
```
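A small end-to-end sketch of the operations above (file names and box coordinates are illustrative):
```py
from PIL import Image

image = Image.open("input.jpg")        # hypothetical input file
print(image.format, image.size, image.mode)

box = (100, 100, 400, 400)             # left, top, right, bottom (pixels)
region = image.crop(box)               # cut a 300x300 region out
image.paste(region, (0, 0, 300, 300))  # paste it elsewhere; target box must match the region size
image.save("output.png")               # format inferred from the file extension
```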

# PyCairo Cheat Sheet
## Definitions
To do some drawing in PyCairo, we must first create a `Drawing Context`.
The drawing context holds all of the graphics state parameters that describe how drawing is to be done.
This includes information such as line width, color, the surface to draw to, and many other things.
It allows the actual drawing functions to take fewer arguments to simplify the interface.
A `Path` is a collection of points used to create primitive shapes such as lines, arcs, and curves. There are two kinds of paths: open and closed paths.
In a closed path, starting and ending points meet. In an open path, starting and ending points do not meet. In PyCairo, we start with an empty path.
First, we define a path and then we make them visible by stroking and/or filling them. After each `stroke()` or `fill()` method call, the path is emptied.
We have to define a new path. If we want to keep the existing path for later drawing, we can use the `stroke_preserve()` and `fill_preserve()` methods.
A path is made of subpaths.
A `Source` is the paint we use in drawing. We can compare the source to a pen or ink that we use to draw the outlines and fill the shapes.
There are four kinds of basic sources: colors, gradients, patterns, and images.
A `Surface` is a destination that we are drawing to. We can render documents using the PDF or PostScript surfaces, directly draw to a platform via the Xlib and Win32 surfaces.
Before the source is applied to the surface, it is filtered first. The `Mask` is used as a filter.
It determines where the source is applied and where it is not. Opaque parts of the mask allow the source to be copied;
transparent parts prevent the source from being copied to the surface.
A `Pattern` represents a source when drawing onto a surface.
In PyCairo, a pattern is something that you can read from and that is used as the source or mask of a drawing operation.
Patterns can be solid, surface-based, or gradients.
## Initial Settings
### Context and Surface Setup
```py
surface = cairo.ImageSurface(FORMAT, width, height) # surface setup
context = cairo.Context(surface) # drawing context setup
```
Formats:
* `FORMAT_ARGB32`:
each pixel is a 32-bit quantity, with alpha in the upper 8 bits, then red, then green, then blue.
The 32-bit quantities are stored native-endian. Pre-multiplied alpha is used.
(That is, 50% transparent red is 0x80800000, not 0x80ff0000.)
* `FORMAT_RGB24`:
each pixel is a 32-bit quantity, with the upper 8 bits unused.
Red, Green, and Blue are stored in the remaining 24 bits in that order.
* `FORMAT_A8`:
each pixel is a 8-bit quantity holding an alpha value.
* `FORMAT_A1`:
each pixel is a 1-bit quantity holding an alpha value. Pixels are packed together into 32-bit quantities.
The ordering of the bits matches the endianness of the platform.
On a big-endian machine, the first pixel is in the uppermost bit, on a little-endian machine the first pixel is in the least-significant bit.
* `FORMAT_RGB16_565`:
each pixel is a 16-bit quantity with red in the upper 5 bits, then green in the middle 6 bits, and blue in the lower 5 bits.
### Source Setup
```py
# Sets the source pattern within Context to an opaque color.
# This opaque color will then be used for any subsequent drawing operation until a new source pattern is set.
context.set_source_rgb(red, green, blue)
# The color components are floating point numbers in the range 0 to 1.
# The default source pattern is opaque black -- set_source_rgb(0.0, 0.0, 0.0).
```
## Drawing
### Lines and Arcs
`context.move_to(x, y)` begins a new sub-path. After this call the current point will be `(x, y)`.
`context.line_to(x, y)` adds a line to the path from the current position to `(x, y)`
### Path
`context.new_path()` clears current PATH. After this call there will be no path and no current point.
`context.new_sub_path()` begins a new sub-path. Note that the existing path is not affected. After this call there will be no current point.
In many cases, this call is not needed since new sub-paths are frequently started with `Context.move_to()`.
A call to `new_sub_path()` is particularly useful when beginning a new sub-path with one of the `Context.arc()` calls.
This makes things easier as it is no longer necessary to manually compute the arc's initial coordinates for a call to `Context.move_to()`.
### Stroke
A drawing operator that strokes the current path according to the current line width, line join, line cap, and dash settings.
After `stroke()`, the current path will be cleared from the cairo context.
### Fill
A drawing operator that fills the current path according to the current *fill rule*.
(each sub-path is implicitly closed before being filled).
After `fill()`, the current path will be cleared from the Context.
`context.set_fill_rule(fill_rule)` set a FILL RULE to the cairo context.
For both fill rules, whether or not a point is included in the fill is determined by taking a ray from that point to infinity and looking at intersections with the path.
The ray can be in any direction, as long as it doesn't pass through the end point of a segment or have a tricky intersection such as intersecting tangent to the path.
(Note that filling is not actually implemented in this way. This is just a description of the rule that is applied.)
* `cairo.FILL_RULE_WINDING` (default):
If the path crosses the ray from left-to-right, counts +1. If the path crosses the ray from right to left, counts -1.
(Left and right are determined from the perspective of looking along the ray from the starting point.)
If the total count is non-zero, the point will be filled.
* `cairo.FILL_RULE_EVEN_ODD`:
Counts the total number of intersections, without regard to the orientation of the contour.
If the total number of intersections is odd, the point will be filled.
## Writing
```py
surface = cairo.ImageSurface(FORMAT, width, height) # surface setup
context = cairo.Context(surface) # drawing context setup
# Selects a family and style of font for the Context (the "toy" text API).
context.select_font_face(family, slant, weight)
context.set_font_size(size) # float -- the new font size, in user space units. DEFAULT 10.0
context.show_text(string)
```
Font Slants:
* `FONT_SLANT_NORMAL` (default)
* `FONT_SLANT_ITALIC`
* `FONT_SLANT_OBLIQUE`
Font Weights:
* `FONT_WEIGHT_NORMAL` (default)
* `FONT_WEIGHT_BOLD`
## Creating the image
```py
surface.show_page() # Emits and clears the current page for backends that support multiple pages. Use copy_page() if you don't want to clear the page.
surface.copy_page() # Emits the current page for backends that support multiple pages, but doesn't clear it, so that the contents of the current page will be retained for the next page. Use show_page() if you want to get an empty page after the emission.
surface.write_to_png("filename") # Writes the contents of Surface to filename as a PNG image
```
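Putting the pieces together, a minimal sketch that fills and strokes one shape and writes the result to a PNG (sizes, colors, and the output file name are illustrative):
```py
import cairo

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 200)
context = cairo.Context(surface)

context.set_source_rgb(1.0, 1.0, 1.0)  # white source
context.paint()                        # paint the whole surface with the source

context.rectangle(20, 20, 120, 80)     # path: x, y, width, height
context.set_source_rgb(0.2, 0.4, 0.8)  # blue fill
context.fill_preserve()                # fill but keep the path for stroking
context.set_source_rgb(0.0, 0.0, 0.0)  # black outline
context.set_line_width(3)
context.stroke()                       # stroke the preserved path, then clear it

surface.write_to_png("rectangle.png")  # hypothetical output file name
```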

# NumPy Lib
## MOST IMPORTANT ATTRIBUTES
```py
array.ndim # number of axes (dimensions) of the array
array.shape # dimensions of the array, tuple of integers
array.size # total number of elements in the array
array.itemsize # size in bytes of each element
array.data # buffer containing the array elements
```
## ARRAY CREATION
Unless explicitly specified `np.array` tries to infer a good data type for the array that it creates.
The data type is stored in a special dtype object.
```py
var = np.array(sequence) # create array
var = np.asarray(sequence) # convert input to array
var = np.ndarray(*sequence) # creates multidimensional array
var = np.asanyarray(*sequence) # convert the input to an ndarray
# nested sequences will be converted to multidimensional array
var = np.zeros(ndarray.shape) # array with all zeros
var = np.ones(ndarray.shape) # array with all ones
var = np.empty(ndarray.shape) # array with random values
var = np.identity(n) # identity array (n x n)
var = np.arange(start, stop, step) # creates an array with parameters specified
var = np.linspace(start, stop, num_of_elements) # step of elements calculated based on parameters
```
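A short sketch of what a few of these constructors return (expected results in comments):
```py
import numpy as np

a = np.arange(0, 6, 1)          # array([0, 1, 2, 3, 4, 5])
b = np.linspace(0, 1, 5)        # array([0.  , 0.25, 0.5 , 0.75, 1.  ])
c = np.zeros((2, 3))            # 2x3 array of zeros
d = np.array([[1, 2], [3, 4]])  # nested sequence -> 2x2 array

print(a.shape, b.size, c.ndim)  # (6,) 5 2
```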
## DATA TYPES FOR NDARRAYS
```py
var = array.astype(np.dtype) # copy of the array, cast to a specified type
# return TypeError if casting fails
```
The numerical `dtypes` are named the same way: a type name followed by a number indicating the number of bits per element.
| TYPE | TYPE CODE | DESCRIPTION |
|-----------------------------------|--------------|--------------------------------------------------------------------------------------------|
| int8, uint8 | i1, u1 | Signed and unsigned 8-bit (1 byte) integer types |
| int16, uint16 | i2, u2 | Signed and unsigned 16-bit integer types |
| int32, uint32 | i4, u4 | Signed and unsigned 32-bit integer types |
| int64, uint64 | i8, u8 | Signed and unsigned 64-bit integer types |
| float16 | f2 | Half-precision floating point |
| float32 | f4 or f | Standard single-precision floating point. Compatible with C float |
| float64 | f8 or d | Standard double-precision floating point. Compatible with C double and Python float object |
| float128 | f16 or g | Extended-precision floating point |
| complex64, complex128, complex256 | c8, c16, c32 | Complex numbers represented by two 32-, 64-, or 128-bit floats, respectively |
| bool | ? | Boolean type storing True and False values |
| object | O | Python object type |
| string_ | `S<num>` | Fixed-length string type (1 byte per character), `<num>` is string length |
| unicode_ | `U<num>` | Fixed-length unicode type, `<num>` is length |
## OPERATIONS BETWEEN ARRAYS AND SCALARS
Any arithmetic operations between equal-size arrays applies the operation elementwise.
array `+` scalar --> element-wise addition (`[1, 2, 3] + 2 = [3, 4, 5]`)
array `-` scalar --> element-wise subtraction (`[1 , 2, 3] - 2 = [-2, 0, 1]`)
array `*` scalar --> element-wise multiplication (`[1, 2, 3] * 3 = [3, 6, 9]`)
array `/` scalar --> element-wise division (`[1, 2, 3] / 2 = [0.5 , 1 , 1.5]`)
array_1 `+` array_2 --> element-wise addition (`[1, 2, 3] + [1, 2, 3] = [2, 4, 6]`)
array_1 `-` array_2 --> element-wise subtraction (`[1, 2, 4] - [3 , 2, 1] = [-2, 0, 2]`)
array_1 `*` array_2 --> element-wise multiplication (`[1, 2, 3] * [3, 2, 1] = [3, 4, 3]`)
array_1 `/` array_2 --> element-wise division (`[1, 2, 3] / [3, 2, 1] = [0.33, 1, 3]`)
## SHAPE MANIPULATION
```py
np.reshape(array, newshape) # changes the shape of the array
np.ravel(array) # returns the array flattened
array.resize(shape) # modifies the array itself
array.T # returns the array transposed
np.transpose(array) # returns the array transposed
np.swapaxes(array, first_axis, second_axis) # interchange two axes of an array
# if array is an ndarray, then a view of it is returned; otherwise a new array is created
```
## JOINING ARRAYS
```py
np.vstack((array1, array2)) # takes tuple, vertical stack of arrays (column wise)
np.hstack((array1, array2)) # takes a tuple, horizontal stack of arrays (row wise)
np.dstack((array1, array2)) # takes a tuple, depth wise stack of arrays (3rd dimesion)
np.stack(*arrays, axis) # joins a sequence of arrays along a new axis (axis is an int)
np.concatenate((array1, array2, ...), axis) # joins a sequence of arrays along an existing axis (axis is an int)
```
## SPLITTING ARRAYS
```py
np.split(array, indices) # splits an array into equally long sub-arrays (indices is int), if not possible raises error
np.vsplit(array, indices) # splits an array equally into sub-arrays vertically (row wise) if not possible raises error
np.hsplit(array, indices) # splits an array equally into sub-arrays horizontally (column wise) if not possible raises error
np.dsplit(array, indices) # splits an array into equally sub-arrays along the 3rd axis (depth) if not possible raises error
np.array_split(array, indices) # splits an array into sub-arrays, arrays can be of different lengths
```
## VIEW()
```py
var = array.view() # creates a new array that looks at the same data
# slicing returns a view
# view shapes are separate but assignment changes the data of all arrays
```
## COPY()
```py
var = array.copy() # creates a deepcopy of the array
```
## INDEXING, SLICING, ITERATING
1-dimensional --> sliced, iterated and indexed as standard
n-dimensinal --> one index per axis, index given in tuple separated by commas `[i, j] (i, j)`
dots (`...`) represent as meny colons as needed to produce complete indexing tuple
- `x[1, 2, ...] == [1, 2, :, :, :]`
- `x[..., 3] == [:, :, :, :, 3]`
- `x[4, ..., 5, :] == [4, :, :, 5, :]`
iteration on first index, use .flat() to iterate over each element
- `x[*bool]` returns row with corresponding True index
- `x[condition]` return only elements that satisfy condition
- x`[[*index]]` return rows ordered by indexes
- `x[[*i], [*j]]` return elements selected by tuple (i, j)
- `x[ np.ix_( [*i], [*j] ) ]` return rectangular region
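A minimal sketch of boolean and fancy indexing on a small array (expected results in comments):
```py
import numpy as np

x = np.arange(12).reshape(3, 4)  # rows: [0 1 2 3], [4 5 6 7], [8 9 10 11]

x[1, 2]                    # 6 -- one index per axis
x[x > 8]                   # array([ 9, 10, 11]) -- boolean condition
x[[2, 0]]                  # rows 2 and 0, in that order
x[[0, 2], [1, 3]]          # array([ 1, 11]) -- elements (0, 1) and (2, 3)
x[np.ix_([0, 2], [1, 3])]  # 2x2 region: rows 0 and 2, columns 1 and 3
```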
## UNIVERSAL FUNCTIONS (ufunc)
Functions that perform element-wise operations (vectorization).
```py
np.abs(array) # vectorized abs(), return element absolute value
np.fabs(array) # faster abs() for non-complex values
np.sqrt(array) # vectorized square root (x^0.5)
np.square(array) # vectorized square (x^2)
np.exp(array) # vectorized natural exponentiation (e^x)
np.log(array) # vectorized natural log(x)
np.log10(array) # vectorized log10(x)
np.log2(array) # vectorized log2(x)
np.log1p(array) # vectorized log(1 + x)
np.sign(array) # vectorized sign (1, 0, -1)
np.ceil(array) # vectorized ceil()
np.floor(array) # vectorized floor()
np.rint(array) # vectorized round() to nearest int
np.modf(array) # vectorized divmod(), returns the fractional and integral parts of element
np.isnan(array) # vectorized x == NaN, returns boolean array
np.isinf(array) # vectorized test for positive or negative infinity, return boolean array
np.isfinite(array) # vectorized test for finiteness, returns boolean array
np.cos(array) # vectorized cos(x)
np.sin(array) # vectorized sin(x)
np.tan(array) # vectorized tan(x)
np.cosh(array) # vectorized cosh(x)
np.sinh(array) # vectorized sinh(x)
np.tanh(array) # vectorized tanh(x)
np.arccos(array) # vectorized arccos(x)
np.arcsin(array) # vectorized arcsin(x)
np.arctan(array) # vectorized arctan(x)
np.arccosh(array) # vectorized arccosh(x)
np.arcsinh(array) # vectorized arcsinh(x)
np.arctanh(array) # vectorized arctanh(x)
np.logical_not(array) # vectorized not(x), equivalent to ~array for boolean arrays
np.add(x_array, y_array) # vectorized addition
np.subtract(x_array, y_array) # vectorized subtraction
np.multiply(x_array, y_array) # vectorized multiplication
np.divide(x_array, y_array) # vectorized division
np.floor_divide(x_array, y_array) # vectorized floor division
np.power(x_array, y_array) # vectorized power
np.maximum(x_array, y_array) # vectorized maximum
np.minimum(x_array, y_array) # vectorized minimum
np.fmax(x_array, y_array) # vectorized maximum, ignores NaN
np.fmin(x_array, y_array) # vectorized minimum, ignores NaN
np.mod(x_array, y_array) # vectorized modulus
np.copysign(x_array, y_array) # vectorized copy sign from y_array to x_array
np.greater(x_array, y_array) # vectorized x > y
np.less(x_array, y_array) # vectorized x < y
np.greater_equal(x_array, y_array) # vectorized x >= y
np.less_equal(x_array, y_array) # vectorized x <= y
np.equal(x_array, y_array) # vectorized x == y
np.not_equal(x_array, y_array) # vectorized x != y
np.logical_and(x_array, y_array) # vectorized x & y
np.logical_or(x_array, y_array) # vectorized x | y
np.logical_xor(x_array, y_array) # vectorized x ^ y
```
## CONDITIONAL LOGIC AS ARRAY OPERATIONS
```py
np.where(condition, x, y) # return x if condition == True, y otherwise
```
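For example, `np.where` acts as a vectorized ternary expression (expected result in the comment):
```py
import numpy as np

x = np.array([1, -2, 3, -4])
y = np.where(x > 0, x, 0)  # keep positives, zero out negatives -> array([1, 0, 3, 0])
```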
## MATHEMATICAL AND STATISTICAL METHODS
`np.method(array, args)` or `array.method(args)`.
Boolean values are coerced to 1 (`True`) and 0 (`False`).
```py
np.sum(array, axis=None) # sum of array elements over a given axis
np.median(array, axis=None) # median along the specified axis
np.mean(array, axis=None) # arithmetic mean along the specified axis
np.average(array, axis=None) # weighted average along the specified axis
np.std(array, axis=None) # standard deviation along the specified axis
np.var(array, axis=None) # variance along the specified axis
np.min(array, axis=None) # minimum value along the specified axis
np.max(array, axis=None) # maximum value along the specified axis
np.argmin(array, axis=None) # indices of the minimum values along an axis
np.argmax(array, axis=None) # indices of the maximum values
np.cumsum(array, axis=None) # cumulative sum of the elements along a given axis
np.cumprod(array, axis=None) # cumulative product of the elements along a given axis
```
## METHODS FOR BOOLEAN ARRAYS
```py
np.all(array, axis=None) # test whether all array elements along a given axis evaluate to True
np.any(array, axis=None) # test whether any array element along a given axis evaluates to True
```
## SORTING
```py
array.sort(axis=-1) # sort an array in-place (axis = None applies on flattened array)
np.sort(array, axis=-1) # return a sorted copy of an array (axis = None applies on flattened array)
```
## SET LOGIC
```py
np.unique(array) # sorted unique elements of an array
np.intersect1d(x, y) # sorted common elements in x and y
np.union1d(x, y) # sorted union of elements
np.in1d(x, y) # boolean array indicating whether each element of x is contained in y
np.setdiff1d(x, y) # Set difference, elements in x that are not in y
np.setxor1d() # Set symmetric differences; elements that are in either of the arrays, but not both
```
## FILE I/O WITH ARRAYS
```py
np.save(file, array) # save array to binary file in .npy format
np.savez(file, *array) # save several arrays into a single file in uncompressed .npz format
np.savez_compressed(file, *args, *kwargs) # save several arrays into a single file in compressed .npz format
# *ARGS: arrays to save to the file. arrays will be saved with names “arr_0”, “arr_1”, and so on
# **KWARGS: arrays to save to the file. arrays will be saved in the file with the keyword names
np.savetxt(file, X, fmt="%.18e", delimiter=" ") # save array to text file
# X: 1D or 2D
# FMT: Python Format Specification Mini-Language
# DELIMITER: {str} -- string used to separate values
np.load(file, allow_pickle=False) # load arrays or pickled objects from .npy, .npz or pickled files
np.loadtxt(file, dtype=float, comments="#", delimiter=None)
# DTYPE: {data type} -- data-type of the resulting array
# COMMENTS: {str} -- characters used to indicate the start of a comment. None implies no comments
# DELIMITER: {str} -- string used to separate values
```
## LINEAR ALGEBRA
```py
np.diag(array, k=0) # extract a diagonal or construct a diagonal array
# K: {int} -- k>0 diagonals above main diagonal, k<0 diagonals below main diagonal (main diagonal k = 0)
np.dot(x ,y) # matrix dot product
np.trace(array, offset=0, dtype=None, out=None) # return the sum along diagonals of the array
# OFFSET: {int} -- offset of the diagonal from the main diagonal
# dtype: {dtype} -- determines the data-type of the returned array
# OUT: {ndarray} -- array into which the output is placed
np.linalg.det(A) # compute the determinant of an array
np.linalg.eig(A) # compute the eigenvalues and right eigenvectors of a square array
np.linalg.inv(A) # compute the (multiplicative) inverse of a matrix
# Ainv satisfies dot(A, Ainv) = dot(Ainv, A) = eye(A.shape[0])
np.linalg.pinv(A) # compute the (Moore-Penrose) pseudo-inverse of a matrix
np.linalg.qr() # factor the matrix a as qr, where q is orthonormal and r is upper-triangular
np.linalg.svd(A) # Singular Value Decomposition
np.linalg.solve(A, B) # solve a linear matrix equation, or system of linear scalar equations AX = B
np.linalg.lstsq(A, B) # return the least-squares solution to a linear matrix equation AX = B
```
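A tiny sketch solving a 2x2 linear system with `np.linalg.solve` (the expected solution is in the comment):
```py
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)     # array([2., 3.]): 3*2 + 1*3 = 9 and 1*2 + 2*3 = 8
assert np.allclose(A @ x, b)  # verify the solution
```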
## RANDOM NUMBER GENERATION
```py
np.random.seed(seed) # seed the global (legacy) random number generator
np.random.rand(*dims) # random samples from a uniform distribution over [0, 1)
np.random.randn(*dims) # random samples from the standard normal distribution
np.random.randint(low, high, size) # random integers from low (inclusive) to high (exclusive)
np.random.Generator.permutation(x) # randomly permute a sequence, or return a permuted range
np.random.Generator.shuffle(x) # Modify a sequence in-place by shuffling its contents
np.random.Generator.beta(a, b, size=None) # draw samples from a Beta distribution
# A: {float, array floats} -- Alpha, > 0
# B: {int, tuple ints} -- Beta, > 0
np.random.Generator.binomial(n, p, size=None) # draw samples from a binomial distribution
# N: {int, array ints} -- parameter of the distribution, >= 0
# P: {float, array floats} -- Parameter of the distribution, >= 0 and <= 1
np.random.Generator.chisquare(df, size=None)
# DF: {float, array floats} -- degrees of freedom, > 0
np.random.Generator.gamma(shape, scale=1.0, size=None) # draw samples from a Gamma distribution
# SHAPE: {float, array floats} -- shape of the gamma distribution, != 0
np.random.Generator.normal(loc=0.0, scale=1.0, size=None) # draw random samples from a normal (Gaussian) distribution
# LOC: {float, array floats} -- mean ("centre") of distribution
# SCALE: {float, array floats} -- standard deviation of distribution, != 0
np.random.Generator.poisson(lam=1.0, size=None) # draw samples from a Poisson distribution
# LAM: {float, array floats} -- expectation of interval, >= 0
np.random.Generator.uniform(low=0.0, high=1.0, size=None) # draw samples from a uniform distribution
# LOW: {float, array floats} -- lower boundary of the output interval
# HIGH: {float, array floats} -- upper boundary of the output interval
np.random.Generator.zipf(a, size=None) # draw samples from a Zipf distribution
# A: {float, array floats} -- distribution parameter, > 1
```

# Pandas Lib
## Basic Pandas Imports
```py
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
```
## SERIES
1-dimensional labelled array, axis label referred as INDEX.
Index can contain repetitions.
```py
s = Series(data, index=index, name='name')
# DATA: {python dict, ndarray, scalar value}
# NAME: {string}
s = Series(dict) # Series created from python dict, dict keys become index values
```
### INDEXING / SELECTION / SLICING
```py
s['index'] # selection by index label
s[condition] # return slice selected by condition
s[ : ] # slice, endpoint included
s[ : ] = *value # modify value of entire slice
s[condition] = *value # modify slice by condition
```
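A small sketch of Series construction and label-based selection (index labels and values are illustrative):
```py
import pandas as pd
from pandas import Series

s = Series([4, 7, -5, 3], index=['d', 'b', 'a', 'c'], name='example')

s['b']        # 7 -- selection by index label
s[s > 0]      # rows d, b, c -- selection by condition
s['d':'a']    # rows d, b, a -- label slice, endpoint included
s[s < 0] = 0  # modify the negative values in place
```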
## MISSING DATA
Missing data appears as NaN (Not a Number).
```py
pd.isnull(array) # return a boolean Series indicating which indexes don't have data
pd.notnull(array) # return a boolean Series indicating which indexes have data
array.isnull()
array.notnull()
```
### SERIES ATTRIBUTES
```py
s.values # NumPy representation of Series
s.index # index object of Series
s.name = "Series name" # renames Series object
s.index.name = "index name" # renames index
```
### SERIES METHODS
```py
pd.Series.isin(self, values) # boolean Series showing whether elements in Series match elements in values exactly
# Conform Series to new index, new object produced unless the new index is equivalent to current one and copy=False
pd.Series.reindex(self, index=None, **kwargs)
# INDEX: {array} -- new labels / index
# METHOD: {none (don't fill gaps), pad (fill or carry values forward), backfill (fill or carry values backward)} -- hole filling method
# COPY: {bool} -- return new object even if index is same -- DEFAULT True
# FILLVALUE: {scalar} -- value to use for missing values. DEFAULT NaN
pd.Series.drop(self, index=None, **kwargs) # return Series with specified index labels removed
# INPLACE: {bool} -- if true do operation in place and return None -- DEFAULT False
# ERRORS: {ignore, raise} -- If ignore, suppress error and existing labels are dropped
# KeyError raised if not all of the labels are found in the selected axis
pd.Series.value_counts(self, normalize=False, sort=True, ascending=False, bins=None, dropna=True)
# NORMALIZE: {bool} -- if True then object returned will contain relative frequencies of unique values
# SORT: {bool} -- sort by frequency -- DEFAULT True
# ASCENDING: {bool} -- sort in ascending order -- DEFAULT False
# BINS: {int} -- group values into half-open bins, only works with numeric data
# DROPNA: {bool} -- don't include counts of NaN
```
## DATAFRAME
2-dimensional labeled data structure with columns of potentially different types.
Index and columns can contain repetitions.
```py
df = DataFrame(data, index=row_labels, columns=column_labels)
# DATA: {list, dict (of lists), nested dicts, series, dict of 1D ndarray, 2D ndarray, DataFrame}
# INDEX: {list of row_labels}
# COLUMNS: {list of column_labels}
# for nested dicts: outer dict keys interpreted as column labels, inner dict keys interpreted as index (row) labels
# INDEXING / SELECTION / SLICING
df[col] # column selection
df.at[row, col] # access a single value for a row/column label pair
df.iat[row, col] # access a single value for a row/column pair by integer position
df.column_label # column selection
df.loc[label] # row selection by label
df.iloc[loc] # row selection by integer location
df[ : ] # slice rows
df[bool_vec] # slice rows by boolean vector
df[condition] # slice rows by condition
df.loc[:, ["column_1", "column_2"]] # slice columns by names
df.loc[:, bool_vector] # slice columns by boolean vector
df[col] = *value # modify column contents, if the column is missing it will be created
df[ : ] = *value # modify rows contents
df[condition] = *value # modify contents
del df[col] # delete column
```
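A brief sketch of DataFrame construction and the selection styles above (column names and values are illustrative):
```py
import pandas as pd
from pandas import DataFrame

df = DataFrame({'city': ['Rome', 'Milan', 'Turin'],
                'population': [2_800_000, 1_400_000, 870_000]},
               index=['a', 'b', 'c'])

df['population']                  # column selection
df.loc['b']                       # row selection by label
df.iloc[0]                        # row selection by integer position
df[df['population'] > 1_000_000]  # rows matching a condition
df['country'] = 'Italy'           # new column created on assignment
del df['country']                 # delete a column
```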
### DATAFRAME ATTRIBUTES
```py
df.index # row labels
df.columns # column labels
df.values # NumPy representation of DataFrame
df.index.name = "index name"
df.columns.name = "columns name"
df.T # transpose
```
### DATAFRAME METHODS
```py
pd.DataFrame.isin(self, values) # boolean DataFrame showing whether elements in DataFrame match elements in values exactly
# Conform DataFrame to new index, new object produced unless the new index is equivalent to current one and copy=False
pd.DataFrame.reindex(self, index=None, columns=None, **kwargs)
# INDEX: {array} -- new labels / index
# COLUMNS: {array} -- new labels / columns
# METHOD: {none (don't fill gaps), pad (fill or carry values forward), backfill (fill or carry values backward)} -- hole filling method
# COPY: {bool} -- return new object even if index is same -- DEFAULT True
# FILLVALUE: {scalar} -- value to use for missing values. DEFAULT NaN
pd.DataFrame.drop(self, index=None, columns=None, **kwargs) # Remove rows or columns by specifying label names
# INPLACE: {bool} -- if true do operation in place and return None -- DEFAULT False
# ERRORS: {ignore, raise} -- If ignore, suppress error and existing labels are dropped
# KeyError raised if not all of the labels are found in the selected axis
```
## INDEX OBJECTS
Holds axis labels and metadata, immutable.
### INDEX TYPES
```py
pd.Index # immutable ordered ndarray, sliceable. stores axis labels
pd.Int64Index # special case of Index with purely integer labels
pd.MultiIndex # multi-level (hierarchical) index object for pandas objects
pd.PeriodIndex # immutable ndarray holding ordinal values indicating regular periods in time
pd.DatetimeIndex # nanosecond timestamps (uses Numpy datetime64)
```
### INDEX ATTRIBUTES
```py
pd.Index.is_monotonic_increasing # Return True if the index is monotonic increasing (only equal or increasing) values
pd.Index.is_monotonic_decreasing # Return True if the index is monotonic decreasing (only equal or decreasing) values
pd.Index.is_unique # Return True if the index has unique values.
pd.Index.hasnans # Return True if the index has NaNs
```
### INDEX METHODS
```py
pd.Index.append(self, other) # append a collection of Index options together
pd.Index.difference(self, other, sort=None) # set difference of two Index objects
# SORT: {None (attempt sorting), False (don't sort)}
pd.Index.intersection(self, other, sort=None) # set intersection of two Index objects
# SORT: {None (attempt sorting), False (don't sort)}
pd.Index.union(self, other, sort=None) # set union of two Index objects
# SORT: {None (attempt sorting), False (don't sort)}
pd.Index.isin(self, values, level=None) # boolean array indicating where the index values are in values
pd.Index.insert(self, loc, item) # make new Index inserting new item at location
pd.Index.delete(self, loc) # make new Index with passed location(-s) deleted
pd.Index.drop(self, labels, errors='raise') # Make new Index with passed list of labels deleted
# ERRORS: {ignore, raise} -- If ignore, suppress error and existing labels are dropped
# KeyError raised if not all of the labels are found in the selected axis
pd.Index.reindex(self, target, **kwargs) # create index with targets values (move/add/delete values as necessary)
# METHOD: {none (don't fill gaps), pad (fill or carry values forward), backfill (fill or carry values backward)} -- hole filling method
```
## ARITHMETIC OPERATIONS
NumPy arrays operations preserve labels-value link.
Arithmetic operations automatically align differently indexed data.
Missing values propagate in arithmetic computations (NaN `<operator>` value = NaN)
### ADDITION
```py
self + other
pd.Series.add(self, other, fill_value=None) # add(), supports substitution of NaNs
pd.Series.radd(self, other, fill_value=None) # radd(), supports substitution of NaNs
pd.DataFrame.add(self, other, axis=columns, fill_value=None) # add(), supports substitution of NaNs
pd.DataFrame.radd(self, other, axis=columns, fill_value=None) # radd(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### SUBTRACTION
```py
self - other
pd.Series.sub(self, other, fill_value=None) # sub(), supports substitution of NaNs
pd.Series.rsub(self, other, fill_value=None) # rsub(), supports substitution of NaNs
pd.DataFrame.sub(self, other, axis=columns, fill_value=None) # sub(), supports substitution of NaNs
pd.DataFrame.rsub(self, other, axis=columns, fill_value=None) # rsub(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### MULTIPLICATION
```py
self * other
pd.Series.mul(self, other, fill_value=None) # mul(), supports substitution of NaNs
pd.Series.rmul(self, other, fill_value=None) # rmul(), supports substitution of NaNs
pd.DataFrame.mul(self, other, axis=columns, fill_value=None) # mul(), supports substitution of NaNs
pd.DataFrame.rmul(self, other, axis=columns, fill_value=None) # rmul(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### DIVISION (float division)
```py
self / other
pd.Series.div(self, other, fill_value=None) # div(), supports substitution of NaNs
pd.Series.rdiv(self, other, fill_value=None) # rdiv(), supports substitution of NaNs
pd.Series.truediv(self, other, fill_value=None) # truediv(), supports substitution of NaNs
pd.Series.rtruediv(self, other, fill_value=None) # rtruediv(), supports substitution of NaNs
pd.DataFrame.div(self, other, axis=columns, fill_value=None) # div(), supports substitution of NaNs
pd.DataFrame.rdiv(self, other, axis=columns, fill_value=None) # rdiv(), supports substitution of NaNs
pd.DataFrame.truediv(self, other, axis=columns, fill_value=None) # truediv(), supports substitution of NaNs
pd.DataFrame.rtruediv(self, other, axis=columns, fill_value=None) # rtruediv(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### FLOOR DIVISION
```py
self // other
pd.Series.floordiv(self, other, fill_value=None) # floordiv(), supports substitution of NaNs
pd.Series.rfloordiv(self, other, fill_value=None) # rfloordiv(), supports substitution of NaNs
pd.DataFrame.floordiv(self, other, axis=columns, fill_value=None) # floordiv(), supports substitution of NaNs
pd.DataFrame.rfloordiv(self, other, axis=columns, fill_value=None) # rfloordiv(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### MODULO
```py
self % other
pd.Series.mod(self, other, fill_value=None) # mod(), supports substitution of NaNs
pd.Series.rmod(self, other, fill_value=None) # rmod(), supports substitution of NaNs
pd.DataFrame.mod(self, other, axis=columns, fill_value=None) # mod(), supports substitution of NaNs
pd.DataFrame.rmod(self, other, axis=columns, fill_value=None) # rmod(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### POWER
```py
self ** other
pd.Series.pow(self, other, fill_value=None) # pow(), supports substitution of NaNs
pd.Series.rpow(self, other, fill_value=None) # rpow(), supports substitution of NaNs
pd.DataFrame.pow(self, other, axis='columns', fill_value=None) # pow(), supports substitution of NaNs
pd.DataFrame.rpow(self, other, axis='columns', fill_value=None) # rpow(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
## ESSENTIAL FUNCTIONALITY
### FUNCTION APPLICATION AND MAPPING
NumPy ufuncs work fine with pandas objects.
```py
pd.DataFrame.applymap(self, func) # apply function element-wise
pd.DataFrame.apply(self, func, axis=0, args=()) # apply a function along an axis of a DataFrame
# FUNC: {function} -- function to apply
# AXIS: {0, 1, index, columns} -- axis along which the function is applied
# ARGS: {tuple} -- positional arguments to pass to func in addition to the array/series
# SORTING AND RANKING
pd.Series.sort_index(self, ascending=True, **kwargs) # sort Series by index labels
pd.Series.sort_values(self, ascending=True, **kwargs) # sort Series by the values
# ASCENDING: {bool} -- if True, sort values in ascending order, otherwise descending -- DEFAULT True
# INPLACE: {bool} -- if True, perform operation in-place
# KIND: {quicksort, mergesort, heapsort} -- sorting algorithm
# NA_POSITION {first, last} -- first puts NaNs at the beginning, last puts NaNs at the end
pd.DataFrame.sort_index(self, axis=0, ascending=True, **kwargs) # sort object by labels along an axis
pd.DataFrame.sort_values(self, by, axis=0, ascending=True, **kwargs) # sort object by values along an axis
# BY: {str, list of str} -- name or list of names to sort by
# AXIS: {0, 1, index, columns} -- the axis along which to sort
# ASCENDING: {bool} -- if True, sort values in ascending order, otherwise descending -- DEFAULT True
# INPLACE: {bool} -- if True, perform operation in-place
# KIND: {quicksort, mergesort, heapsort} -- sorting algorithm
# NA_POSITION {first, last} -- first puts NaNs at the beginning, last puts NaNs at the end
```
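A short sketch of `apply`, `applymap` and `sort_values` on a small, invented DataFrame:
```py
import pandas as pd

df = pd.DataFrame({'x': [3, 1, 2], 'y': [30, 10, 20]}, index=['r1', 'r2', 'r3'])

df.apply(lambda col: col.max() - col.min())  # column-wise: x=2, y=20
df.apply(lambda row: row.sum(), axis=1)      # row-wise: r1=33, r2=11, r3=22
df.applymap(lambda v: v * 10)                # element-wise transformation
df.sort_values(by='x', ascending=False)      # rows reordered to r1, r3, r2
df.sort_index(ascending=False)               # rows reordered to r3, r2, r1
```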
## DESCRIPTIVE AND SUMMARY STATISTICS
### COUNT
```py
pd.Series.count(self) # return number of non-NA/null observations in the Series
pd.DataFrame.count(self, numeric_only=False) # count non-NA cells for each column or row
# NUMERIC_ONLY: {bool} -- Include only float, int or boolean data -- DEFAULT False
```
### DESCRIBE
Generate descriptive statistics summarizing central tendency, dispersion and shape of a dataset's distribution (excludes NaN).
```py
pd.Series.describe(self, percentiles=None, include=None, exclude=None)
pd.DataFrame.describe(self, percentiles=None, include=None, exclude=None)
# PERCENTILES: {list-like of numbers} -- percentiles to include in output, between 0 and 1 -- DEFAULT [.25, .5, .75]
# INCLUDE: {all, None, list of dtypes} -- white list of dtypes to include in the result. ignored for Series
# EXCLUDE: {None, list of dtypes} -- black list of dtypes to omit from the result. ignored for Series
```
### MAX - MIN
```py
pd.Series.max(self, skipna=None, numeric_only=None) # maximum of the values for the requested axis
pd.Series.min(self, skipna=None, numeric_only=None) # minimum of the values for the requested axis
pd.DataFrame.max(self, axis=None, skipna=None, numeric_only=None) # maximum of the values for the requested axis
pd.DataFrame.min(self, axis=None, skipna=None, numeric_only=None) # minimum of the values for the requested axis
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### IDXMAX - IDXMIN
```py
pd.Series.idxmax(self, skipna=True) # row label of the maximum value
pd.Series.idxmin(self, skipna=True) # row label of the minimum value
pd.DataFrame.idxmax(self, axis=0, skipna=True) # Return index of first occurrence of maximum over requested axis
pd.DataFrame.idxmin(self, axis=0, skipna=True) # Return index of first occurrence of minimum over requested axis
# AXIS:{0, 1, index, columns} -- row-wise or column-wise
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, result will be NA
```
### QUANTILE
```py
pd.Series.quantile(self, q=0.5, interpolation='linear') # return values at the given quantile
pd.DataFrame.quantile(self, q=0.5, axis=0, numeric_only=True, interpolation='linear') # return values at the given quantile over requested axis
# Q: {float, array} -- value between 0 <= q <= 1, the quantile(s) to compute -- DEFAULT 0.5 (50%)
# NUMERIC_ONLY: {bool} -- if False, quantile of datetime and timedelta data will be computed as well
# INTERPOLATION: {linear, lower, higher, midpoint, nearest} -- SEE DOCS
```
### SUM
```py
pd.Series.sum(self, skipna=None, numeric_only=None, min_count=0) # sum of the values
pd.DataFrame.sum(self, axis=None, skipna=None, numeric_only=None, min_count=0) # sum of the values for the requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
# MIN_COUNT: {int} -- required number of valid values to perform the operation. if fewer than min_count non-NA values are present the result will be NA
```
### MEAN
```py
pd.Series.mean(self, skipna=None, numeric_only=None) # mean of the values
pd.DataFrame.mean(self, axis=None, skipna=None, numeric_only=None) # mean of the values for the requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### MEDIAN
```py
pd.Series.median(self, skipna=None, numeric_only=None) # median of the values
pd.DataFrame.median(self, axis=None, skipna=None, numeric_only=None) # median of the values for the requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### MAD (mean absolute deviation)
```py
pd.Series.mad(self, skipna=None) # mean absolute deviation
pd.DataFrame.mad(self, axis=None, skipna=None) # mean absolute deviation of the values for the requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
```
### VAR (variance)
```py
pd.Series.var(self, skipna=None, numeric_only=None) # unbiased variance
pd.DataFrame.var(self, axis=None, skipna=None, ddof=1, numeric_only=None) # unbiased variance over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
# DDOF: {int} -- Delta Degrees of Freedom. divisor used in calculations is N - ddof (N represents the number of elements) -- DEFAULT 1
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### STD (standard deviation)
```py
pd.Series.std(self, skipna=None, ddof=1, numeric_only=None) # sample standard deviation
pd.DataFrame.std(self, axis=None, skipna=None, ddof=1, numeric_only=None) # sample standard deviation over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
# DDOF: {int} -- Delta Degrees of Freedom. divisor used in calculations is N - ddof (N represents the number of elements) -- DEFAULT 1
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### SKEW
```py
pd.Series.skew(self, skipna=None, numeric_only=None) # unbiased skew, normalized by N-1
pd.DataFrame.skew(self, axis=None, skipna=None, numeric_only=None) # unbiased skew over requested axis, normalized by N-1
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### KURT
Unbiased kurtosis over requested axis using Fisher's definition of kurtosis (kurtosis of normal == 0.0). Normalized by N-1.
```py
pd.Series.kurt(self, skipna=None, numeric_only=None)
pd.DataFrame.kurt(self, axis=None, skipna=None, numeric_only=None)
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### CUMSUM (cumulative sum)
```py
pd.Series.cumsum(self, skipna=True) # cumulative sum
pd.DataFrame.cumsum(self, axis=None, skipna=True) # cumulative sum over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
```
### CUMMAX - CUMMIN (cumulative maximum - minimum)
```py
pd.Series.cummax(self, skipna=True) # cumulative maximum
pd.Series.cummin(self, skipna=True) # cumulative minimum
pd.DataFrame.cummax(self, axis=None, skipna=True) # cumulative maximum over requested axis
pd.DataFrame.cummin(self, axis=None, skipna=True) # cumulative minimum over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
```
### CUMPROD (cumulative product)
```py
pd.Series.cumprod(self, skipna=True) # cumulative product
pd.DataFrame.cumprod(self, axis=None, skipna=True) # cumulative product over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
```
### DIFF
Calculates the difference of a DataFrame element compared with another element in the DataFrame.
(default is the element in the same column of the previous row)
```py
pd.Series.diff(self, periods=1)
pd.DataFrame.diff(self, periods=1, axis=0)
# PERIODS: {int} -- Periods to shift for calculating difference, accepts negative values -- DEFAULT 1
# AXIS: {0, 1, index, columns} -- Take difference over rows or columns
```
### PCT_CHANGE
Percentage change between the current and a prior element.
```py
pd.Series.pct_change(self, periods=1, fill_method='pad', limit=None, freq=None)
pd.DataFrame.pct_change(self, periods=1, fill_method='pad', limit=None)
# PERIODS:{int} -- periods to shift for forming percent change
# FILL_METHOD: {str} -- how to handle NAs before computing percent changes -- DEFAULT pad
# LIMIT: {int} -- number of consecutive NAs to fill before stopping -- DEFAULT None
```
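A quick illustration of `diff` and `pct_change` on invented values:
```py
import pandas as pd

s = pd.Series([100, 110, 99])

s.diff()        # NaN, 10.0, -11.0 -- difference from the previous row
s.pct_change()  # NaN, 0.10, -0.10 -- relative change from the previous row
```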
## HANDLING MISSING DATA
### FILTERING OUT MISSING DATA
```py
pd.Series.dropna(self, inplace=False) # return a new Series with missing values removed
pd.DataFrame.dropna(self, axis=0, how='any', thresh=None, subset=None, inplace=False) # return a new DataFrame with missing values removed
# AXIS: {0, 1, index, columns} -- determine if rows or columns which contain missing values are removed. only a single axis is allowed
# HOW: {any, all} -- determine if row or column is removed from DataFrame (ANY = if any NA present, ALL = if all values are NA). DEFAULT any
# THRESH: {int} -- require that many non-NA values
# SUBSET: {array} -- labels along other axis to consider
# INPLACE: {bool} -- if True, do operation inplace and return None -- DEFAULT False
```
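A small sketch of `how` and `thresh` on an invented frame with missing values:
```py
import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1, np.nan, np.nan],
                   'b': [4, 5, np.nan],
                   'c': [7, 8, np.nan]})

df.dropna()           # keep only rows without any NaN (row 0)
df.dropna(how='all')  # drop only rows where every value is NaN (drops row 2)
df.dropna(thresh=2)   # keep rows with at least 2 non-NA values (rows 0 and 1)
```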
### FILLING IN MISSING DATA
Fill NA/NaN values using the specified method.
```py
pd.Series.fillna(self, value=None, method=None, inplace=False, limit=None)
pd.DataFrame.fillna(self, value=None, method=None, axis=None, inplace=False, limit=None)
# VALUE: {scalar, dict, Series, DataFrame} -- value to use to fill holes, dict/Series/DataFrame specifying which value to use for each index or column
# METHOD: {backfill, pad, None} -- method to use for filling holes -- DEFAULT None
# AXIS: {0, 1, index, columns} -- axis along which to fill missing values
# INPLACE: {bool} -- if true fill in-place (will modify views of object) -- DEFAULT False
# LIMIT: {int} -- maximum number of consecutive NaN values to forward/backward fill -- DEFAULT None
```
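A sketch of the filling options on an invented Series:
```py
import pandas as pd
import numpy as np

s = pd.Series([1.0, np.nan, np.nan, 4.0])

s.fillna(0)                      # constant: 1.0, 0.0, 0.0, 4.0
s.fillna(method='pad')           # forward-fill: 1.0, 1.0, 1.0, 4.0
s.fillna(method='pad', limit=1)  # fill at most 1 consecutive NaN: 1.0, 1.0, NaN, 4.0
s.fillna(s.mean())               # computed value (mean of the non-NA values, here 2.5)
```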
## HIERARCHICAL INDEXING (MultiIndex)
Enables storing and manipulating data with an arbitrary number of dimensions
in lower dimensional data structures like Series (1D) and DataFrame (2D).
### MULTIINDEX CREATION
```py
pd.MultiIndex.from_arrays(arrays, names=None) # convert list of arrays to MultiIndex
pd.MultiIndex.from_tuples(tuples, names=None) # convert list of tuples to MultiIndex
pd.MultiIndex.from_frame(df, names=None) # convert DataFrame to MultiIndex
pd.MultiIndex.from_product(iterables, names=None) # MultiIndex from cartesian product of iterables
pd.Series(data, index=arrays) # passing a list of arrays as index makes the constructor build a MultiIndex
pd.DataFrame(data, index=arrays) # passing a list of arrays as index makes the constructor build a MultiIndex
```
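A minimal creation sketch with a made-up two-level index:
```py
import pandas as pd

arrays = [['a', 'a', 'b', 'b'], [1, 2, 1, 2]]
idx = pd.MultiIndex.from_arrays(arrays, names=['letter', 'number'])

s = pd.Series([10, 20, 30, 40], index=idx)                 # Series with a 2-level index
df = pd.DataFrame({'value': [10, 20, 30, 40]}, index=idx)  # DataFrame with a 2-level index

# equivalent index built from the cartesian product of the levels
idx2 = pd.MultiIndex.from_product([['a', 'b'], [1, 2]], names=['letter', 'number'])
```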
### MULTIINDEX LEVELS
Vector of label values for requested level, equal to the length of the index.
```py
pd.MultiIndex.get_level_values(self, level)
```
### PARTIAL AND CROSS-SECTION SELECTION
Partial selection “drops” levels of the hierarchical index in the result in a completely analogous way to selecting a column in a regular DataFrame.
```py
pd.Series.xs(self, key, axis=0, level=None, drop_level=True) # cross-section from Series
pd.DataFrame.xs(self, key, axis=0, level=None, drop_level=True) # cross-section from DataFrame
# KEY: {label, tuple of label} -- label contained in the index, or partially in a MultiIndex
# AXIS: {0, 1, index, columns} -- axis to retrieve cross-section on -- DEFAULT 0
# LEVEL: -- in case of key partially contained in MultiIndex, indicate which levels are used. Levels referred by label or position
# DROP_LEVEL: {bool} -- If False, returns object with same levels as self -- DEFAULT True
```
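Assuming the made-up `s` and `df` with the ('letter', 'number') MultiIndex from the creation sketch above, partial and cross-section selection look like this:
```py
s['a']                        # partial selection: drops the 'letter' level, returns rows a-1 and a-2
s.xs(1, level='number')       # cross-section on the second level: rows a-1 and b-1
df.xs('b')                    # rows whose first index level is 'b', 'letter' level dropped
df.xs('b', drop_level=False)  # same rows, but keep all index levels
```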
### INDEXING, SLICING
Multi index keys take the form of tuples.
```py
df.loc[('lvl_1', 'lvl_2', ...)] # selection of single row
df.loc[('idx_lvl_1', 'idx_lvl_2', ...), ('col_lvl_1', 'col_lvl_2', ...)] # selection of single value
df.loc['idx_lvl_1':'idx_lvl_1'] # slice of rows (aka partial selection)
df.loc[('idx_lvl_1', 'idx_lvl_2') : ('idx_lvl_1', 'idx_lvl_2')] # slice of rows with levels
```
### REORDERING AND SORTING LEVELS
```py
pd.MultiIndex.swaplevel(self, i=-2, j=-1) # swap level i with level j
pd.Series.swaplevel(self, i=-2, j=-1) # swap levels i and j in a MultiIndex
pd.DataFrame.swaplevel(self, i=-2, j=-1, axis=0) # swap levels i and j in a MultiIndex on a particular axis
pd.MultiIndex.sortlevel(self, level=0, ascending=True, sort_remaining=True) # sort MultiIndex at requested level
# LEVEL: {str, int, list-like} -- DEFAULT 0
# ASCENDING: {bool} -- if True, sort values in ascending order, otherwise descending -- DEFAULT True
# SORT_REMAINING: {bool} -- sort by the remaining levels after level
```
## DATA LOADING, STORAGE AND FILE FORMATS
```py
pd.read_fwf(filepath, colspecs='infer', widths=None, infer_nrows=100) # read a table of fixed-width formatted lines into DataFrame
# FILEPATH: {str, path object} -- any valid string path is acceptable, could be a URL. Valid URLs: http, ftp, s3, and file
# COLSPECS: {list of tuple (int, int), 'infer'} -- list of tuples giving extents of fixed-width fields of each line as half-open intervals { [from, to) }
# WIDTHS: {list of int} -- list of field widths which can be used instead of colspecs if intervals are contiguous
# INFER_NROWS: {int} -- number of rows to consider when letting parser determine colspecs -- DEFAULT 100
pd.read_excel() # read an Excel file into a pandas DataFrame
pd.read_json() # convert a JSON string to pandas object
pd.read_html() # read HTML tables into a list of DataFrame objects
pd.read_sql() # read SQL query or database table into a DataFrame
pd.read_csv(filepath, sep=',', *args, **kwargs) # read a comma-separated values (csv) file into DataFrame
pd.read_table(filepath, sep='\t', *args, **kwargs) # read general delimited file into DataFrame
# FILEPATH: {str, path object} -- any valid string path is acceptable, could be a URL. Valid URLs: http, ftp, s3, and file
# SEP: {str} -- delimiter to use -- DEFAULT ',' for read_csv, '\t' (tab) for read_table
# HEADER {int, list of int, 'infer'} -- row numbers to use as column names, and the start of the data -- DEFAULT 'infer'
# NAMES:{array} -- list of column names to use -- DEFAULT None
# INDEX_COL: {int, str, False, sequence of int/str, None} -- Columns to use as row labels of DataFrame, given as string name or column index -- DEFAULT None
# SKIPROWS: {list-like, int, callable} -- Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file
# NA_VALUES: {scalar, str, list-like, dict} -- additional strings to recognize as NA/NaN. if dict passed, specific per-column NA values
# THOUSANDS: {str} -- thousand separator
# *ARGS, **KWARGS -- SEE DOCS
# write object to a comma-separated values (csv) file
pd.DataFrame.to_csv(self, path_or_buf, sep=',', na_rep='', columns=None, header=True, index=True, encoding='utf-8', line_terminator=None, decimal='.', *args, **kwargs)
# SEP: {str len 1} -- Field delimiter for the output file
# NA_REP: {str} -- missing data representation
# COLUMNS: {sequence} -- columns to write
# HEADER: {bool, list of str} -- write out column names. if a list of strings is given it is assumed to be aliases for the column names
# INDEX: {bool, list of str} -- write out row names (index)
# ENCODING: {str} -- string representing encoding to use -- DEFAULT utf-8
# LINE_TERMINATOR: {str} -- newline character or character sequence to use in the output file -- DEFAULT os.linesep
# DECIMAL: {str} -- character recognized as decimal separator (in EU ,)
pd.DataFrame.to_excel()
pd.DataFrame.to_json()
pd.DataFrame.to_html()
pd.DataFrame.to_sql()
```
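A hedged round-trip sketch (file name and column names are placeholders):
```py
import pandas as pd

df = pd.DataFrame({'name': ['Anna', 'Bob'], 'score': [8.5, 7.0]})

df.to_csv('scores.csv', index=False, sep=';', decimal=',')  # write without the index, EU-style separators
df2 = pd.read_csv('scores.csv', sep=';', decimal=',')       # read it back with matching settings
df3 = pd.read_csv('scores.csv', sep=';', decimal=',', index_col='name')  # use a column as the row labels
```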

# MATPLOTLIBRC FORMAT
```py
# This is a sample matplotlib configuration file - you can find a copy
# of it on your system in
# site-packages/matplotlib/mpl-data/matplotlibrc. If you edit it
# there, please note that it will be overwritten in your next install.
# If you want to keep a permanent local copy that will not be
# overwritten, place it in the following location:
# unix/linux:
# $HOME/.config/matplotlib/matplotlibrc or
# $XDG_CONFIG_HOME/matplotlib/matplotlibrc (if $XDG_CONFIG_HOME is set)
# other platforms:
# $HOME/.matplotlib/matplotlibrc
#
# See http://matplotlib.org/users/customizing.html#the-matplotlibrc-file for
# more details on the paths which are checked for the configuration file.
#
# This file is best viewed in a editor which supports python mode
# syntax highlighting. Blank lines, or lines starting with a comment
# symbol, are ignored, as are trailing comments. Other lines must
# have the format
# key : val # optional comment
#
# Colors: for the color values below, you can either use - a
# matplotlib color string, such as r, k, or b - an rgb tuple, such as
# (1.0, 0.5, 0.0) - a hex string, such as ff00ff - a scalar
# grayscale intensity such as 0.75 - a legal html color name, e.g., red,
# blue, darkslategray
#### CONFIGURATION BEGINS HERE
# The default backend; one of GTK GTKAgg GTKCairo GTK3Agg GTK3Cairo
# MacOSX Qt4Agg Qt5Agg TkAgg WX WXAgg Agg Cairo GDK PS PDF SVG
# Template.
# You can also deploy your own backend outside of matplotlib by
# referring to the module name (which must be in the PYTHONPATH) as
# 'module://my_backend'.
#
# If you omit this parameter, it will always default to "Agg", which is a
# non-interactive backend.
# backend : qt5agg
# Note that this can be overridden by the environment variable
# QT_API used by Enthought Tool Suite (ETS); valid values are
# "pyqt" and "pyside". The "pyqt" setting has the side effect of
# forcing the use of Version 2 API for QString and QVariant.
# The port to use for the web server in the WebAgg backend.
# webagg.port : 8888
# The address on which the WebAgg web server should be reachable
# webagg.address : 127.0.0.1
# If webagg.port is unavailable, a number of other random ports will
# be tried until one that is available is found.
# webagg.port_retries : 50
# When True, open the webbrowser to the plot that is shown
# webagg.open_in_browser : True
# if you are running pyplot inside a GUI and your backend choice
# conflicts, we will automatically try to find a compatible one for
# you if backend_fallback is True
#backend_fallback: True
#interactive : False
#toolbar : toolbar2 # None | toolbar2 ("classic" is deprecated)
#timezone : UTC # a pytz timezone string, e.g., US/Central or Europe/Paris
# Where your matplotlib data lives if you installed to a non-default
# location. This is where the matplotlib fonts, bitmaps, etc reside
#datapath : /home/jdhunter/mpldata
### LINES
# See http://matplotlib.org/api/artist_api.html#module-matplotlib.lines for more
# information on line properties.
#lines.linewidth : 1.5 # line width in points
#lines.linestyle : - # solid line
#lines.color : C0 # has no affect on plot(); see axes.prop_cycle
#lines.marker : None # the default marker
#lines.markeredgewidth : 1.0 # the line width around the marker symbol
#lines.markersize : 6 # markersize, in points
#lines.dash_joinstyle : miter # miter|round|bevel
#lines.dash_capstyle : butt # butt|round|projecting
#lines.solid_joinstyle : miter # miter|round|bevel
#lines.solid_capstyle : projecting # butt|round|projecting
#lines.antialiased : True # render lines in antialiased (no jaggies)
# The three standard dash patterns. These are scaled by the linewidth.
#lines.dashed_pattern : 2.8, 1.2
#lines.dashdot_pattern : 4.8, 1.2, 0.8, 1.2
#lines.dotted_pattern : 1.1, 1.1
#lines.scale_dashes : True
#markers.fillstyle: full # full|left|right|bottom|top|none
### PATCHES
# Patches are graphical objects that fill 2D space, like polygons or
# circles. See
# http://matplotlib.org/api/artist_api.html#module-matplotlib.patches
# information on patch properties
#patch.linewidth : 1 # edge width in points.
#patch.facecolor : C0
#patch.edgecolor : black # if forced, or patch is not filled
#patch.force_edgecolor : False # True to always use edgecolor
#patch.antialiased : True # render patches in antialiased (no jaggies)
### HATCHES
#hatch.color : k
#hatch.linewidth : 1.0
### Boxplot
#boxplot.notch : False
#boxplot.vertical : True
#boxplot.whiskers : 1.5
#boxplot.bootstrap : None
#boxplot.patchartist : False
#boxplot.showmeans : False
#boxplot.showcaps : True
#boxplot.showbox : True
#boxplot.showfliers : True
#boxplot.meanline : False
#boxplot.flierprops.color : 'k'
#boxplot.flierprops.marker : 'o'
#boxplot.flierprops.markerfacecolor : 'none'
#boxplot.flierprops.markeredgecolor : 'k'
#boxplot.flierprops.markersize : 6
#boxplot.flierprops.linestyle : 'none'
#boxplot.flierprops.linewidth : 1.0
#boxplot.boxprops.color : 'k'
#boxplot.boxprops.linewidth : 1.0
#boxplot.boxprops.linestyle : '-'
#boxplot.whiskerprops.color : 'k'
#boxplot.whiskerprops.linewidth : 1.0
#boxplot.whiskerprops.linestyle : '-'
#boxplot.capprops.color : 'k'
#boxplot.capprops.linewidth : 1.0
#boxplot.capprops.linestyle : '-'
#boxplot.medianprops.color : 'C1'
#boxplot.medianprops.linewidth : 1.0
#boxplot.medianprops.linestyle : '-'
#boxplot.meanprops.color : 'C2'
#boxplot.meanprops.marker : '^'
#boxplot.meanprops.markerfacecolor : 'C2'
#boxplot.meanprops.markeredgecolor : 'C2'
#boxplot.meanprops.markersize : 6
#boxplot.meanprops.linestyle : 'none'
#boxplot.meanprops.linewidth : 1.0
### FONT
#
# font properties used by text.Text. See
# http://matplotlib.org/api/font_manager_api.html for more
# information on font properties. The 6 font properties used for font
# matching are given below with their default values.
#
# The font.family property has five values: 'serif' (e.g., Times),
# 'sans-serif' (e.g., Helvetica), 'cursive' (e.g., Zapf-Chancery),
# 'fantasy' (e.g., Western), and 'monospace' (e.g., Courier). Each of
# these font families has a default list of font names in decreasing
# order of priority associated with them. When text.usetex is False,
# font.family may also be one or more concrete font names.
#
# The font.style property has three values: normal (or roman), italic
# or oblique. The oblique style will be used for italic, if it is not
# present.
#
# The font.variant property has two values: normal or small-caps. For
# TrueType fonts, which are scalable fonts, small-caps is equivalent
# to using a font size of 'smaller', or about 83%% of the current font
# size.
#
# The font.weight property has effectively 13 values: normal, bold,
# bolder, lighter, 100, 200, 300, ..., 900. Normal is the same as
# 400, and bold is 700. bolder and lighter are relative values with
# respect to the current weight.
#
# The font.stretch property has 11 values: ultra-condensed,
# extra-condensed, condensed, semi-condensed, normal, semi-expanded,
# expanded, extra-expanded, ultra-expanded, wider, and narrower. This
# property is not currently implemented.
#
# The font.size property is the default font size for text, given in pts.
# 10 pt is the standard value.
#
#font.family : sans-serif
#font.style : normal
#font.variant : normal
#font.weight : medium
#font.stretch : normal
# note that font.size controls default text sizes. To configure
# special text sizes tick labels, axes, labels, title, etc, see the rc
# settings for axes and ticks. Special text sizes can be defined
# relative to font.size, using the following values: xx-small, x-small,
# small, medium, large, x-large, xx-large, larger, or smaller
#font.size : 10.0
#font.serif : DejaVu Serif, Bitstream Vera Serif, New Century Schoolbook, Century Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times New Roman, Times, Palatino, Charter, serif
#font.sans-serif : DejaVu Sans, Bitstream Vera Sans, Lucida Grande, Verdana, Geneva, Lucid, Arial, Helvetica, Avant Garde, sans-serif
#font.cursive : Apple Chancery, Textile, Zapf Chancery, Sand, Script MT, Felipa, cursive
#font.fantasy : Comic Sans MS, Chicago, Charcoal, Impact, Western, Humor Sans, xkcd, fantasy
#font.monospace : DejaVu Sans Mono, Bitstream Vera Sans Mono, Andale Mono, Nimbus Mono L, Courier New, Courier, Fixed, Terminal, monospace
### TEXT
# text properties used by text.Text. See
# http://matplotlib.org/api/artist_api.html#module-matplotlib.text for more
# information on text properties
#text.color : black
### LaTeX customizations. See http://wiki.scipy.org/Cookbook/Matplotlib/UsingTex
#text.usetex : False # use latex for all text handling. The following fonts
# are supported through the usual rc parameter settings:
# new century schoolbook, bookman, times, palatino,
# zapf chancery, charter, serif, sans-serif, helvetica,
# avant garde, courier, monospace, computer modern roman,
# computer modern sans serif, computer modern typewriter
# If another font is desired which can loaded using the
# LaTeX \usepackage command, please inquire at the
# matplotlib mailing list
#text.latex.unicode : False # use "ucs" and "inputenc" LaTeX packages for handling
# unicode strings.
#text.latex.preamble : # IMPROPER USE OF THIS FEATURE WILL LEAD TO LATEX FAILURES
# AND IS THEREFORE UNSUPPORTED. PLEASE DO NOT ASK FOR HELP
# IF THIS FEATURE DOES NOT DO WHAT YOU EXPECT IT TO.
# preamble is a comma separated list of LaTeX statements
# that are included in the LaTeX document preamble.
# An example:
# text.latex.preamble : \usepackage{bm},\usepackage{euler}
# The following packages are always loaded with usetex, so
# beware of package collisions: color, geometry, graphicx,
# type1cm, textcomp. Adobe Postscript (PSSNFS) font packages
# may also be loaded, depending on your font settings
#text.hinting : auto # May be one of the following:
# 'none': Perform no hinting
# 'auto': Use FreeType's autohinter
# 'native': Use the hinting information in the
# font file, if available, and if your
# FreeType library supports it
# 'either': Use the native hinting information,
# or the autohinter if none is available.
# For backward compatibility, this value may also be
# True === 'auto' or False === 'none'.
#text.hinting_factor : 8 # Specifies the amount of softness for hinting in the
# horizontal direction. A value of 1 will hint to full
# pixels. A value of 2 will hint to half pixels etc.
#text.antialiased : True # If True (default), the text will be antialiased.
# This only affects the Agg backend.
# The following settings allow you to select the fonts in math mode.
# They map from a TeX font name to a fontconfig font pattern.
# These settings are only used if mathtext.fontset is 'custom'.
# Note that this "custom" mode is unsupported and may go away in the
# future.
#mathtext.cal : cursive
#mathtext.rm : serif
#mathtext.tt : monospace
#mathtext.it : serif:italic
#mathtext.bf : serif:bold
#mathtext.sf : sans
#mathtext.fontset : dejavusans # Should be 'dejavusans' (default),
# 'dejavuserif', 'cm' (Computer Modern), 'stix',
# 'stixsans' or 'custom'
#mathtext.fallback_to_cm : True # When True, use symbols from the Computer Modern
# fonts when a symbol can not be found in one of
# the custom math fonts.
#mathtext.default : it # The default font to use for math.
# Can be any of the LaTeX font names, including
# the special name "regular" for the same font
# used in regular text.
### AXES
# default face and edge color, default tick sizes,
# default fontsizes for ticklabels, and so on. See
# http://matplotlib.org/api/axes_api.html#module-matplotlib.axes
#axes.facecolor : white # axes background color
#axes.edgecolor : black # axes edge color
#axes.linewidth : 0.8 # edge linewidth
#axes.grid : False # display grid or not
#axes.titlesize : large # fontsize of the axes title
#axes.titlepad : 6.0 # pad between axes and title in points
#axes.labelsize : medium # fontsize of the x any y labels
#axes.labelpad : 4.0 # space between label and axis
#axes.labelweight : normal # weight of the x and y labels
#axes.labelcolor : black
#axes.axisbelow : 'line' # draw axis gridlines and ticks below
# patches (True); above patches but below
# lines ('line'); or above all (False)
#axes.formatter.limits : -7, 7 # use scientific notation if log10
# of the axis range is smaller than the
# first or larger than the second
#axes.formatter.use_locale : False # When True, format tick labels
# according to the user's locale.
# For example, use ',' as a decimal
# separator in the fr_FR locale.
#axes.formatter.use_mathtext : False # When True, use mathtext for scientific
# notation.
#axes.formatter.min_exponent: 0 # minimum exponent to format in scientific notation
#axes.formatter.useoffset : True # If True, the tick label formatter
# will default to labeling ticks relative
# to an offset when the data range is
# small compared to the minimum absolute
# value of the data.
#axes.formatter.offset_threshold : 4 # When useoffset is True, the offset
# will be used when it can remove
# at least this number of significant
# digits from tick labels.
# axes.spines.left : True # display axis spines
# axes.spines.bottom : True
# axes.spines.top : True
# axes.spines.right : True
#axes.unicode_minus : True # use unicode for the minus symbol
# rather than hyphen. See
# http://en.wikipedia.org/wiki/Plus_and_minus_signs#Character_codes
# axes.prop_cycle : cycler('color', ['1f77b4', 'ff7f0e', '2ca02c', 'd62728', '9467bd', '8c564b', 'e377c2', '7f7f7f', 'bcbd22', '17becf'])
# color cycle for plot lines as list of string
# colorspecs: single letter, long name, or web-style hex
#axes.autolimit_mode : data # How to scale axes limits to the data.
# Use "data" to use data limits, plus some margin
# Use "round_number" move to the nearest "round" number
#axes.xmargin : .05 # x margin. See `axes.Axes.margins`
#axes.ymargin : .05 # y margin See `axes.Axes.margins`
#polaraxes.grid : True # display grid on polar axes
#axes3d.grid : True # display grid on 3d axes
### DATES
# These control the default format strings used in AutoDateFormatter.
# Any valid format datetime format string can be used (see the python
# `datetime` for details). For example using '%%x' will use the locale date representation
# '%%X' will use the locale time representation and '%%c' will use the full locale datetime
# representation.
# These values map to the scales:
# {'year': 365, 'month': 30, 'day': 1, 'hour': 1/24, 'minute': 1 / (24 * 60)}
# date.autoformatter.year : %Y
# date.autoformatter.month : %Y-%m
# date.autoformatter.day : %Y-%m-%d
# date.autoformatter.hour : %m-%d %H
# date.autoformatter.minute : %d %H:%M
# date.autoformatter.second : %H:%M:%S
# date.autoformatter.microsecond : %M:%S.%f
### TICKS
# see http://matplotlib.org/api/axis_api.html#matplotlib.axis.Tick
#xtick.top : False # draw ticks on the top side
#xtick.bottom : True # draw ticks on the bottom side
#xtick.major.size : 3.5 # major tick size in points
#xtick.minor.size : 2 # minor tick size in points
#xtick.major.width : 0.8 # major tick width in points
#xtick.minor.width : 0.6 # minor tick width in points
#xtick.major.pad : 3.5 # distance to major tick label in points
#xtick.minor.pad : 3.4 # distance to the minor tick label in points
#xtick.color : k # color of the tick labels
#xtick.labelsize : medium # fontsize of the tick labels
#xtick.direction : out # direction: in, out, or inout
#xtick.minor.visible : False # visibility of minor ticks on x-axis
#xtick.major.top : True # draw x axis top major ticks
#xtick.major.bottom : True # draw x axis bottom major ticks
#xtick.minor.top : True # draw x axis top minor ticks
#xtick.minor.bottom : True # draw x axis bottom minor ticks
#ytick.left : True # draw ticks on the left side
#ytick.right : False # draw ticks on the right side
#ytick.major.size : 3.5 # major tick size in points
#ytick.minor.size : 2 # minor tick size in points
#ytick.major.width : 0.8 # major tick width in points
#ytick.minor.width : 0.6 # minor tick width in points
#ytick.major.pad : 3.5 # distance to major tick label in points
#ytick.minor.pad : 3.4 # distance to the minor tick label in points
#ytick.color : k # color of the tick labels
#ytick.labelsize : medium # fontsize of the tick labels
#ytick.direction : out # direction: in, out, or inout
#ytick.minor.visible : False # visibility of minor ticks on y-axis
#ytick.major.left : True # draw y axis left major ticks
#ytick.major.right : True # draw y axis right major ticks
#ytick.minor.left : True # draw y axis left minor ticks
#ytick.minor.right : True # draw y axis right minor ticks
### GRIDS
#grid.color : b0b0b0 # grid color
#grid.linestyle : - # solid
#grid.linewidth : 0.8 # in points
#grid.alpha : 1.0 # transparency, between 0.0 and 1.0
### Legend
#legend.loc : best
#legend.frameon : True # if True, draw the legend on a background patch
#legend.framealpha : 0.8 # legend patch transparency
#legend.facecolor : inherit # inherit from axes.facecolor; or color spec
#legend.edgecolor : 0.8 # background patch boundary color
#legend.fancybox : True # if True, use a rounded box for the
# legend background, else a rectangle
#legend.shadow : False # if True, give background a shadow effect
#legend.numpoints : 1 # the number of marker points in the legend line
#legend.scatterpoints : 1 # number of scatter points
#legend.markerscale : 1.0 # the relative size of legend markers vs. original
#legend.fontsize : medium
# Dimensions as fraction of fontsize:
#legend.borderpad : 0.4 # border whitespace
#legend.labelspacing : 0.5 # the vertical space between the legend entries
#legend.handlelength : 2.0 # the length of the legend lines
#legend.handleheight : 0.7 # the height of the legend handle
#legend.handletextpad : 0.8 # the space between the legend line and legend text
#legend.borderaxespad : 0.5 # the border between the axes and legend edge
#legend.columnspacing : 2.0 # column separation
### FIGURE
# See http://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure
#figure.titlesize : large # size of the figure title (Figure.suptitle())
#figure.titleweight : normal # weight of the figure title
#figure.figsize : 6.4, 4.8 # figure size in inches
#figure.dpi : 100 # figure dots per inch
#figure.facecolor : white # figure facecolor; 0.75 is scalar gray
#figure.edgecolor : white # figure edgecolor
#figure.autolayout : False # When True, automatically adjust subplot
# parameters to make the plot fit the figure
# using `tight_layout`
#figure.constrained_layout.use: False # When True, automatically make plot
# elements fit on the figure. (Not compatible
# with `autolayout`, above).
#figure.max_open_warning : 20 # The maximum number of figures to open through
# the pyplot interface before emitting a warning.
# If less than one this feature is disabled.
# The figure subplot parameters. All dimensions are a fraction of the
#figure.subplot.left : 0.125 # the left side of the subplots of the figure
#figure.subplot.right : 0.9 # the right side of the subplots of the figure
#figure.subplot.bottom : 0.11 # the bottom of the subplots of the figure
#figure.subplot.top : 0.88 # the top of the subplots of the figure
#figure.subplot.wspace : 0.2 # the amount of width reserved for space between subplots,
# expressed as a fraction of the average axis width
#figure.subplot.hspace : 0.2 # the amount of height reserved for space between subplots,
# expressed as a fraction of the average axis height
### IMAGES
#image.aspect : equal # equal | auto | a number
#image.interpolation : nearest # see help(imshow) for options
#image.cmap : viridis # A colormap name, gray etc...
#image.lut : 256 # the size of the colormap lookup table
#image.origin : upper # lower | upper
#image.resample : True
#image.composite_image : True # When True, all the images on a set of axes are
# combined into a single composite image before
# saving a figure as a vector graphics file,
# such as a PDF.
### CONTOUR PLOTS
#contour.negative_linestyle : dashed # string or on-off ink sequence
#contour.corner_mask : True # True | False | legacy
### ERRORBAR PLOTS
#errorbar.capsize : 0 # length of end cap on error bars in pixels
### HISTOGRAM PLOTS
#hist.bins : 10 # The default number of histogram bins.
# If Numpy 1.11 or later is
# installed, may also be `auto`
### SCATTER PLOTS
#scatter.marker : o # The default marker type for scatter plots.
### Agg rendering
### Warning: experimental, 2008/10/10
#agg.path.chunksize : 0 # 0 to disable; values in the range
# 10000 to 100000 can improve speed slightly
# and prevent an Agg rendering failure
# when plotting very large data sets,
# especially if they are very gappy.
# It may cause minor artifacts, though.
# A value of 20000 is probably a good
# starting point.
### SAVING FIGURES
#path.simplify : True # When True, simplify paths by removing "invisible"
# points to reduce file size and increase rendering
# speed
#path.simplify_threshold : 0.1 # The threshold of similarity below which
# vertices will be removed in the simplification
# process
#path.snap : True # When True, rectilinear axis-aligned paths will be snapped to
# the nearest pixel when certain criteria are met. When False,
# paths will never be snapped.
#path.sketch : None # May be none, or a 3-tuple of the form (scale, length,
# randomness).
# *scale* is the amplitude of the wiggle
# perpendicular to the line (in pixels). *length*
# is the length of the wiggle along the line (in
# pixels). *randomness* is the factor by which
# the length is randomly scaled.
# the default savefig params can be different from the display params
# e.g., you may want a higher resolution, or to make the figure
# background white
#savefig.dpi : figure # figure dots per inch or 'figure'
#savefig.facecolor : white # figure facecolor when saving
#savefig.edgecolor : white # figure edgecolor when saving
#savefig.format : png # png, ps, pdf, svg
#savefig.bbox : standard # 'tight' or 'standard'.
# 'tight' is incompatible with pipe-based animation
# backends but will work with temporary file based ones:
# e.g. setting animation.writer to ffmpeg will not work,
# use ffmpeg_file instead
#savefig.pad_inches : 0.1 # Padding to be used when bbox is set to 'tight'
#savefig.jpeg_quality: 95 # when a jpeg is saved, the default quality parameter.
#savefig.directory : ~ # default directory in savefig dialog box,
# leave empty to always use current working directory
#savefig.transparent : False # setting that controls whether figures are saved with a
# transparent background by default
# tk backend params
#tk.window_focus : False # Maintain shell focus for TkAgg
# ps backend params
#ps.papersize : letter # auto, letter, legal, ledger, A0-A10, B0-B10
#ps.useafm : False # use of afm fonts, results in small files
#ps.usedistiller : False # can be: None, ghostscript or xpdf
# Experimental: may produce smaller files.
# xpdf intended for production of publication quality files,
# but requires ghostscript, xpdf and ps2eps
#ps.distiller.res : 6000 # dpi
#ps.fonttype : 3 # Output Type 3 (Type3) or Type 42 (TrueType)
# pdf backend params
#pdf.compression : 6 # integer from 0 to 9
# 0 disables compression (good for debugging)
#pdf.fonttype : 3 # Output Type 3 (Type3) or Type 42 (TrueType)
# svg backend params
#svg.image_inline : True # write raster image data directly into the svg file
#svg.fonttype : 'path' # How to handle SVG fonts:
# 'none': Assume fonts are installed on the machine where the SVG will be viewed.
# 'path': Embed characters as paths -- supported by most SVG renderers
# 'svgfont': Embed characters as SVG fonts -- supported only by Chrome,
# Opera and Safari
#svg.hashsalt : None # if not None, use this string as hash salt
# instead of uuid4
# docstring params
#docstring.hardcopy = False # set this when you want to generate hardcopy docstring
# Event keys to interact with figures/plots via keyboard.
# Customize these settings according to your needs.
# Leave the field(s) empty if you don't need a key-map. (i.e., fullscreen : '')
#keymap.fullscreen : f, ctrl+f # toggling
#keymap.home : h, r, home # home or reset mnemonic
#keymap.back : left, c, backspace # forward / backward keys to enable
#keymap.forward : right, v # left handed quick navigation
#keymap.pan : p # pan mnemonic
#keymap.zoom : o # zoom mnemonic
#keymap.save : s # saving current figure
#keymap.quit : ctrl+w, cmd+w # close the current figure
#keymap.grid : g # switching on/off major grids in current axes
#keymap.grid_minor : G # switching on/off minor grids in current axes
#keymap.yscale : l # toggle scaling of y-axes ('log'/'linear')
#keymap.xscale : L, k # toggle scaling of x-axes ('log'/'linear')
#keymap.all_axes : a # enable all axes
# Control location of examples data files
#examples.directory : '' # directory to look in for custom installation
###ANIMATION settings
#animation.html : 'none' # How to display the animation as HTML in
# the IPython notebook. 'html5' uses
# HTML5 video tag.
#animation.writer : ffmpeg # MovieWriter 'backend' to use
#animation.codec : h264 # Codec to use for writing movie
#animation.bitrate: -1 # Controls size/quality tradeoff for movie.
# -1 implies let utility auto-determine
#animation.frame_format: 'png' # Controls frame format used by temp files
#animation.html_args: '' # Additional arguments to pass to html writer
#animation.ffmpeg_path: 'ffmpeg' # Path to ffmpeg binary. Without full path
# $PATH is searched
#animation.ffmpeg_args: '' # Additional arguments to pass to ffmpeg
#animation.avconv_path: 'avconv' # Path to avconv binary. Without full path
# $PATH is searched
#animation.avconv_args: '' # Additional arguments to pass to avconv
#animation.convert_path: 'convert' # Path to ImageMagick's convert binary.
# On Windows use the full path since convert
# is also the name of a system tool.
#animation.convert_args: '' # Additional arguments to pass to convert
```
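The same keys can also be changed at runtime through `matplotlib.rcParams`, without editing the matplotlibrc file; a brief sketch (the output file name is just a placeholder):
```py
import matplotlib as mpl
import matplotlib.pyplot as plt

mpl.rcParams['lines.linewidth'] = 2.0            # same key/value pairs as in matplotlibrc
mpl.rcParams['figure.figsize'] = (8, 5)
plt.rc('grid', color='#b0b0b0', linestyle='--')  # group form: sets grid.color and grid.linestyle

with mpl.rc_context({'axes.grid': True}):        # temporary override, restored on exit
    plt.plot([1, 2, 3], [1, 4, 9])
    plt.savefig('example.png')

mpl.rcdefaults()                                 # restore matplotlib's built-in defaults
```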

# Seaborn Lib
## Basic Imports For Seaborn
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# set aesthetic parameters in one step
sns.set(style='darkgrid')
#STYLE: {None, darkgrid, whitegrid, dark, white, ticks}
```
## RELPLOT (relationship)
```python
sns.relplot(x='name_in_data', y='name_in_data', hue='point_color', size='point_size', style='point_shape', data=data)
# HUE, SIZE and STYLE: {name in data} -- used to differentiate points, a sort-of 3rd dimension
# hue behaves differently if the data is categorical or numerical, numerical uses a color gradient
# SORT: {False, True} -- avoid sorting data in function of x
# CI: {None, sd} -- avoid computing confidence intervals or plot standard deviation
# (aggregate multiple measurements at each x value by plotting the mean and the 95% confidence interval around the mean)
# ESTIMATOR: {None} -- turn off aggregation of multiple observations
# MARKERS: {True, False} -- mark observations with dots
# DASHES: {True, False} -- mark observations with dashes
# COL, ROW: {name in data} -- categorical variables that will determine the grid of plots
# COL_WRAP: {int} -- “Wrap” the column variable at this width, so that the column facets span multiple rows. Incompatible with a row facet.
# SCATTERPLOT
# depicts the joint distribution of two variables using a cloud of points
# kind can be omitted since scatterplot is the default for relplot
sns.relplot(kind='scatter') # calls scatterplot()
sns.scatterplot() # underlying axis-level function of relplot()
```
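A small usage sketch on seaborn's bundled `tips` example dataset (downloaded on first use):
```python
import seaborn as sns
import matplotlib.pyplot as plt

sns.set(style='darkgrid')
tips = sns.load_dataset('tips')  # bundled example dataset

# scatter of bill vs tip, colored by smoker status, point size mapped to party size
sns.relplot(x='total_bill', y='tip', hue='smoker', size='size', data=tips)
plt.show()
```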
### LINEPLOT
Using semantics in lineplot will determine the aggregation of data.
```python
sns.relplot(ci=None, sort=bool, kind='line')
sns.lineplot() # underlying axis-level function of relplot()
```
## CATPLOT (categorical)
Categorical: divided into discrete groups.
```python
sns.catplot(x='name_in_data', y='name_in_data', data=data)
# HUE: {name in data} -- used to differentiate points, a sort-of 3rd dimension
# COL, ROW: {name in data} -- categorical variables that will determine the grid of plots
# COL_WRAP: {int} -- “Wrap” the column variable at this width, so that the column facets span multiple rows. Incompatible with a row facet.
# ORDER, HUE_ORDER: {list of strings} -- order of categorical levels of the plot
# ROW_ORDER, COL_ORDER: {list of strings} -- order to organize the rows and/or columns of the grid in
# ORIENT: {'v', 'h'} -- Orientation of the plot (can also swap x&y assignment)
# COLOR: {matplotlib color} -- Color for all of the elements, or seed for a gradient palette
# CATEGORICAL SCATTERPLOT - STRIPPLOT
# adjust the positions of points on the categorical axis with a small amount of random “jitter”
sns.catplot(kind='strip', jitter=float)
sns.stripplot()
# SIZE: {float} -- Diameter of the markers, in points
# JITTER: {False, float} -- magnitude of points jitter (distance from axis)
```
### CATEGORICAL SCATTERPLOT - SWARMPLOT
Adjusts the points along the categorical axis preventing overlap.
```py
sns.catplot(kind='swarm')
sns.swarmplot()
# SIZE: {float} -- Diameter of the markers, in points
# CATEGORICAL DISTRIBUTION - BOXPLOT
# shows the three quartile values of the distribution along with extreme values
sns.catplot(kind='box')
sns.boxplot()
# HUE: {name in data} -- box for each level of the semantic moved along the categorical axis so they don't overlap
# DODGE: {bool} -- whether elements should be shifted along the categorical axis if hue is used
```
### CATEGORICAL DISTRIBUTION - VIOLINPLOT
Combines a boxplot with the kernel density estimation procedure.
```py
sns.catplot(kind='violin')
sns.violinplot()
```
### CATEGORICAL DISTRIBUTION - BOXENPLOT
Plot similar to boxplot but optimized for showing more information about the shape of the distribution.
It is best suited for larger datasets.
```py
sns.catplot(kind='boxen')
sns.boxenplot()
```
### CATEGORICAL ESTIMATE - POINTPLOT
Show point estimates and confidence intervals using scatter plot glyphs.
```py
sns.catplot(kind='point')
sns.pointplot()
# CI: {float, sd} -- size of confidence intervals to draw around estimated values, sd -> standard deviation
# MARKERS: {string, list of strings} -- markers to use for each of the hue levels
# LINESTYLES: {string, list of strings} -- line styles to use for each of the hue levels
# DODGE: {bool, float} -- amount to separate the points for each hue level along the categorical axis
# JOIN: {bool} -- if True, lines will be drawn between point estimates at the same hue level
# SCALE: {float} -- scale factor for the plot elements
# ERRWIDTH: {float} -- thickness of error bar lines (and caps)
# CAPSIZE: {float} -- width of the “caps” on error bars
```
### CATEGORICAL ESTIMATE - BARPLOT
Show point estimates and confidence intervals as rectangular bars.
```py
sns.catplot(kind='bar')
sns.barplot()
# CI: {float, sd} -- size of confidence intervals to draw around estimated values, sd -> standard deviation
# ERRCOLOR: {matplotlib color} -- color for the lines that represent the confidence interval
# ERRWIDTH: {float} -- thickness of error bar lines (and caps)
# CAPSIZE: {float} -- width of the “caps” on error bars
# DODGE: {bool} -- whether elements should be shifted along the categorical axis if hue is used
```
### CATEGORICAL ESTIMATE - COUNTPLOT
Show the counts of observations in each categorical bin using bars.
```py
sns.catplot(kind='count')
sns.countplot()
# DODGE: {bool} -- whether elements should be shifted along the categorical axis if hue is used
```
## UNIVARIATE DISTRIBUTIONS
### DISTPLOT
Flexibly plot a univariate distribution of observations
```py
# A: {series, 1d-array, list}
sns.distplot(a=data)
# BINS: {None, arg for matplotlib hist()} -- specification of hist bins, or None to use Freedman-Diaconis rule
# HIST: {bool} - whether to plot a (normed) histogram
# KDE: {bool} - whether to plot a gaussian kernel density estimate
# HIST_KWD, KDE_KWD, RUG_KWD: {dict} -- keyword arguments for underlying plotting functions
# COLOR: {matplotlib color} -- color to plot everything but the fitted curve in
```
### RUGPLOT
Plot datapoints in an array as sticks on an axis.
```py
# A: {vector} -- 1D array of observations
sns.rugplot(a=data) # -> axes obj with plot on it
# HEIGHT: {scalar} -- height of ticks as proportion of the axis
# AXIS: {'x', 'y'} -- axis to draw rugplot on
# AX: {matplotlib axes} -- axes to draw plot into, otherwise grabs current axes
```
### KDEPLOT
Fit and plot a univariate or bivariate kernel density estimate.
```py
# DATA: {1D array-like} -- input data
sns.kdeplot(data=data)
# DATA2 {1D array-like} -- second input data. if present, a bivariate KDE will be estimated.
# SHADE: {bool} -- if True, shade-in the area under KDE curve (or draw with filled contours is bivariate)
```
## BIVARIATE DISTRIBUTION
### JOINTPLOT
Draw a plot of two variables with bivariate and univariate graphs.
```py
# X, Y: {string, vector} -- data or names of variables in data
sns.jointplot(x=data, y=data)
# DATA:{pandas DataFrame} -- DataFrame when x and y are variable names
# KIND: {'scatter', 'reg', 'resid', 'kde', 'hex'} -- kind of plot to draw
# COLOR: {matplotlib color} -- color used for plot elements
# HEIGHT: {numeric} -- size of figure (it will be square)
# RATIO: {numeric} -- ratio of joint axes height to marginal axes height
# SPACE: {numeric} -- space between the joint and marginal axes
# JOINT_KWD, MARGINAL_KWD, ANNOT_KWD: {dict} -- additional keyword arguments for the plot components
```
## PAIR-WISE RELATIONSHIPS IN DATASET
### PAIRPLOT
Plot pairwise relationships in a dataset.
```py
# DATA: {pandas DataFrame} -- tidy (long-form) dataframe where each column is a variable and each row is an observation
sns.pairplot(data=pd.DataFrame)
# HUE: {string (variable name)} -- variable in data to map plot aspects to different colors
# HUE_ORDER: {list of strings} -- order for the levels of the hue variable in the palette
# VARS: {list of variable names} -- variables within data to use, otherwise every column with numeric datatype
# X_VARS, Y_VARS: {list of variable names} -- variables within data to use separately for rows and columns of figure
# KIND: {'scatter', 'reg'} -- kind of plot for the non-identity relationships
# DIAG_KIND: {'auto', 'hist', 'kde'} -- Kind of plot for the diagonal subplots. default depends on hue
# MARKERS: {matplotlib marker or list}
# HEIGHT:{scalar} -- height (in inches) of each facet
# ASPECT: {scalar} -- aspect * height gives the width (in inches) of each facet
```

# [Beautiful Soup Library](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
## Making the Soup
```py
from bs4 import BeautifulSoup
import requests
import lxml # better html parser than built-in
response = requests.get("url") # retrieve a web page
soup = BeautifulSoup(response.text, "html.parser") # parse HTML from response w/ python default HTML parser
soup = BeautifulSoup(response.text, "lxml") # parse HTML from response w/ lxml parser
soup.prettify() # prettify parsed HTML for display
```
## Kinds of Objects
Beautiful Soup transforms a complex HTML document into a complex tree of Python objects.
### Tag
A Tag object corresponds to an XML or HTML tag in the original document
```py
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser') # parse HTML/XML
tag = soup.b
type(tag) # <class 'bs4.element.Tag'>
print(tag) # <b class="boldest">Extremely bold</b>
tag.name # tag name
tag["attribute"] # access to ttag attribute values
tag.attrs # dict of attribue-value pairs
```
### Navigable String
A string corresponds to a bit of text within a tag. Beautiful Soup uses the `NavigableString` class to contain these bits of text.
## Navigating the Tree
### Going Down
```py
soup.<tag>.<child_tag> # navigate using tag names
<tag>.contents # direct children as a list
<tag>.children # direct children as a generator for iteration
<tag>.descendants # iterator over all children, recursive
<tag>.string # tag contents, does not have further children
# If a tag's only child is another tag, and that tag has a .string, then the parent tag is considered to have the same .string as its child
# If a tag contains more than one thing, then it's not clear what .string should refer to, so .string is defined to be None
<tag>.strings # generator to iterate over all children's strings (will list white space)
<tag>.stripped_strings # generator to iterate over all children's strings (will NOT list white space)
```
### Going Up
```py
<tag>.parent # tag's direct parent (the BeautifulSoup object has parent None, the html tag has parent BeautifulSoup)
<tag>.parents # iterable over all parents
```
### Going Sideways
```py
<tag>.previous_sibling
<tag>.next_sibling
<tag>.previous_siblings
<tag>.next_siblings
```
### Going Back and Forth
```py
<tag>.previous_element # whatever was parsed immediately before
<tag>.next_element # whatever was parsed immediately afterwards
<tag>.previous_elements # iterable over whatever was parsed immediately before
<tag>.next_elements # iterable over whatever was parsed immediately afterwards
```
## Searching the Tree
### Filter Types
```py
soup.find_all("tag") # by name
soup.find_all(["tag1", "tag2"]) # multiple tags in a list
soup.find_all(function) # based on a bool function
soup.find_all(True) # match everything
```
### Methods
Methods arguments:
- `name` (string): tag to search for
- `attrs` (dict): attribute-value pair to search for
- `string` (string): search by string contents rather than by tag
- `limit` (int): limit number of results
- `**kwargs`: keyword arguments are turned into a filter on one of a tag's attributes
```py
find_all(name, attrs, recursive, string, limit, **kwargs) # several results
find(name, attrs, recursive, string, **kwargs) # one result
find_parents(name, attrs, string, limit, **kwargs) # several results
find_parent(name, attrs, string, **kwargs) # one result
find_next_siblings(name, attrs, string, limit, **kwargs) # several results
find_next_sibling(name, attrs, string, **kwargs) # one result
find_previous_siblings(name, attrs, string, limit, **kwargs) # several results
find_previous_sibling(name, attrs, string, **kwargs) # one result
find_all_next(name, attrs, string, limit, **kwargs) # several results
find_next(name, attrs, string, **kwargs) # one result
find_all_previous(name, attrs, string, limit, **kwargs) # several results
find_previous(name, attrs, string, **kwargs) # one result
soup("html_tag") # same as soup.find_all("html_tag")
soup.find("html_tag").text # text of the found tag
soup.select("css_selector") # search for CSS selectors of HTML tags
```
## Modifying the Tree
### Changing Tag Names an Attributes
```py
<tag>.name = "new_html_tag" # modify the tag type
<tag>["attribute"] = "value" # modify the attribute value
del <tag>["attribute"] # remove the attribute
soup.new_tag("name", <attribute> = "value") # create a new tag with the specified name and attributes
<tag>.string = "new content" # modify tag text content
<tag>.append(item) # append to Tag content
<tag>.extend([item1, item2]) # add every element of the list in order
<tag>.insert(position: int, item) # like .insert in Python list
<tag>.insert_before(new_tag) # insert tags or strings immediately before something else in the parse tree
<tag>.insert_after(new_tag) # insert tags or strings immediately after something else in the parse tree
<tag>.clear() # remove all tag's contents
<tag>.extract() # extract and return the tag from the tree (operates on self)
<tag>.string.extract() # extract and return the string from the tree (operates on self)
<tag>.decompose() # remove a tag from the tree, then completely destroy it and its contents
<tag>.decomposed # check if the tag has been decomposed
<tag>.replace_with(item) # remove a tag or string from the tree, and replaces it with the tag or string of choice
<tag>.wrap(other_tag) # wrap an element in the tag you specify, return the new wrapper
<tag>.unwrap() # replace a tag with whatever's inside, useful for stripping out markup
<tag>.smooth() # clean up the parse tree by consolidating adjacent strings
```
# Flask
```python
from flask import Flask, render_template
app = Flask(__name__, template_folder="path_to_folder") # create app
# template folder contains html pages
@app.route("/") # define URLs
def index():
return render_template("index.html") # parse HTML page and return it
if __name__ == "__main__":
# run server if server is single file
app.run(debug=True, host="0.0.0.0")
```
`@app.route("/page/")` enables to access the page with `url/page` and `url/page/`. The same is possible using `app.add_url_rule("/", "page", function)`.
## Variable Rules
You can add variable sections to a URL by marking sections with `<variable_name>`.
Your function then receives the `<variable_name>` as a keyword argument.
Optionally, you can use a converter to specify the type of the argument like `<converter:variable_name>`.
Converter Type | Accepts
---------------|------------------------------
`string` | any text without a slash (default option)
`int` | positive integers
`float` | positive floating point values
`path` | strings with slashes
`uuid` | UUID strings
```python
@app.route("/user/<string:username>") # hanle URL at runtime
def profile(username):
return f"{escape(username)}'s profile'"
```
## Redirection
`url_for(endpoint, **values)` is used to redirect, passing keyword arguments. It can be used in combination with `@app.route("/<value>")` to accept the passed arguments.
```py
from flask import Flask, redirect, url_for
@app.route("/url")
def func():
return redirect(url_for("html_file/function")) # redirect to other page
```
## Jinja Template Rendering (Parsing Python Code in HTML, CSS)
* `{% ... %}` for **Statements**
* `{{ ... }}` for **Expressions** to print to the template output
* `{# ... #}` for **Comments** not included in the template output
* `# ... ##` for **Line Statements**
Use `{% <statement> %}` to put a line of Python-like template code inside the HTML.
Use `{% end<statement> %}` to end a block of code.
In `page.html`:
```html
<html>
{% for item in content %}
<p>{{item}}</p>
{% endfor %}
</html>
```
In `file.py`:
```py
@app.route("/page/)
def func():
return render_template("page.html", content=["A", "B", "C"])
```
### Hyperlinks
In `file.py`:
```py
@app.route('/linked_page/')
def cool_form():
    return render_template('linked_page.html')
```
In `page.html`:
```html
<!doctype html>
<html>
<head>
</head>
<body>
<a href="{{ url_for('linked_page') }}">link text</a>
</body>
</html>
```
### CSS
Put `style.css` inside `/static/style`.
In `page.html`:
```html
<!doctype html>
<html>
  <head>
    <link rel="stylesheet" href="{{ url_for('static', filename='style/style.css') }}">
  </head>
  <body>
</body>
</html>
```
## Template Inheritance
In `parent_template.html`:
```html
<html>
<!-- html content -->
{% block block_name %}
{% endblock %}
<!-- html content -->
</html>
```
The content of the block will be filled by the child class.
In `child_template.html`:
```html
{% extends "parent_template.html" %}
{% block block_name %}
{{ super() }} <!-- use parent's contents -->
<!-- block content -->
{% endblock %}
```
# Flask Requests
Specify allowed HTTP methods in `file.py`:
`@app.route("/page/", methods=["allowed methods"])`
## Forms
in `file.py`:
```py
from flask import Flask, render_template
from flask.globals import request
@app.route("/login/", methods=["GET", "POST"])
def login():
if request.method == "POST": # if POST then form has been filled
data = request.form["field name"] # store the form's data in variable
# manipulate form data
req_args = request.args # request args
else: # if GET then is asking for form page
return render_template("login.html")
```
In `login.html`:
```html
<html>
    <!-- action="#" goes to the page itself but with # at the end of the URL -->
    <form action="#" method="post">
        <input type="text" name="field name">
    </form>
</html>
```
# Requests Lib
## GET REQUEST
Get or retrieve data from specified resource
```py
response = requests.get('URL') # returns response object
# PAYLOAD -> valuable information of response
response.status_code # http status code
```
The response message consists of:
- status line which includes the status code and reason message
- response header fields (e.g., Content-Type: text/html)
- empty line
- optional message body
```text
1xx -> INFORMATIONAL RESPONSE
2xx -> SUCCESS
200 OK -> request successful
3xx -> REDIRECTION
4xx -> CLIENT ERRORS
404 NOT FOUND -> resource not found
5xx -> SERVER ERRORS
```
```py
# raise exception HTTPError for error status codes
response.raise_for_status()
response.content # raw bytes of payload
response.encoding = 'utf-8' # specify encoding
response.text # string payload (serialized JSON)
response.json() # dict of payload
response.headers # response headers (dict)
```
### QUERY STRING PARAMETERS
```py
response = requests.get('URL', params={'q':'query'})
response = requests.get('URL', params=[('q', 'query')])
response = requests.get('URL', params=b'q=query')
```
### REQUEST HEADERS
```py
response = requests.get(
'URL',
params={'q': 'query'},
headers={'header': 'header_query'}
)
```
## OTHER HTTP METHODS
### DATA INPUT
```py
# requests that entity enclosed be stored as a new subordinate of the web resource identified by the URI
requests.post('URL', data={'key':'value'})
# requests that the enclosed entity be stored under the supplied URI
requests.put('URL', data={'key':'value'})
# applies partial modification
requests.patch('URL', data={'key':'value'})
# deletes specified resource
requests.delete('URL')
# ask for a response but without the response body (only headers)
requests.head('URL')
# returns supported HTTP methods of the server
requests.options('URL')
```
### SENDING JSON DATA
```py
requests.post('URL', json={'key': 'value'})
```
### INSPECTING THE REQUEST
```py
# the requests lib prepares the request before sending it
response = requests.post('URL', data={'key':'value'})
response.request.<field> # inspect a field of the sent request
```
## AUTHENTICATION
```py
requests.get('URL', auth=('username', 'password')) # use implicit HTTP Basic Authorization
# explicit HTTP Basic Authorization and other
from requests.auth import HTTPBasicAuth, HTTPDigestAuth, HTTPProxyAuth
from getpass import getpass
requests.get('URL', auth=HTTPBasicAuth('username', getpass()))
```
### PERSONALIZED AUTH
```py
from requests.auth import AuthBase
class TokenAuth(AuthBase):
    """custom authentication scheme"""

    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        """Attach API token to custom auth"""
        r.headers['X-TokenAuth'] = f'{self.token}'
        return r
requests.get('URL', auth=TokenAuth('1234abcde-token'))
```
### DISABLING SSL VERIFICATION
```py
requests.get('URL', verify=False)
```
## PERFORMANCE
### REQUEST TIMEOUT
```py
# raise Timeout exception if request times out
requests.get('URL', timeout=(connection_timeout, read_timeout))
```
### MAX RETRIES
```py
from requests.adapters import HTTPAdapter
URL_adapter = HTTPAdapter(max_retries = int)
session = requests.Session()
# use URL_adapter for all requests to URL
session.mount('URL', URL_adapter)
```
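Once mounted, the adapter applies to every request whose URL starts with that prefix; a short hedged sketch with concrete placeholder values:

```py
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
session.mount('https://', HTTPAdapter(max_retries=3))  # retry failed connections up to 3 times
response = session.get('https://example.com')          # served through the mounted adapter
```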
# Argparse Module
## Creating a parser
```py
import argparse
parser = argparse.ArgumentParser(description="description", allow_abbrev=True)
```
**Note**: All parameters should be passed as keyword arguments.
- `prog`: The name of the program (default: `sys.argv[0]`)
- `usage`: The string describing the program usage (default: generated from arguments added to parser)
- `description`: Text to display before the argument help (default: none)
- `epilog`: Text to display after the argument help (default: none)
- `parents`: A list of ArgumentParser objects whose arguments should also be included
- `formatter_class`: A class for customizing the help output
- `prefix_chars`: The set of characters that prefix optional arguments (default: -)
- `fromfile_prefix_chars`: The set of characters that prefix files from which additional arguments should be read (default: None)
- `argument_default`: The global default value for arguments (default: None)
- `conflict_handler`: The strategy for resolving conflicting optionals (usually unnecessary)
- `add_help`: Add a -h/--help option to the parser (default: True)
- `allow_abbrev`: Allows long options to be abbreviated if the abbreviation is unambiguous. (default: True)
## [Adding Arguments](https://docs.python.org/3/library/argparse.html#the-add-argument-method)
```py
ArgumentParser.add_argument("name_or_flags", nargs="...", action="...")
```
**Note**: All parameters should be passed as keyword arguments.
- `name or flags`: Either a name or a list of option strings, e.g. `foo` or `-f`, `--foo`.
- `action`: The basic type of action to be taken when this argument is encountered at the command line.
- `nargs`: The number of command-line arguments that should be consumed.
- `const`: A constant value required by some action and nargs selections.
- `default`: The value produced if the argument is absent from the command line.
- `type`: The type to which the command-line argument should be converted to.
- `choices`: A container of the allowable values for the argument.
- `required`: Whether or not the command-line option may be omitted (optionals only).
- `help`: A brief description of what the argument does.
- `metavar`: A name for the argument in usage messages.
- `dest`: The name of the attribute to be added to the object returned by `parse_args()`.
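A hedged example combining several of these parameters (file names and defaults are illustrative only):

```py
import argparse

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("filename", help="input file to process")
parser.add_argument("-n", "--count", type=int, default=1, choices=range(1, 11),
                    metavar="N", help="how many times to repeat (1-10)")
parser.add_argument("-v", "--verbose", action="store_true", dest="verbose",
                    help="enable verbose output")

args = parser.parse_args(["data.txt", "-n", "3", "--verbose"])
print(args.filename, args.count, args.verbose)  # data.txt 3 True
```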
### Actions
`store`: This just stores the argument's value. This is the default action.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo')
>>> parser.parse_args('--foo 1'.split())
Namespace(foo='1')
```
`store_const`: This stores the value specified by the const keyword argument. The `store_const` action is most commonly used with optional arguments that specify some sort of flag.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', action='store_const', const=42)
>>> parser.parse_args(['--foo'])
Namespace(foo=42)
```
`store_true` and `store_false`: These are special cases of `store_const` used for storing the values True and False respectively. In addition, they create default values of False and True respectively.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', action='store_true')
>>> parser.add_argument('--bar', action='store_false')
>>> parser.add_argument('--baz', action='store_false')
>>> parser.parse_args('--foo --bar'.split())
Namespace(foo=True, bar=False, baz=True)
```
`append`: This stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times. Example usage:
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', action='append')
>>> parser.parse_args('--foo 1 --foo 2'.split())
Namespace(foo=['1', '2'])
```
`append_const`: This stores a list, and appends the value specified by the const keyword argument to the list. (Note that the const keyword argument defaults to None.) The `append_const` action is typically useful when multiple arguments need to store constants to the same list. For example:
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--str', dest='types', action='append_const', const=str)
>>> parser.add_argument('--int', dest='types', action='append_const', const=int)
>>> parser.parse_args('--str --int'.split())
Namespace(types=[<class 'str'>, <class 'int'>])
```
`count`: This counts the number of times a keyword argument occurs. For example, this is useful for increasing verbosity levels:
**Note**: the default will be None unless explicitly set to 0.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--verbose', '-v', action='count', default=0)
>>> parser.parse_args(['-vvv'])
Namespace(verbose=3)
```
`help`: This prints a complete help message for all the options in the current parser and then exits. By default a help action is automatically added to the parser.
`version`: This expects a version= keyword argument in the add_argument() call, and prints version information and exits when invoked:
```py
>>> import argparse
>>> parser = argparse.ArgumentParser(prog='PROG')
>>> parser.add_argument('--version', action='version', version='%(prog)s 2.0')
>>> parser.parse_args(['--version'])
PROG 2.0
```
`extend`: This stores a list, and extends the list with each argument value. Example usage:
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument("--foo", action="extend", nargs="+", type=str)
>>> parser.parse_args(["--foo", "f1", "--foo", "f2", "f3", "f4"])
Namespace(foo=['f1', 'f2', 'f3', 'f4'])
```
### Nargs
ArgumentParser objects usually associate a single command-line argument with a single action to be taken.
The `nargs` keyword argument associates a different number of command-line arguments with a single action.
**Note**: If the nargs keyword argument is not provided, the number of arguments consumed is determined by the action.
`N` (an integer): N arguments from the command line will be gathered together into a list.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', nargs=2)
>>> parser.add_argument('bar', nargs=1)
>>> parser.parse_args('c --foo a b'.split())
Namespace(bar=['c'], foo=['a', 'b'])
```
**Note**: `nargs=1` produces a list of one item. This is different from the default, in which the item is produced by itself.
`?`: One argument will be consumed from the command line if possible, and produced as a single item. If no command-line argument is present, the value from default will be produced.
For optional arguments, there is an additional case: the option string is present but not followed by a command-line argument. In this case the value from const will be produced.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', nargs='?', const='c', default='d')
>>> parser.add_argument('bar', nargs='?', default='d')
>>> parser.parse_args(['XX', '--foo', 'YY'])
Namespace(bar='XX', foo='YY')
>>> parser.parse_args(['XX', '--foo'])
Namespace(bar='XX', foo='c')
>>> parser.parse_args([])
Namespace(bar='d', foo='d')
```
`*`: All command-line arguments present are gathered into a list. Note that it generally doesn't make much sense to have more than one positional argument with `nargs='*'`, but multiple optional arguments with `nargs='*'` is possible.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', nargs='*')
>>> parser.add_argument('--bar', nargs='*')
>>> parser.add_argument('baz', nargs='*')
>>> parser.parse_args('a b --foo x y --bar 1 2'.split())
Namespace(bar=['1', '2'], baz=['a', 'b'], foo=['x', 'y'])
```
`+`: All command-line args present are gathered into a list. Additionally, an error message will be generated if there wasn't at least one command-line argument present.
```py
>>> parser = argparse.ArgumentParser(prog='PROG')
>>> parser.add_argument('foo', nargs='+')
>>> parser.parse_args(['a', 'b'])
Namespace(foo=['a', 'b'])
>>> parser.parse_args([])
usage: PROG [-h] foo [foo ...]
PROG: error: the following arguments are required: foo
```
`argparse.REMAINDER`: All the remaining command-line arguments are gathered into a list. This is commonly useful for command line utilities that dispatch to other command line utilities.
```py
>>> parser = argparse.ArgumentParser(prog='PROG')
>>> parser.add_argument('--foo')
>>> parser.add_argument('command')
>>> parser.add_argument('args', nargs=argparse.REMAINDER)
>>> print(parser.parse_args('--foo B cmd --arg1 XX ZZ'.split()))
Namespace(args=['--arg1', 'XX', 'ZZ'], command='cmd', foo='B')
```
## Parsing Arguments
```py
# Convert argument strings to objects and assign them as attributes of the namespace. Return the populated namespace.
ArgumentParser.parse_args(args=None, namespace=None)
# assign attributes to an already existing object, rather than a new Namespace object
class C:
    pass
c = C()
parser = argparse.ArgumentParser()
parser.add_argument('--foo')
parser.parse_args(args=['--foo', 'BAR'], namespace=c)
c.foo # BAR
# return a dict instead of a Namespace
args = parser.parse_args(['--foo', 'BAR'])
vars(args) # {'foo': 'BAR'}
```
# Collections Module
```py
# COUNTER()
# dict subclass for counting hashable objects
from collections import Counter

Counter(sequence)  # -> Counter object
# {item: number of appearances in sequence, ...}

var = Counter(sequence)
var.most_common(n)  # list of the n most common elements
sum(var.values())  # total of all counts
var.clear()  # reset all counts
list(var)  # list unique elements
set(var)  # convert to a set
dict(var)  # convert to a regular dictionary
var.items()  # convert to a list of (element, count) pairs
Counter(dict(list_of_pairs))  # convert from a list of (element, count) pairs
var.most_common()[:-n-1:-1]  # n least common elements
var += Counter()  # remove zero and negative counts

# DEFAULTDICT()
# dict-like object that takes a default type as its first argument
# defaultdict will never raise a KeyError:
# missing keys return a default value (built by default_factory)
from collections import defaultdict

var = defaultdict(default_factory)
var.popitem()  # remove and return an item
var.popitem(last=True)  # remove and return the last item (OrderedDict; last=False pops the first)

# ORDEREDDICT()
# dict subclass that "remembers" the order in which items are inserted
# (plain dicts used to have arbitrary ordering)
var = OrderedDict()
# OrderedDicts with the same items but a different order are considered different

# USERDICT()
# pure Python implementation of a mapping that works like a regular dictionary.
# Designed for subclassing
UserDict.data  # attribute holding the contents of the UserDict

# NAMEDTUPLE()
# every namedtuple is represented by its own class
from collections import namedtuple

ClassName = namedtuple("ClassName", "space separated field names")
var = ClassName(parameters)
var.attribute  # access attributes
var[index]  # access attributes
var._fields  # list of attribute names
var = ClassName._make(iterable)  # convert an iterable into a namedtuple
var._asdict()  # return an OrderedDict built from the namedtuple

# DEQUE()
# double ended queue (pronounced "deck")
# list modifiable from both "ends"
from collections import deque

var = deque(iterable, maxlen=num)  # -> deque object
var.append(item)  # add item to the right end
var.appendleft(item)  # add item to the left end
var.clear()  # remove all elements
var.extend(iterable)  # append iterable to the right end
var.extendleft(iterable)  # append iterable to the left end
var.insert(index, item)  # insert at position index
var.index(item, start, stop)  # return the position of item
var.count(item)
var.pop()
var.popleft()
var.remove(value)
var.reverse()  # reverse the order of the elements
var.rotate(n)  # shift the elements by n steps (right if n > 0, left if n < 0)
```
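A short usage sketch tying a few of these together (all values are illustrative):

```py
from collections import Counter, defaultdict, namedtuple, deque

print(Counter("abracadabra").most_common(2))  # [('a', 5), ('b', 2)]

groups = defaultdict(list)
groups["vowels"].append("a")  # missing key is created automatically as []

Point = namedtuple("Point", "x y")
p = Point(1, 2)
print(p.x + p.y)  # 3

d = deque([1, 2, 3], maxlen=3)
d.append(4)  # oldest element (1) is dropped because maxlen=3
print(d)  # deque([2, 3, 4], maxlen=3)
```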
# CSV Module Cheat Sheet
```python
# iterate over the lines of csvfile
.reader(csvfile, dialect, **fmtparams)  # --> reader object

# READER METHODS
.__next__()  # return the next row of the iterable object as a list or a dictionary

# READER ATTRIBUTES
dialect  # read-only description of the dialect in use
line_num  # number of lines read since the start of the iterator
fieldnames

# convert data into delimited strings
# csvfile must support .write()
# None is converted to the empty string (simplifies dumping SQL NULL)
.writer(csvfile, dialect, **fmtparams)  # --> writer object

# WRITER METHODS
# row must be an iterable of strings or numbers, or a dictionary
.writerow(row)  # write row formatted according to the current dialect
.writerows(rows)  # write every element of rows formatted according to the current dialect. rows is an iterable of row

# CSV MODULE FUNCTIONS
# associate dialect with name (name must be a string)
.register_dialect(name, dialect, **fmtparams)

# delete the dialect associated with name
.unregister_dialect()

# return the dialect associated with name
.get_dialect(name)

# list of dialects associated with name
.list_dialect(name)

# return (if called without arguments) or set the csv field size limit
.field_size_limit(new_limit)

'''
csvfile -- iterable object returning a string on each __next__() call
    if csvfile is a file it must be opened with newline='' (universal newline)
dialect -- specifies the csv dialect (Excel, ...) (OPTIONAL)
fmtparams -- override formatting parameters (OPTIONAL) https://docs.python.org/3/library/csv.html#csv-fmt-params
'''

# object operating like a reader but mapping the info of each row into an OrderedDict
# whose keys are optional and passed through fieldnames
class csv.DictReader(f, fieldnames=None, restkey=None, restval=None, dialect, *args, **kwargs)
'''
f -- file to read
fieldnames -- sequence defining the names of the csv fields. if omitted the first line of f is used
restval, restkey -- if len(row) > fieldnames the extra data is stored in restval and restkey
additional parameters are passed to the underlying reader instance
'''

class csv.DictWriter(f, fieldnames, restval='', extrasaction, dialect, *args, **kwargs)
'''
f -- file to write
fieldnames -- sequence defining the names of the csv fields. (REQUIRED)
restval -- if len(row) > fieldnames the extra data is stored in restval and restkey
extrasaction -- if the dict passed to writerow() contains a key not present in fieldnames,
    extrasaction decides what to do (raise causes ValueError, ignore ignores the extra keys)
additional parameters are passed to the underlying writer instance
'''

# DICTWRITER METHODS
.writeheader()  # write a header row of field names as specified by fieldnames

# class used to infer the format of the CSV
class csv.Sniffer
.sniff(sample, delimiters=None)  # analyse the sample and return a Dialect class. delimiters is a sequence of possible field delimiters
.has_header(sample)  # --> bool, True if the first row is a series of column headers

# CONSTANTS
csv.QUOTE_ALL  # instruct the writer to quote (" ") all fields
csv.QUOTE_MINIMAL  # instruct the writer to quote only fields containing special characters such as delimiter, quotechar ...
csv.QUOTE_NONNUMERIC  # instruct the writer to quote all non-numeric fields
csv.QUOTE_NONE  # instruct the writer to never quote fields
```
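A minimal round trip with DictWriter and DictReader (the file name and fields are placeholders):

```python
import csv

rows = [{"name": "Ada", "age": 36}, {"name": "Alan", "age": 41}]

with open("people.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "age"])
    writer.writeheader()
    writer.writerows(rows)

with open("people.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["name"], row["age"])  # values are read back as strings
```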
# Ftplib Module Cheat Sheet
## FTP CLASSES
```py
# return an FTP class instance
ftplib.FTP(host="", user="", passwd="", acct="")
# if HOST is given, connect(host) is executed
# if USER is given, login(user, passwd, acct) is executed

# FTP subclass with TLS support
ftplib.FTP_TLS(host="", user="", passwd="", acct="")
```
## EXCEPTIONS
```py
ftplib.error_reply # unexpected error from server
ftplib.error_temp # temporary error (response codes 400-499)
ftplib.error_perm # permanent error (response codes 500-599)
ftplib.error_proto # error not in ftp specs
ftplib.all_errors # tuple of all exceptions
```
## FTP OBJECTS
```py
# methods operating on text files end in -lines
# methods operating on binary files end in -binary

# CONNECTION
FTP.connect(host="", port=0)  # used once per instance
# DON'T CALL if host was supplied at instance creation

FTP.getwelcome()  # return the welcome message

FTP.login(user='anonymous', passwd='', acct='')
# called once per instance after the connection is established
# DEFAULT PASSWORD: anonymous@
# DON'T CALL if user was supplied at instance creation

FTP.sendcmd(cmd)  # send command string and return the response
FTP.voidcmd(cmd)  # send command string and return nothing if successful

# FILE TRANSFER
FTP.abort()  # abort a file transfer in progress (can fail)

FTP.transfercmd(cmd, rest=None)  # return a socket for the connection
# CMD active mode: send EPRT or PORT command and CMD and accept the connection
# CMD passive mode: send EPSV or PASV and start the transfer command

FTP.retrbinary(cmd, callback, blocksize=8192, rest=None)  # retrieve a file in binary mode
# CMD: appropriate RETR command ('RETR filename')
# CALLBACK: func called on every block of data received

FTP.retrlines(cmd, callback=None)
# retrieve a file or directory listing in ASCII transfer mode
# CMD: appropriate RETR, LIST (list and info of files), NLST (list of file names)
# DEFAULT CALLBACK: sys.stdout

FTP.set_pasv(value)  # set passive mode if value is true, otherwise disable it
# passive mode is on by default

FTP.storbinary(cmd, fp, blocksize=8192, callback=None, rest=None)  # store a file in binary mode
# CMD: appropriate STOR command ('STOR filename')
# FP: {file object in binary mode} read until EOF in blocks of blocksize
# CALLBACK: func called on each block after sending

FTP.storlines(cmd, fp, callback=None)  # store a file in ASCII transfer mode
# CMD: appropriate STOR command ('STOR filename')
# FP: {file object} read until EOF
# CALLBACK: func called on each block after sending
```
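A hedged sketch of a typical session (host, credentials, and file names are placeholders):

```py
from ftplib import FTP

ftp = FTP("ftp.example.com")       # connect
ftp.login("username", "password")  # authenticate
ftp.retrlines("LIST")              # print a directory listing to stdout

with open("remote.txt", "wb") as f:
    ftp.retrbinary("RETR remote.txt", f.write)  # download a file in binary mode

ftp.quit()
```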
# Functools Module Cheat Sheet
Higher-order functions and operations on callable objects.
```py
from functools import partial, partialmethod, reduce, lru_cache, singledispatch, wraps, total_ordering

# create a new function with the arguments (*args, **kwargs) partially fixed
new_func = partial(func, *args, **kwargs)

# create a new method with the arguments (*args, **kwargs) partially fixed
new_method = partialmethod(func, *args, **kwargs)

# repeatedly apply a two-argument function to the iterable to produce a single output
# the function is first applied to the first two elements
# returns the initializer if the iterable is empty (behaviour depends on the function)
reduce(function, iterable, initializer)  # -> single output

# decorator that caches the maxsize most recent calls
# uses a dict for storage, so positional and keyword arguments must be hashable
# if maxsize=None the cache grows without bound and the LRU feature is disabled
# LRU --> Least Recently Used: the least used entries are evicted from the cache
# for efficiency use maxsize=2**n
@lru_cache(maxsize=128, typed=False)

# decorator that turns the function into a single-dispatch generic function
# generic function --> a single function implements the same operation for different types (ALTERNATIVE TO METHOD OVERLOADING)
# single dispatch --> form of generic function where the implementation is chosen based on a single argument
# NOTE: single dispatch is decided by the type of the FIRST argument
@singledispatch  # creates decorated_func.register to group functions into a generic function
@decorated_func.register()  # choose the implementation based on the type annotation
@decorated_func.register(type)  # choose the implementation based on the type argument (use when no type annotation is present)
# the name of decorated_func is irrelevant
# using register(type) on an ABC is useful to support more generic and future classes

# decorator that updates the wrapper function to look like the wrapped function
# the decorating function keeps the arguments and docstring of the decorated function
def decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):  # wrapper function inside the decorator
        return func(*args, **kwargs)
    return wrapper

# creates the missing comparison operators if the class implements at least one of them and __eq__()
@total_ordering
```
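A small, hedged demonstration of lru_cache and singledispatch (function names are illustrative):

```py
from functools import lru_cache, singledispatch

@lru_cache(maxsize=None)
def fib(n):
    # repeated calls with the same n are answered from the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

@singledispatch
def describe(arg):
    return f"object: {arg!r}"

@describe.register(int)
def _(arg):
    return f"integer: {arg}"

@describe.register(list)
def _(arg):
    return f"list of {len(arg)} items"

print(fib(30))           # 832040
print(describe(42))      # integer: 42
print(describe([1, 2]))  # list of 2 items
```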
# Itertools Module
```py
# iterator returning cumulative sums; if func is given it is applied instead of addition
# accumulate([1,2,3,4,5]) -> 1, 3 (1+2), 6 (1+2+3), 10 (1+2+3+4), 15 (1+2+3+4+5)
# accumulate(iter, func) -> iter[0], func(iter[0], iter[1]), func(prev_result, iter[2]), ...
accumulate(iterable, func)

# iterator returning elements from the first iterable,
# then proceeding to the next one until all iterables are exhausted
chain(*iterables)

# (from collections, not itertools)
# A ChainMap groups multiple dicts or other mappings together to create a single, updateable view.
# Lookups search the underlying mappings successively until a key is found.
# A ChainMap incorporates the underlying mappings by reference,
# so if one of the underlying mappings gets updated, those changes are reflected in the ChainMap.
ChainMap(*maps)

# chain the elements of a single iterable even if it contains sequences
chain.from_iterable(iterable)

# return length-r subsequences of the iterable
# elements are treated as unique based on their position, not their value
combinations(iterable, r)

# return length-r subsequences of the iterable, allowing elements to be repeated
combinations_with_replacement(iterable, r)

# iterator filtering the elements of data, returning only those that have
# a corresponding element in selectors that evaluates to true
compress(data, selectors)

# iterator returning evenly spaced values starting from start
#! WARNING: infinite numeric sequence
count(start, step)

# iterator returning values in an infinite (cycling) sequence
cycle(iterable)

# iterator dropping elements of the iterable as long as the predicate is true
dropwhile(predicate, iterable)

# iterator returning values for which the predicate is false
filterfalse(predicate, iterable)

# iterator returning (key, group) tuples
# key is the grouping criterion
# group is a generator yielding the members of the group
groupby(iterable, key=None)

# iterator returning a slice of the iterable
islice(iterable, stop)
islice(iterable, start, stop, step)

# return all length-r permutations of the iterable
permutations(iterable, r=None)

# cartesian product of the iterables
# loops over the iterables in input order
# [product('ABCD', 'xy') -> Ax Ay Bx By Cx Cy Dx Dy]
# [product('ABCD', repeat=2) -> AA AB AC AD BA BB BC BD CA CB CC CD DA DB DC DD]
product(*iterables, repeat=1)

# return an object an infinite number of times if times is not specified
repeat(object, times)

# iterator computing func(*item) for each item of the iterable
# used when the iterable is a pre-zipped sequence (sequence of tuples grouping the arguments)
starmap(func, iterable)

# iterator returning values from the iterable as long as the predicate is true
takewhile(predicate, iterable)

# return n independent iterators from a single iterable
tee(iterable, n=2)

# iterator aggregating elements from each of the iterables
# if the iterables have different lengths, missing values are filled with fillvalue
zip_longest(*iterables, fillvalue=None)
```
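A brief hedged example of groupby and product (the data is illustrative; note that groupby expects the input to be sorted by the grouping key):

```py
from itertools import groupby, product

words = ["apple", "avocado", "banana", "blueberry", "cherry"]
for letter, group in groupby(words, key=lambda w: w[0]):
    print(letter, list(group))
# a ['apple', 'avocado']
# b ['banana', 'blueberry']
# c ['cherry']

print(list(product("AB", repeat=2)))
# [('A', 'A'), ('A', 'B'), ('B', 'A'), ('B', 'B')]
```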
# JSON Module Cheat Sheet
## JSON Format
JSON (JavaScript Object Notation) is a lightweight data-interchange format.
It is easy for humans to read and write.
It is easy for machines to parse and generate.
JSON is built on two structures:
- A collection of name/value pairs.
- An ordered list of values.
An OBJECT is an unordered set of name/value pairs.
An object begins with `{` (left brace) and ends with `}` (right brace).
Each name is followed by `:` (colon) and the name/value pairs are separated by `,` (comma).
An ARRAY is an ordered collection of values.
An array begins with `[` (left bracket) and ends with `]` (right bracket).
Values are separated by `,` (comma).
A VALUE can be a string in double quotes, or a number,
or true or false or null, or an object or an array.
These structures can be nested.
A STRING is a sequence of zero or more Unicode characters,
wrapped in double quotes, using backslash escapes.
A CHARACTER is represented as a single character string.
A STRING is very much like a C or Java string.
A NUMBER is very much like a C or Java number,
except that the octal and hexadecimal formats are not used.
WHITESPACE can be inserted between any pair of tokens.
## Usage
```python
# serialize obj as JSON formatted stream to fp
json.dump(obj, fp, cls=None, indent=None, separators=None, sort_keys=False)
# CLS: {custom JSONEncoder} -- specifies custom encoder to be used
# INDENT: {int > 0, string} -- array elements, object members pretty-printed with indent level
# SEPARATORS: {tuple} -- (item_separator, key_separator)
# [default: (', ', ': ') if indent=None, (',', ':') otherwise],
# specify (',', ':') to eliminate whitespace
# SORT_KEYS: {bool} -- if True dict sorted by key
# serialize obj as JSON formatted string
json.dumps(obj, cls=None, indent=None, separators=None, sort_keys=False)
# CLS: {custom JSONEncoder} -- specifies custom encoder to be used
# INDENT: {int > 0, string} -- array elements, object members pretty-printed with indent level
# SEPARATORS: {tuple} -- (item_separator, key_separator)
# [default: (', ', ': ') if indent=None, (',', ':') otherwise],
# specify (',', ':') to eliminate whitespace
# SORT_KEYS: {bool} -- if True dict sorted by key
# deserialize fp to python object
json.load(fp, cls=None)
# CLS: {custom JSONEncoder} -- specifies custom decoder to be used
# deserialize s (string, bytes or bytearray containing JSON doc) to python object
json.loads(s, cls=None)
# CLS: {custom JSONEncoder} -- specifies custom decoder to be used
```
## Default Decoder (`json.JSONDecoder()`)
Conversions (JSON -> Python):
- object -> dict
- array -> list
- string -> str
- number (int) -> int
- number (real) -> float
- true -> True
- false -> False
- null -> None
## Default Encoder (`json.JSONEncoder()`)
Conversions (Python -> JSON):
- dict -> object
- list, tuple -> array
- str -> string
- int, float, Enums -> number
- True -> true
- False -> false
- None -> null
## Extending JSONEncoder (Example)
```python
import json
class ComplexEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, complex):
            return [obj.real, obj.imag]
        # Let the base class default method raise the TypeError
        return json.JSONEncoder.default(self, obj)
```
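Passing the subclass through the `cls` parameter makes `dumps` use it for otherwise unserializable objects (reusing the ComplexEncoder defined above):

```python
import json

print(json.dumps(2 + 1j, cls=ComplexEncoder))  # [2.0, 1.0]
```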
## Retrieving Data from a JSON dict
```python
data = json.loads(json_string)
data["key"] # retrieve the value associated with the key
data["outer key"]["nested key"] # nested key value retrieval
```
# Logging Module Cheat Sheet
## Configuration
```python
# basic configuration for the logging system
logging.basicConfig(filename="relpath", level=logging.LOG_LEVEL, format=f"message format", **kwargs)
# DATEFMT: Use the specified date/time format, as accepted by time.strftime().
# create a logger with a name (useful for having multiple loggers)
logger = logging.getLogger(name="logger name")
logger.level # LOG_LEVEL for this logger
# disable all logging calls of severity level and below
# alternative to basicConfig(level=logging.LOG_LEVEL)
logging.disable(level=LOG_LEVEL)
```
### Format (`basicConfig(format="")`)
| Attribute name | Format | Description |
|----------------|-------------------|-------------------------------------------------------------------------------------------|
| asctime | `%(asctime)s` | Human-readable time when the LogRecord was created. Modified by `basicConfig(datefmt="")` |
| created | `%(created)f` | Time when the LogRecord was created (as returned by `time.time()`). |
| filename | `%(filename)s` | Filename portion of pathname. |
| funcName | `%(funcName)s` | Name of function containing the logging call. |
| levelname | `%(levelname)s` | Text logging level for the message. |
| levelno | `%(levelno)s` | Numeric logging level for the message. |
| lineno | `%(lineno)d` | Source line number where the logging call was issued (if available). |
| message | `%(message)s` | The logged message, computed as `msg % args`. |
| module | `%(module)s` | Module (name portion of filename). |
| msecs | `%(msecs)d` | Millisecond portion of the time when the LogRecord was created. |
| name | `%(name)s` | Name of the logger used to log the call. |
| pathname | `%(pathname)s` | Full pathname of the source file where the logging call was issued (if available). |
| process | `%(process)d` | Process ID (if available). |
| processName | `%(processName)s` | Process name (if available). |
| thread | `%(thread)d` | Thread ID (if available). |
| threadName | `%(threadName)s` | Thread name (if available). |
### Datefmt (`basicConfig(datefmt="")`)
| Directive | Meaning |
|-----------|------------------------------------------------------------------------------------------------------------------------------|
| `%a`      | Locale's abbreviated weekday name.                                                                                             |
| `%A`      | Locale's full weekday name.                                                                                                    |
| `%b`      | Locale's abbreviated month name.                                                                                               |
| `%B`      | Locale's full month name.                                                                                                      |
| `%c`      | Locale's appropriate date and time representation.                                                                             |
| `%d` | Day of the month as a decimal number [01,31]. |
| `%H` | Hour (24-hour clock) as a decimal number [00,23]. |
| `%I` | Hour (12-hour clock) as a decimal number [01,12]. |
| `%j` | Day of the year as a decimal number [001,366]. |
| `%m` | Month as a decimal number [01,12]. |
| `%M` | Minute as a decimal number [00,59]. |
| `%p`      | Locale's equivalent of either AM or PM.                                                                                        |
| `%S` | Second as a decimal number [00,61]. |
| `%U` | Week number of the year (Sunday as the first day of the week) as a decimal number [00,53]. |
| `%w` | Weekday as a decimal number [0(Sunday),6]. |
| `%W` | Week number of the year (Monday as the first day of the week) as a decimal number [00,53]. |
| `%x`      | Locale's appropriate date representation.                                                                                      |
| `%X`      | Locale's appropriate time representation.                                                                                      |
| `%y` | Year without century as a decimal number [00,99]. |
| `%Y` | Year with century as a decimal number. |
| `%z` | Time zone offset indicating a positive or negative time difference from UTC/GMT of the form +HHMM or -HHMM [-23:59, +23:59]. |
| `%Z` | Time zone name (no characters if no time zone exists). |
| `%%` | A literal '%' character. |
## Logs
Log Levels (Low To High):
- NOTSET -- 0
- DEBUG -- 10
- INFO -- 20
- WARNING -- 30
- ERROR -- 40
- CRITICAL -- 50
```python
logging.debug(msg) # Logs a message with level DEBUG on the root logger
logging.info(msg) # Logs a message with level INFO on the root logger
logging.warning(msg) # Logs a message with level WARNING on the root logger
logging.error(msg) # Logs a message with level ERROR on the root logger
logging.critical(msg) # Logs a message with level CRITICAL on the root logger
```
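A hedged configuration sketch (the file name and format string are placeholders):

```python
import logging

logging.basicConfig(
    filename="app.log",
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)

logger = logging.getLogger("my_app")
logger.info("application started")
logger.warning("disk usage at %s%%", 91)  # lazy %-style interpolation
```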
# Operator Module Cheat Sheet
```py
# COMPARISON OPERATORS
__lt__(a, b), lt(a, b)  # a < b
__le__(a, b), le(a, b)  # a <= b
__eq__(a, b), eq(a, b)  # a == b
__ne__(a, b), ne(a, b)  # a != b
__ge__(a, b), ge(a, b)  # a >= b
__gt__(a, b), gt(a, b)  # a > b

# LOGICAL OPERATORS
not_(obj)  # not obj
truth(obj)  # True or False based on the truth value of the object (like the bool constructor)
is_(a, b)  # return a is b
is_not(a, b)  # return a is not b

# BINARY AND MATHEMATICAL OPERATORS
__abs__(obj), abs(obj)  # absolute value of obj
__add__(a, b), add(a, b)  # a + b
__sub__(a, b), sub(a, b)  # a - b
__mul__(a, b), mul(a, b)  # a * b
__pow__(a, b), pow(a, b)  # a ** b
__truediv__(a, b), truediv(a, b)  # a / b
__floordiv__(a, b), floordiv(a, b)  # a // b
__mod__(a, b), mod(a, b)  # a % b
__neg__(obj), neg(obj)  # -obj
__index__(a), index(a)  # convert a to an integer
__and__(a, b), and_(a, b)  # bitwise a and b (a & b)
__or__(a, b), or_(a, b)  # bitwise a or b (a | b)
__xor__(a, b), xor(a, b)  # bitwise a xor b (a ^ b)
__inv__(obj), inv(obj), __invert__(obj), invert(obj)  # bitwise inverse of obj (~obj)
__lshift__(a, b), lshift(a, b)  # a shifted left by b (a << b)
__concat__(a, b), concat(a, b)  # a + b for sequences (CONCATENATION)
__contains__(a, b), contains(a, b)  # return b in a
countOf(a, b)  # number of occurrences of b in a
indexOf(a, b)  # return the index of the first occurrence of b in a
__delitem__(a, b), delitem(a, b)  # remove the value of a at index b (del a[b])
__getitem__(a, b), getitem(a, b)  # return the value of a at index b (a[b])
__setitem__(a, b, c), setitem(a, b, c)  # set the value of a at index b (a[b] = c)

# ATTRGETTER
# return a callable object that fetches attribute attr from its operand
func = attrgetter(*attr)
func(var)  # returns var.attr

# ITEMGETTER
# return a callable object that fetches item from its operand
# implemented through __getitem__
func = itemgetter(*item)
func(var)  # returns var[item]

# METHODCALLER
# return a callable that calls the named method on its operand
var = methodcaller(method, args)
var(obj)  # return obj.method()
```
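itemgetter and methodcaller are often used as sort keys; a short hedged example:

```py
from operator import itemgetter, methodcaller

pairs = [("b", 2), ("a", 3), ("c", 1)]
print(sorted(pairs, key=itemgetter(1)))  # sort by the second element: [('c', 1), ('b', 2), ('a', 3)]

words = ["Banana", "apple", "Cherry"]
print(sorted(words, key=methodcaller("lower")))  # case-insensitive sort: ['apple', 'Banana', 'Cherry']
```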
# OS Cheat Sheet
```python
os.curdir  # string identifying the current directory (".")
os.pardir  # string identifying the parent directory ("..")
os.sep  # path separator character ("\\" for WIN, "/" for POSIX)
os.extsep  # file extension separator character (".")
os.pathsep  # separator character in the PATH environment variable (";" for WIN, ":" for POSIX)
os.linesep  # string used to separate lines ("\r\n" for WIN, "\n" for POSIX)

os.system("command")  # execute command in a shell

os.remove(path)  # delete the file pointed to by path
os.rmdir(path)  # delete the directory pointed to by path
os.listdir(path)  # return a list with the names of the directory's contents

os.path.exists(path)  # True if path refers to an existing item
os.path.split(path)  # split path into (head, tail), head + tail == path
os.path.splitdrive(path)  # split path into (drive, tail), drive + tail == path
os.path.splitext(path)  # split path into (root, ext), root + ext == path
os.path.dirname(path)  # return the directory name (path head)
os.path.getatime(path)  # return the last access time
os.path.getmtime(path)  # return the last modification time
os.path.getsize(path)  # return the size in bytes (OSError if the file is inaccessible or does not exist)
os.path.isfile(path)  # True if path is an existing file
os.path.isdir(path)  # True if path is an existing directory
os.path.join(path, *paths)  # join multiple paths
os.path.realpath(path)  # return the canonical path of the specified filename, eliminating symbolic links
os.path.relpath(path, start=os.curdir)  # return a relative path (start is optional, default os.curdir)
os.path.abspath(path)  # return a normalized absolutized version of the pathname path
# collapses redundant separators and up-level references so that A//B, A/B/, A/./B and A/foo/../B all become A/B

os.walk(top)
# Generate the file names in a directory tree by walking the tree either top-down or bottom-up.
# For each directory in the tree rooted at directory top (top included), it yields a 3-tuple (dirpath, dirnames, filenames).
# dirpath is a string, the path to the directory.
# dirnames is a list of the names of the subdirectories in dirpath (excluding '.' and '..').
# filenames is a list of the names of the non-directory files in dirpath.
```
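A hedged os.walk sketch that prints every file under a directory (the root path is a placeholder):

```python
import os

for dirpath, dirnames, filenames in os.walk("."):
    for name in filenames:
        print(os.path.join(dirpath, name))  # full path of each file in the tree
```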
## Folder Operations
```python
os.getcwd() # Return a string representing the current working directory
os.chdir(path) # change the current working directory to path
os.mkdir(path, mode=0o777) # Create a directory named path with numeric mode MODE.
os.makedirs(name, mode=0o777) # Recursive directory creation
```
## Exceptions
```python
IsADirectoryError # file operation requested on directory
NotADirectoryError # directory operation requested on file
```
# Regex Module Cheat Sheet
Compile a regular expression pattern into a regular expression object, which can be used for matching.
```py
regex_obj = re.compile(r"")  # a raw string does not treat backslashes as escape characters

# search for a match of REGEX_OBJ anywhere in the string
match_obj = regex_obj.search(string)  # -> Match object

# search for a match of REGEX_OBJ at the beginning of the string
# if there is no match the match object is None
match_obj = regex_obj.match(string)  # -> Match object

# the whole string must match REGEX_OBJ
# if there is no match the match object is None
match_obj = regex_obj.fullmatch(string)  # -> Match object

# return all the substrings matching REGEX_OBJ in a list.
# if there are none the list is empty
# if the pattern contains two or more groups a list of tuples is returned;
# only the contents of the groups are returned.
regex_obj.findall(string)

# split the string on matches of REGEX_OBJ; the matched characters are not included in the list
regex_obj.split(string)

# replace every substring matching REGEX_OBJ with substring
regex_obj.sub(substring, string)
```
## Match Objects
The Match object holds the match information: the matched string, the REGEX used and the position of the match; a failed search returns None.
```python
match_obj.group([number]) # return the matched string; [number] selects a subgroup of the REGEX
match_obj.groups() # Return a tuple containing all the subgroups of the match
match_obj.start() # start position of the match
match_obj.end() # end position of the match
```
## Regex Configuration
```python
re.compile(r"", re.OPTION_1 | re.OPTION_2 | ...) # specify options
# Allows more readable regexes by permitting whitespace to visually separate logical sections of the pattern and by allowing comments.
re.VERBOSE
# Make the '.' special character match any character at all, including a newline. Corresponds to the inline flag (?s).
re.DOTALL
re.IGNORECASE
re.MULTILINE
```
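A small hedged example combining compile, search, and groups (the pattern and string are illustrative):

```python
import re

phone = re.compile(r"(\d{3})-(\d{4})")
m = phone.search("call 555-0199 today")
if m:
    print(m.group())           # 555-0199
    print(m.groups())          # ('555', '0199')
    print(m.start(), m.end())  # 5 13
```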
# Shelve Module Cheat Sheet
```python
import shelve
# open a persistent dictionary, returns a shelf object
shelf = shelve.open("filename", flag="c", writeback=False)
```
FLAG:
- r = read
- w = read & write
- c = read, write & create (if it doesn't exist)
- n = always create new
If `writeback` is `True` all entries accessed are also cached in memory, and written back on `sync()` and `close()`.
This makes it handier to mutate mutable entries in the persistent dictionary, but, if many entries are accessed, it can consume vast amounts of memory for the cache, and it can make the close operation very slow since all accessed entries are written back.
```python
# key is a string, data is an arbitrary object
shelf[key] = data # store data at key
data = shelf[key] # retrieve a COPY of data at key
shelf.keys() # list of all existing keys (slow!)
shelf.values() # list of all existing values
del shelf[key] # delete data stored at key
shelf.close() # Synchronize and close the persistent dict object.
# Operations on a closed shelf will fail with a ValueError.
```
# Shutil Module Cheat Sheet
High-level file operations
```python
# copy file src to file dst in the most efficient way, return dst
shutil.copyfile(src, dst)
# dst MUST be complete target name
# if dst already exists it will be overwritten
# copy file src to directory dst, return path to new file
shutil.copy(src, dst)
# Recursively copy entire dir-tree rooted at src to directory named dst
# return the destination directory
shutil.copytree(src, dst, dirs_exist_ok=False)
# DIRS_EXIST_OK: {bool} -- dictates whether to raise an exception in case dst
# or any missing parent directory already exists
# delete an entire directory tree
shutil.rmtree(path, ignore_errors=False, onerror=None)
# IGNORE_ERRORS: {bool} -- if true, errors (failed removals) will be ignored
# ONERROR: handler for removal errors (if ignore_errors is False or omitted)
# recursively move file or directory (src) to dst, return dst
shutil.move(src, dst)
# if the destination is an existing directory, then src is moved inside that directory.
# if the destination already exists but is not a directory,
# it may be overwritten depending on os.rename() semantics
# used to rename files
# change owner user and/or group of the given path
shutil.chown(path, user=None, group=None)
# user can be a system user name or a uid; the same applies to group.
# At least one argument is required
# create archive file and return its name
shutil.make_archive(base_name, format, [root_dir, base_dir])
# BASE_NAME: {string} -- name of the archive, including path, excluding extension
# FORMAT: {zip, tar, gztar, bztar, xztar} -- archive format
# ROOT_DIR: {path} -- directory that will be the root of the archive
# BASE_DIR: {path} -- directory where archiving starts
# unpack an archive
shutil.unpack_archive(filename, [extract_dir, format])
# FILENAME: full path of the archive
# EXTRACT_DIR: {path} -- directory to unpack into
# FORMAT: {zip, tar, gztar, bztar, xztar} -- archive format
# return disk usage statistics as Namedtuple w/ attributes total, used, free
shutil.disk_usage(path)
```
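A hedged archive round trip (all paths are placeholders):

```python
import shutil

archive = shutil.make_archive("backup", "zip", root_dir="project")  # creates backup.zip from ./project
shutil.unpack_archive(archive, extract_dir="restored")              # unpack into ./restored
print(shutil.disk_usage("."))                                       # usage(total=..., used=..., free=...)
```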
# SMTPlib Module Cheat Sheet
```python
import smtplib
# SMTP instance that encapsulates a SMTP connection
# If the optional host and port parameters are given, the SMTP connect() method is called with those parameters during initialization.
s = smtplib.SMTP(host="host_smtp_address", port="smtp_service_port", **kwargs)
s = smtplib.SMTP_SSL(host="host_smtp_address", port="smtp_service_port", **kwargs)
# An SMTP_SSL instance behaves exactly the same as instances of SMTP.
# SMTP_SSL should be used for situations where SSL is required from the beginning of the connection
# and using starttls() is not appropriate.
# If host is not specified, the local host is used.
# If port is zero, the standard SMTP-over-SSL port (465) is used.
SMTP.connect(host='localhost', port=0)
# Connect to a host on a given port. The defaults are to connect to the local host at the standard SMTP port (25).
# If the hostname ends with a colon (':') followed by a number, that suffix will be stripped off and the number interpreted as the port number to use.
# This method is automatically invoked by the constructor if a host is specified during instantiation.
# Returns a 2-tuple of the response code and message sent by the server in its connection response.
SMTP.verify(address) # Check the validity of an address on this server using SMTP VRFY
SMTP.login(user="full_user_mail", password="user_password") # Log-in on an SMTP server that requires authentication
smtplib.SMTPHeloError # The server didn't reply properly to the HELO greeting
smtplib.SMTPAuthenticationError # The server didn't accept the username/password combination.
smtplib.SMTPNotSupportedError # The AUTH command is not supported by the server.
smtplib.SMTPException # No suitable authentication method was found.
SMTP.starttls(keyfile=None, certfile=None, **kwargs) # Put the SMTP connection in TLS (Transport Layer Security) mode. All SMTP commands that follow will be encrypted
# from_addr & to_addrs are used to construct the message envelope used by the transport agents. sendmail does not modify the message headers in any way.
# msg may be a string containing characters in the ASCII range, or a byte string. A string is encoded to bytes using the ascii codec, and lone \r and \n characters are converted to \r\n characters. A byte string is not modified.
SMTP.sendmail(from_addr, to_addrs, msg, **kwargs)
# from_addr: {string} -- RFC 822 from-address string
# ro_addrs: {string, list of strings} -- list of RFC 822 to-address strings
# msg: {string} -- message string
# This is a convenience method for calling sendmail() with the message represented by an email.message.Message object.
SMTP.send_message(msg, from_addr=None, to_addrs=None, **kwargs)
# from_addr: {string} -- RFC 822 from-address string
# ro_addrs: {string, list of strings} -- list of RFC 822 to-address strings
# msg: {email.message.Message object} -- message string
SMTP.quit() # Terminate the SMTP session and close the connection. Return the result of the SMTP QUIT command
```
In general, use the email package's features to construct an `email.message.EmailMessage` message to send via `send_message()`.
EMAIL EXAMPLES --> https://docs.python.org/3/library/email.examples.html#email-examples
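A hedged sketch of sending a message over SSL (server, port, addresses, and credentials are placeholders):

```python
import smtplib
from email.message import EmailMessage
from getpass import getpass

msg = EmailMessage()
msg["Subject"] = "Hello"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg.set_content("Message body goes here.")

with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
    server.login("sender@example.com", getpass())
    server.send_message(msg)
```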
# Socket Module CheatSheet
## Definition
A network socket is an internal endpoint for sending or receiving data within a node on a computer network.
In practice, socket usually refers to a socket in an Internet Protocol (IP) network, in particular for the **Transmission Control Protocol (TCP)**, which is a protocol for *one-to-one* connections.
In this context, sockets are assumed to be associated with a specific socket address, namely the **IP address** and a **port number** for the local node, and there is a corresponding socket address at the foreign node (other node), which itself has an associated socket, used by the foreign process. Associating a socket with a socket address is called *binding*.
## Socket Creation & Connection
```python
import socket
# socket over the internet; the socket is a stream of data
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("URL", port))  # connect to the socket
sock.close()  # close the connection
```
## Making HTTP Requests
```python
import socket
HTTP_Method = "GET hhtp://url/resource HTTP/version\n\n".encode() # set HTTP request (encoded string from UTF-8 to bytes)
socket.send(HTTP_Method) # make HTTP request
data = socket.recv(buffer_size) # recieve data from socket
decoded = data.decode() # decode data (from bytes to UTF-8)
```
# sqlite3 Module CheatSheet
## Connecting To The Database
To use the module, you must first create a Connection object that represents the database.
```python
import sqlite3
connection = sqlite3.connect("file.db")
```
Once you have a `Connection`, you can create a `Cursor` object and call its `execute()` method to perform SQL commands.
```python
cursor = connection.cursor()
cursor.execute(sql)
cursor.executemany(sql, seq_of_parameters) # Executes an SQL command against all parameter sequences or mappings found in the sequence seq_of_parameters.
cursor.close() # close the cursor now
# ProgrammingError exception will be raised if any operation is attempted with the cursor.
```
The data saved is persistent and is available in subsequent sessions.
### Query Construction
Usually your SQL operations will need to use values from Python variables.
You shouldn't assemble your query using Python's string operations because doing so is insecure: it makes your program vulnerable to an [SQL injection attack](https://en.wikipedia.org/wiki/SQL_injection)
Put `?` as a placeholder wherever you want to use a value, and then provide a _tuple of values_ as the second argument to the cursor's `execute()` method.
```python
# Never do this -- insecure!
c.execute("SELECT * FROM stocks WHERE symbol = value")
# Do this instead
t = ('RHAT',)
c.execute('SELECT * FROM stocks WHERE symbol=?', t)
print(c.fetchone())
# Larger example that inserts many records at a time
purchases = [('2006-03-28', 'BUY', 'IBM', 1000, 45.00),
('2006-04-05', 'BUY', 'MSFT', 1000, 72.00),
('2006-04-06', 'SELL', 'IBM', 500, 53.00),
]
c.executemany('INSERT INTO stocks VALUES (?,?,?,?,?)', purchases)
```
### Writing Operations to Disk
```python
cursor = connection.cursor()
cursor.execute("SQL")
connection.commit()
```
### Multiple SQL Instructions
```python
con = sqlite3.connect("file.db")
cur = con.cursor()
cur.executescript("""
QUERY_1;
QUERY_2;
...
QUERY_N;
""")
con.close()
```
### Retrieving Records
```python
# Fetches the next row of a query result set, returning a single sequence.
# Returns None when no more data is available.
cursor.fetchone()
# Fetches all (remaining) rows of a query result, returning a list.
# An empty list is returned when no rows are available.
cursor.fetchall()
# Fetches the next set of rows of a query result, returning a list.
# An empty list is returned when no more rows are available.
cursor.fetchmany(size=cursor.arraysize)
```
The number of rows to fetch per call is specified by the `size` parameter. If it is not given, the cursor's `arraysize` determines the number of rows to be fetched.
The method should try to fetch as many rows as indicated by the size parameter.
If this is not possible due to the specified number of rows not being available, fewer rows may be returned.
Note there are performance considerations involved with the size parameter.
For optimal performance, it is usually best to use the arraysize attribute.
If the size parameter is used, then it is best for it to retain the same value from one `fetchmany()` call to the next.
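A small end-to-end sketch under the usual assumptions (in-memory database, illustrative table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE stocks (date TEXT, trans TEXT, symbol TEXT, qty REAL, price REAL)")
cur.execute("INSERT INTO stocks VALUES (?, ?, ?, ?, ?)",
            ('2006-03-28', 'BUY', 'IBM', 1000, 45.00))
con.commit()

cur.execute("SELECT * FROM stocks WHERE symbol = ?", ('IBM',))
print(cur.fetchone())  # ('2006-03-28', 'BUY', 'IBM', 1000.0, 45.0)
con.close()
```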
# String Module Cheat Sheet
## TEMPLATE STRINGS
Template strings support $-based substitutions, using the following rules:
`$$` is an escape; it is replaced with a single `$`.
`$identifier` names a substitution placeholder matching a mapping key of "identifier".
By default, "identifier" is restricted to any case-insensitive ASCII alphanumeric string (including underscores) that starts with an underscore or ASCII letter.
The first non-identifier character after the $ character terminates this placeholder specification.
`${identifier}` is equivalent to `$identifier`.
It is required when valid identifier characters follow the placeholder but are not part of the placeholder.
Any other appearance of `$` in the string will result in a `ValueError` being raised.
The string module provides a Template class that implements these rules.
```python
from string import Template
# The methods of Template are:
string.Template(template) # The constructor takes a single argument which is the template string.
substitute(mapping={}, **kwargs)
# Performs the template substitution, returning a new string.
# mapping is any dictionary-like object with keys that match the placeholders in the template.
# Alternatively, you can provide keyword arguments, where the keywords are the placeholders.
# When both mapping and keyword arguments are given and there are duplicates, the keyword arguments take precedence.
safe_substitute(mapping={}, **kwargs)
# Like substitute(), except that if placeholders are missing from mapping and kwargs,
# instead of raising a KeyError exception, the original placeholder will appear in the resulting string intact.
# Also, unlike with substitute(), any other appearances of the $ will simply return $ instead of raising ValueError.
# While other exceptions may still occur, this method is called "safe" because it always tries to return a usable string instead of raising an exception.
# In another sense, safe_substitute() may be anything other than safe, since it will silently ignore malformed templates containing dangling delimiters, unmatched braces, or placeholders that are not valid Python identifiers.
```
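A brief usage sketch (the placeholder names are illustrative):
```python
from string import Template

t = Template("$who likes $what")
print(t.substitute(who="tim", what="kung pao"))   # tim likes kung pao
print(t.safe_substitute(who="tim"))               # tim likes $what (missing placeholder left intact)
# t.substitute(who="tim")                         # would raise KeyError: 'what'
```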

View file

@ -0,0 +1,64 @@
# Time & Datetime Modules Cheatsheet
## Time
```py
# epoch: elapsed time in seconds (on UNIX it starts from 01-01-1970)
import time  # UNIX time
variable = time.time()  # returns the time (in seconds) elapsed since 01-01-1970
variable = time.ctime(epochseconds)  # converts the epoch into a human-readable date string
var = time.perf_counter()  # returns the current value of a high-resolution performance counter
# execution time = end counter - start counter
```
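A small timing sketch following the formula in the comment above (the summed range is just an arbitrary workload):
```py
import time

start = time.perf_counter()
total = sum(range(1_000_000))   # arbitrary code to be timed
end = time.perf_counter()
print(f"execution time: {end - start:.6f} s")
```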
### time.strftime() format
| Format | Data |
|--------|------------------------------------------------------------------------------------------------------------|
| `%a`   | Locale's abbreviated weekday name.                                                                           |
| `%A`   | Locale's full weekday name.                                                                                  |
| `%b`   | Locale's abbreviated month name.                                                                             |
| `%B`   | Locale's full month name.                                                                                    |
| `%c`   | Locale's appropriate date and time representation.                                                           |
| `%d` | Day of the month as a decimal number `[01,31]`. |
| `%H` | Hour (24-hour clock) as a decimal number `[00,23]`. |
| `%I` | Hour (12-hour clock) as a decimal number `[01,12]`. |
| `%j` | Day of the year as a decimal number `[001,366]`. |
| `%m` | Month as a decimal number `[01,12]`. |
| `%M` | Minute as a decimal number `[00,59]`. |
| `%p`   | Locale's equivalent of either AM or PM.                                                                      |
| `%S` | Second as a decimal number `[00,61]`. |
| `%U` | Week number of the year (Sunday as the first day of the week) as a decimal number `[00,53]`. |
| `%w` | Weekday as a decimal number `[0(Sunday),6]`. |
| `%W` | Week number of the year (Monday as the first day of the week) as a decimal number `[00,53]`. |
| `%x`   | Locale's appropriate date representation.                                                                    |
| `%X`   | Locale's appropriate time representation.                                                                    |
| `%y` | Year without century as a decimal number `[00,99]`. |
| `%Y` | Year with century as a decimal number. |
| `%z` | Time zone offset indicating a positive or negative time difference from UTC/GMT of the form +HHMM or -HHMM |
| `%Z` | Time zone name (no characters if no time zone exists). |
| `%%` | A literal `%` character. |
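
A brief `time.strftime()` sketch using a few of the directives above (the printed values depend on the current time and locale):
```py
import time

now = time.localtime()                            # struct_time for the current local time
print(time.strftime("%Y-%m-%d %H:%M:%S", now))    # e.g. 2021-01-31 11:05:37
print(time.strftime("%A, %d %B %Y", now))         # locale's full weekday and month names
```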
## Datetime
```py
import datetime
today = datetime.date.today()  # returns the current date
today = datetime.datetime.today()  # returns the current date and time
# formatting example
print('Current Date: {}-{}-{}'.format(today.day, today.month, today.year))
print('Current Time: {}:{}.{}'.format(today.hour, today.minute, today.second))
var_1 = datetime.date(year, month, day)  # creates a date object
var_2 = datetime.time(hour, minute, second, microsecond)  # creates a time object
dt = datetime.datetime.combine(var_1, var_2)  # combines the date and time objects into a single datetime object
date_1 = datetime.date(year, month, day)
date_2 = date_1.replace(year=new_year)
# DATETIME ARITHMETIC
date_1 - date_2  # -> datetime.timedelta(num_of_days)
datetime.timedelta  # duration expressing the difference between two date, time or datetime objects
```
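A short arithmetic sketch (the dates are illustrative):
```py
import datetime

d1 = datetime.date(2021, 1, 31)
d2 = d1 + datetime.timedelta(days=7)   # dates support + and - with timedelta
delta = d2 - d1                        # datetime.timedelta(days=7)
print(delta.days)                      # 7
```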

View file

@ -0,0 +1,68 @@
# Unittest Module
Allows you to test your own code and check whether the output matches the expected one.
```py
import unittest
import module_to_test

class Test(unittest.TestCase):  # inherits from unittest.TestCase

    # test whether the output is correct with an assertion
    def test_1(self):
        # code here
        self.assert*(output, expected_output)

if __name__ == '__main__':
    unittest.main()
```
## TestCase Class
Instances of the `TestCase` class represent the logical test units in the unittest universe. This class is intended to be used as a base class, with specific tests being implemented by concrete subclasses. This class implements the interface needed by the test runner to allow it to drive the tests, and methods that the test code can use to check for and report various kinds of failure.
### Assert Methods
| Method | Checks that |
|-----------------------------|------------------------|
| `assertEqual(a, b)` | `a == b` |
| `assertNotEqual(a, b)` | `a != b` |
| `assertTrue(x)` | `bool(x) is True` |
| `assertFalse(x)` | `bool(x) is False` |
| `assertIs(a, b)` | `a is b` |
| `assertIsNot(a, b)` | `a is not b` |
| `assertIsNone(x)` | `x is None` |
| `assertIsNotNone(x)` | `x is not None` |
| `assertIn(a, b)` | `a in b` |
| `assertNotIn(a, b)` | `a not in b` |
| `assertIsInstance(a, b)` | `isinstance(a, b)` |
| `assertNotIsInstance(a, b)` | `not isinstance(a, b)` |

| Method                                           | Checks that                                                           |
|-------------------------------------------------|---------------------------------------------------------------------|
| `assertRaises(exc, fun, *args, **kwds)` | `fun(*args, **kwds)` raises *exc* |
| `assertRaisesRegex(exc, r, fun, *args, **kwds)` | `fun(*args, **kwds)` raises *exc* and the message matches regex `r` |
| `assertWarns(warn, fun, *args, **kwds)` | `fun(*args, **kwds)` raises warn |
| `assertWarnsRegex(warn, r, fun, *args, **kwds)` | `fun(*args, **kwds)` raises warn and the message matches regex *r* |
| `assertLogs(logger, level)` | The with block logs on logger with minimum level |

| Method                       | Checks that                                                                     |
|------------------------------|-------------------------------------------------------------------------------|
| `assertAlmostEqual(a, b)` | `round(a-b, 7) == 0` |
| `assertNotAlmostEqual(a, b)` | `round(a-b, 7) != 0` |
| `assertGreater(a, b)` | `a > b` |
| `assertGreaterEqual(a, b)` | `a >= b` |
| `assertLess(a, b)` | `a < b` |
| `assertLessEqual(a, b)` | `a <= b` |
| `assertRegex(s, r)` | `r.search(s)` |
| `assertNotRegex(s, r)` | `not r.search(s)` |
| `assertCountEqual(a, b)` | a and b have the same elements in the same number, regardless of their order. |

| Method                       | Used to compare    |
|------------------------------|--------------------|
| `assertMultiLineEqual(a, b)` | strings |
| `assertSequenceEqual(a, b)` | sequences |
| `assertListEqual(a, b)` | lists |
| `assertTupleEqual(a, b)` | tuples |
| `assertSetEqual(a, b)` | sets or frozensets |
| `assertDictEqual(a, b)` | dicts |
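
A minimal test case exercising a couple of the assert methods above (the `double` function is just an illustrative unit under test):
```py
import unittest

def double(x):
    return 2 * x

class TestDouble(unittest.TestCase):

    def test_double_of_int(self):
        self.assertEqual(double(2), 4)

    def test_double_of_list(self):
        self.assertListEqual(double([1]), [1, 1])

if __name__ == '__main__':
    unittest.main()
```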

View file

@ -0,0 +1,43 @@
# Urllib Module Cheatsheet
## Module Structure
`urllib` is a package that collects several modules for working with URLs:
- `urllib.request` for opening and reading URLs
- `urllib.error` containing the exceptions raised by urllib.request
- `urllib.parse` for parsing URLs
- `urllib.robotparser` for parsing robots.txt files
## urllib.request
### Opening a URL
```python
import urllib.request
# read() returns the response body; the HTTP headers are not included
response = urllib.request.urlopen(url)
data = response.read().decode()
```
### Reading Headers
```python
response = urllib.request.urlopen(url)
headers = dict(response.getheaders()) # store headers as a dict
```
## urllib.parse
### URL Encoding
Encode a query in a URL
```python
url = "http://www.addres.x/_?"
# encode the passed key-value pairs as a query string and append them to the URL
encoded = url + urllib.parse.urlencode({"key": value})
```
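Combining the two submodules, a minimal GET-with-query sketch (the endpoint is hypothetical):
```python
import urllib.parse
import urllib.request

base = "http://www.example.com/search?"
query = urllib.parse.urlencode({"q": "python", "page": 2})   # 'q=python&page=2'

with urllib.request.urlopen(base + query) as response:       # hypothetical endpoint
    body = response.read().decode()
```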

View file

@ -0,0 +1,26 @@
# XML Module CheatSheet
## Submodules
The XML handling submodules are:
- `xml.etree.ElementTree`: the ElementTree API, a simple and lightweight XML processor
- `xml.dom`: the DOM API definition
- `xml.dom.minidom`: a minimal DOM implementation
- `xml.dom.pulldom`: support for building partial DOM trees
- `xml.sax`: SAX2 base classes and convenience functions
- `xml.parsers.expat`: the Expat parser binding
## xml.etree.ElementTree
```python
import xml.etree.ElementTree as ET
data = "<xml/>"
tree = ET.fromstring(data) # parse string containing XML
tree.find("tag").text # return data contained between <tag></tag>
tree.find("tag").get("attribute") # return value of <tag attrubute="value">
tree.findall("tag1/tag2") # list of tag2 inside tag1
```
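A small parsing sketch (the XML document is illustrative):
```python
import xml.etree.ElementTree as ET

data = """
<catalog>
    <book id="1"><title>Dune</title></book>
    <book id="2"><title>Foundation</title></book>
</catalog>
"""

root = ET.fromstring(data)                 # parse the string into an Element
for book in root.findall("book"):          # every <book> child of the root
    print(book.get("id"), book.find("title").text)
```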