Merge branch 'main' into dotnet/net-7

This commit is contained in:
Marcello 2022-06-06 19:02:08 +02:00
commit 8c6c9ac4a4
134 changed files with 554 additions and 1500 deletions

862
docs/C++/cpp.md Normal file

@ -0,0 +1,862 @@
# C/C++
## Naming convention
C++ element | Case
-------------|------------
class | PascalCase
variable | camelCase
method | camelCase
## Library Import
`#include <iostream>`
C++ libs encapsulate C libs.
A C library can be used with the traditional name `<lib.h>` or with the prefix _**c**_ and without `.h` as `<clib>`.
### Special Operators
Operator | Operator Name
-----------|----------------------------------------------------
`::` | global reference operator
`&` | address operator (returns a memory address)
`*`        | dereference operator (returns the pointed value)
### Constant Declaration
```cpp
#define constant_name value
const type constant_name = value;
```
### Console pausing before exit
```cpp
#include <cstdlib>
system("pause");
getchar(); // waits for keyboard input; if it is the last instruction it keeps the console open until a key is pressed
```
### Namespace definition
The `using` directive can be omitted; in that case members must be qualified explicitly with `namespace::member`.
`using namespace <namespace>;`
### Main Function
```cpp
int main() {
//code here
return 0;
}
```
### Variable Declaration
```cpp
type var_name = value; //c-like initialization
type var_name (value); //constructor initialization
type var_name {value}; //uniform initialization
type var_1, var_2, ..., var_n;
```
### Type Casting
`(type) var;`
`type(var);`
### Variable Types
Type | Value Range | Byte
-------------------------|-----------------------------------|------
`short`                  | -32768 to 32767                   | 2
`unsigned short`         | 0 to 65535                        | 2
`int` | -2147483648 to 2147483647 | 4
`unsigned int` | 0 to 4294967295 | 4
`long` | -2147483648 to 2147483647 | 4
`unsigned long` | 0 to 4294967295 | 4
`long long` | | 8
`float` | +/- 3.4e +/- 38 (~7 digits) | 4
`double` | +/- 1.7e +/- 308 (~15 digits) | 8
`long double`            |                                   | 8, 12 or 16 (platform-dependent)
Type | Value
-------------------------|-----------------------------
`bool` | true or false
`char` | ascii characters
`string` | sequence of ascii characters
`NULL` | empty value
### Integer Numerals
Example | Type
---------|------------------------
`75` | decimal
`0113` | octal (zero prefix)
`0x4` | hexadecimal (0x prefix)
`75` | int
`75u` | unsigned int
`75l` | long
`75ul` | unsigned long
`75lu` | unsigned long
### Floating Point Numerals
Example | Type
------------|-------------
`3.14159L` | long double
`60.22e23f` | float
Code | Value
----------|---------------
`3.14159` | 3.14159
`6.02e23` | 6.02 * 10^23
`1.6e-19` | 1.6 * 10^-19
`3.0` | 3.0
### Character/String Literals
`'z'` single character literal
`"text here"` string literal
### Special Characters
Escape Character | Character
-------------------|-----------------------------
`\n` | newline
`\r` | carriage return
`\t` | tab
`\v` | vertical tab
`\b` | backspace
`\f` | form feed
`\a` | alert (beep)
`\'` | single quote (')
`\"` | double quote (")
`\?` | question mark (?)
`\\` | backslash (\)
`\0` | string termination character
### Screen Output
```cpp
cout << expression; // print expression on screen (no automatic newline)
cout << expression_1 << expression_2; // concatenation of outputs
cout << expression << "\n"; // print line on screen
cout << expression << endl; // print line on screen
//Substitutes variable to format specifier
#include <stdio.h>
printf("text %<fmt_spec>", variable); // has problems, use PRINTF_S
printf_s("text %<fmt_spec>", variable);
```
### Input
```cpp
#include <iostream>
cin >> var; //space terminates value
cin >> var_1 >> var_2;
//if used after cin >> MUST clear buffer with cin.ignore(), cin.sync() or std::ws
getline(stream, string, delimiter) //read input from stream (usually CIN) and store it in string, a different delimiter character can be set
#include <stdio.h>
scanf("%<fmt_spec>", &variable); // has problems, use SCANF_S
scanf_s("%<fmt_spec>", &variable); //return number of successfully accepted inputs
```
### Format Specifiers `%[flags][width][.precision][length]specifier`
Specifier | Specified Format
------------|-----------------------------------------
`%d`, `%i` | signed decimal integer
`%u` | unsigned decimal integer
`%o` | unsigned octal
`%x` | unsigned hexadecimal integer
`%X` | unsigned hexadecimal integer (UPPERCASE)
`%f` | decimal floating point (lowercase)
`%F` | decimal floating point (UPPERCASE)
`%e` | scientific notation (lowercase)
`%E` | scientific notation (UPPERCASE)
`%a` | hexadecimal floating point (lowercase)
`%A` | hexadecimal floating point (UPPERCASE)
`%c` | character
`%s` | string
`%p` | pointer address
### CIN input validation
```cpp
if (cin.fail()) // if cin fails to get an input
{
cin.clear(); // reset cin error flags
cin.ignore(n, '\n'); //remove n characters from the buffer or up to \n
//error message here
}
if (!(cin >> var)) // if cin fails to get an input
{
cin.clear(); // reset cin error flags
cin.ignore(n, '\n'); //remove n characters from the buffer or up to \n
//error message here
}
```
### Cout Format Specifier
```cpp
#include <iomanip>
cout << setw(print_size) << setprecision(num_digits) << var; //usage
setbase(base) //set numeric base [dec, hex, oct]
setw(print_size) //set the total number of characters to display
setprecision(num_digits) //sets the number of decimal digits to display
setfill(character) //use character to fill space between words
```
### Arithmetic Operators
Operator | Operation
---------|---------------
a `+` b | sum
a `-` b | subtraction
a `*` b | multiplication
a `/` b | division
a `%` b | modulo
a`++` | increment
a`--` | decrement
### Comparison Operators
Operator | Operation
---------|--------------------------
a `==` b | equal to
a `!=` b | not equal to
a `>` b | greater than
a `<` b | lesser than
a `>=` b | greater than or equal to
a `<=` b | lesser than or equal to
### Logical Operator
Operator | Operation
---------------------|-----------------------
`!`a, `not` a | logical negation (NOT)
a `&&` b, a `and` b | logical AND
a `||` b, a `or` b | logical OR
### Conditional Ternary Operator
`condition ? result_1 : result_2`
If condition is true evaluates to result_1, and otherwise to result_2
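For instance (a small sketch with hypothetical variables `a` and `b`):
```cpp
int a = 3, b = 7;
int max = (a > b) ? a : b; // max == 7
```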
### Bitwise Operators
Operator | Operation
-----------------------|---------------------
`~`a, `compl` a | bitwise **NOT**
a `&` b, a `bitand` b | bitwise **AND**
a `|` b, a `bitor` b | bitwise **OR**
a `^` b, a `xor` b     | bitwise **XOR**
a `<<` b | bitwise left shift
a `>>` b | bitwise right shift
### Compound Assignment Operators
Operator | Operation
------------|------------
a `+=` b | a = a + b
a `-=` b | a = a - b
a `*=` b | a = a * b
a `/=` b | a = a / b
a `%=` b | a = a % b
a `&=` b | a = a & b
a `|=` b | a = a | b
a `^=` b | a = a ^ b
a `<<=` b | a = a << b
a `>>=` b | a = a >> b
### Operator Precedence
1. `!`
2. `*`, `/`, `%`
3. `+`, `-`
4. `<`, `<=`, `>`, `>=`
5. `==`, `!=`
6. `&&`
7. `||`
8. `=`
### Mathematical Functions
```cpp
#include <cmath>
abs(x); // absolute value
labs(x); //absolute value if x is long, result is long
fabs(x); //absolute value if x is floating-point, result is floating-point
sqrt(x); // square root
ceil(x); // ceil function (next integer)
floor(x); // floor function (integer part of x)
log(x); // natural log of x
log10(x); // log base 10 of x
exp(x); // e^x
pow(x, y); // x^y
sin(x);
cos(x);
tan(x);
asin(x); //arcsin(x)
acos(x); //arccos(x)
atan(x); //arctan(x)
atan2(x, y); //arctan(x / y)
sinh(x); //hyperbolic sin(x)
cosh(x); //hyperbolic cos(x)
tanh(x); //hyperbolic tan(X)
```
### Character Classification
```cpp
isalnum(c); //true if c is alphanumeric
isalpha(c); //true if c is a letter
isdigit(c); //true if char is 0 1 2 3 4 5 6 7 8 9
iscntrl(c); //true if c is a control character (e.g. DELETE)
isascii(c); //true if c is a valid ASCII character
isprint(c); //true if c is printable
isgraph(c); //true if c is printable, SPACE excluded
islower(c); //true if c is lowercase
isupper(c); //true if c is uppercase
ispunct(c); //true if c is punctuation
isspace(c); //true if c is SPACE
isxdigit(c); //true if c is HEX DIGIT
```
### Character Functions
```cpp
#include <cctype>
tolower(c); //transforms character in lowercase
toupper(c); //transform character in uppercase
```
### Random Numbers Between max-min (int)
```cpp
#include <ctime>
#include <stdlib.h>
srand(time(NULL)); //initialize seed
var = rand(); //random number
var = rand() % (max + 1); //random numbers between 0 & max
var = (rand() % (max - min + 1)) + min; //random numbers between min & max
```
### Flush Output Buffer
```cpp
#include <stdio.h>
fflush(FILE); // empty output buffer and write its content to the argument passed
```
**Do not use `fflush(stdin)`** to empty input buffers: it is undefined behaviour in C.
## STRINGS (objects)
```cpp
#include <string>
string string_name = "string_content"; //string declaration
string string_name = string("string_content"); // string creation w/ constructor
string.length() //returns the length of the string
//if used after cin >> MUST clear buffer with cin.ignore(), cin.sync() or std::ws
getline(source, string, delimiter); //string input, source can be input stream (usually cin)
printf_s("%s", string.c_str()); //print the string as a char array, %s --> char*
string_1 + string_2; // string concatenation
string[pos] //returns char at index pos
```
### String Functions
```cpp
string.c_str() //returns a pointer to a char[] (char array aka C string) terminated by '\0'
strlen(string); //return length (num of chars) of the string
strcat(destination, source); //appends chars of string2 to string1
strncat(string1, string2, nchar); //appends the first n chars of string 2 to string1
strcpy(string1, string2.c_str()); //copies string2 into string1 char by char
strncpy(string1, string2, n); //copy first n chars from string2 to string1
strcmp(string1, string2); //compares string1 w/ string2
strncmp(string1, string2, n); //compares first n chars
//returns < 0 if string1 precedes string2
//returns 0 if string1 == string2
// returns > 0 if string1 succeeds string2
strchr(string, c); //returns a pointer to the first occurrence of c in string, NULL otherwise
strstr(string1, string2); //returns a pointer to the first occurrence of string2 in string1, NULL otherwise
strpbrk(string, charSet); //Returns a pointer to the first occurrence of any character from strCharSet in str, or a NULL pointer if the two string arguments have no characters in common.
```
### String Conversion
```cpp
atof(string); //converts the string to a double if possible
atoi(string); //converts the string to an integer if possible
atol(string); //converts the string to a long if possible
```
### String Methods
```C++
string.at(pos); // returns char at index pos
string.substr(start, length); // returns the substring starting at index START and LENGTH characters long
string.c_str(); // returns the string as a C-style char array
string.find(substring); // The zero-based index of the first character in string object that matches the requested substring or characters
```
## VECTORS
```cpp
#include <vector>
vector<type> vector_name = {values}; //variable length array
```
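A brief usage sketch (the vector name and values are just examples):
```cpp
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> numbers = {1, 2, 3};  // variable length array
    numbers.push_back(4);             // append an element at the end
    cout << numbers.size() << endl;   // number of elements (4)
    for (int n : numbers)             // range-based iteration
        cout << n << " ";
    return 0;
}
```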
## Selection Statements
### Simple IF
```cpp
if (condition)
//single instruction
if (condition) {
//code here
}
```
### Simple IF-ELSE
```cpp
if (condition) {
//code here
} else {
//code here
}
```
## IF-ELSE multi-branch
```cpp
if (condition) {
//code here
} else if (condition) {
//code here
} else {
//code here
}
```
### Switch
```cpp
switch (expression) {
case constant_1:
//code here
break;
case constant_2:
//code here
break;
default:
//code here
}
```
## Loop Statements
### While Loop
```cpp
while (condition) {
//code here
}
```
### Do While
```cpp
do {
//code here
} while (condition);
```
### For Loop
```cpp
for (initialization; condition; increase) {
//code here
}
```
### Range-Based For Loop
```cpp
for (declaration : range) {
//code here
}
```
### Break Statement
`break;` leaves a loop, even if the condition for its end is not fulfilled.
### Continue Statement
`continue;` causes the program to skip the rest of the loop in the current iteration.
## Functions
Functions **must** be declared **before** they are used (i.e. before the main function).
It is possible to define functions **after** the main only if the *prototype* is declared **before** the main.
To return multiple values, the variables can be passed by reference so that their values are updated in the caller.
### Function Prototype (before main)
`type function_name(type argument1, ...);`
### Standard Function
```cpp
type functionName (parameters) { //formal parameters aka arguments
//code here
return <expression>;
}
```
### Void Function (aka procedure)
```cpp
void functionName (parameters) {
//code here
}
```
### Arguments passed by reference without pointers
Passing arguments by reference causes modifications made inside the function to be propagated to the values outside.
Passing arguments by values copies the values to the arguments: changes remain inside the function.
```cpp
type functionName (type &argument1, ...) {
//code here
return <expression>;
}
```
`functionName (arguments);`
### Arguments passed by reference with pointers
Passing arguments by reference causes modifications made inside the function to be propagated to the values outside.
Passing arguments by values copies the values to the arguments: changes remain inside the function.
```cpp
type function_name (type *argument_1, ...) {
instructions;
return <expression>;
}
```
`function_name (&argument_1, ...);`
## Arrays
```cpp
type arrayName[dimension]; //array declaration
type arrayName[dimension] = {value1, value2, ...}; //array declaration & initialization, values number must match dimension
array[index] //item access, index starts at 0 (zero)
array[index] = value; //value assignment at position index
```
## String as array of Chars
```cpp
char charArray[] = "text"; //the string literal is stored in a char array, its length determines the dimension of the array
string str = charArray; //an array of chars is implicitly converted to a string
```
## Array as function parameter
The dimension is not specified because it is determined by the passed array.
The array is passed by reference.
```cpp
type function(type array[]){
//code here
}
//array is not modifiable inside the function (READ ONLY)
type function(const type array[]){
//code here
}
function(array); //array passed w/out square brackets []
```
### Multi-Dimensional Array (Matrix)
```cpp
type matrix[rows][columns];
matrix[i][j] //element A_ij of the matrix
```
### Matrix as function parameter
```cpp
//matrix passed by reference, second dimension is mandatory
type function(type matrix[][columns]){
//code here
};
//matrix values READ ONLY
type function(const type matrix[][columns]){
//code here
}
type function(type matrix[][dim2]...[dimN]){
//code here
}
```
## Record (struct)
List of non-homogeneous items
### Struct Definition (before functions, outside main)
```cpp
struct structName {
type field1;
type field2;
type field3;
type field4;
};
structName variable; //STRUCT variable
variable.field //field access
```
## Pointers
Pointers hold the memory addresses of declared variables; they should be initialized to NULL.
```cpp
type *pointer = &variable; //pointer init and assignment
type *pointer = NULL;
type *pointer = otherPointer;
type **pointerToPointer = &pointer; // pointerToPointer -> pointer -> variable
```
`&variable` extracts the address, the pointer holds the address of the variable.
pointer type and variable type **must** match.
(*) --> "value pointed to by"
```cpp
pointer //address of pointed value (value of variable)
*pointer //value of pointed variable
**pointer //value pointed by *pointer (pointer to pointer)
```
### Pointer to array
```cpp
type *pointer;
type array[dim] = {};
pointer = array; //point to array (pointer points to first "cell" of array)
pointer++; //change pointed value to successive "cell" of array
```
### Pointers, Arrays & Functions
```cpp
func(array) //pass entire array to function (no need to use (&) to extract address)
type func(type* array){
array[index] //access to item of array at index
}
```
### Pointer to Struct
```cpp
(*structPointer).field //access to field value
structPointer->structField //access to field value
```
## Dynamic Structures
Dynamic structures are structures without a fixed number of items.
Every item in a dynamic structure is called **node**.
Every node is composed by two parts:
* the value (item)
* pointer to successive node
**Lists** are *linear* dynamic structures in which only the preceding and the succeeding item are defined. A List is a group of homogeneous items (all of the same type).
**Trees** and **Graphs** are non-*linear* dynamic structures in which an item can have multiple successors.
### Stack
A **Stack** is a list in which nodes can be extracted from one *side* only (*LIFO*).
The extraction of an item from the *top* is called **pop**.
```cpp
// node structure
struct Node {
type value;
Node *next;
};
```
#### Node Insertion
```cpp
Node *stackNode; //current node
Node* head = NULL; //pointer to head of stack
int nodeValue;
//assign value to nodeValue
stackNode = new Node; //create new node
stackNode->value = nodeValue; //set the node's value
stackNode->next = head; //update node pointer to old head adding it to the stack
head = stackNode; //update head to point to new first node
```
#### Node Deletion
```cpp
stackNode = head->next; //memorize location of second node
delete head; //delete first node
head = stackNode; //update head to point to new first node
```
#### Passing Head To Functions
```cpp
type function(Node** head) //value of head passed by address (head is Node*)
{
*head = ... //update value of head (pointed variable/object/Node)
}
```
### Queue
A **Queue** is a list in which nodes enter from one side and can be extracted only from the other side (*FIFO*).
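A minimal sketch of a queue built on the same `Node` layout used for the stack; the `tail` pointer to the last node is an assumption of this sketch:
```cpp
Node* queueNode = new Node;   // create new node
queueNode->value = nodeValue; // set the node's value
queueNode->next = NULL;       // the new node will be the last one
if (head == NULL)
    head = queueNode;         // empty queue: the node is both first and last
else
    tail->next = queueNode;   // append after the current last node
tail = queueNode;             // update tail to point to the new last node

// extraction from the head (FIFO)
queueNode = head->next;       // memorize location of the second node
delete head;                  // delete the first node
head = queueNode;             // update head to point to the new first node
```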
### Linked List
A **Linked List** is a list in which nodes can be extracted from both ends and from inside the list.
Linked lists can be *linear*, *circular* or *bidirectional*.
In circular linked lists the last node points to the first.
Nodes of bidirectional linked lists are composed by three parts:
* the value (item)
* pointer to successive node
* pointer to previous item
Thus the first and the last node each have one empty pointer, since they only point to a single neighbouring node.
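A possible node layout for a bidirectional linked list, following the three parts listed above:
```cpp
struct Node {
    type value;  // the item
    Node *next;  // pointer to the successive node (NULL in the last node)
    Node *prev;  // pointer to the previous node (NULL in the first node)
};
```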
### Dynamic Memory Allocation
C/C++ does not automatically free allocated memory when nodes are deleted. It must be done manually.
In **C++**:
* `new` is used to allocate memory dynamically.
* `delete` is used to free the dynamically allocated memory.
In **C**:
* `malloc()` returns a void pointer if the allocation is successful.
* `free()` frees the memory
```C
list *pointer = (list*)malloc(sizeof(list)); //memory allocation
free(pointer); //freeing of memory
```
`malloc()` returns a *void pointer*, thus the result must be cast to the correct pointer type with `(list*)`
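A C++ sketch equivalent to the C example, assuming the `Node` struct defined earlier:
```cpp
Node *pointer = new Node; // memory allocation
delete pointer;           // freeing of memory
```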
## Files
The object oriented approach is based on the use of *streams*.
A **Stream** can be considered a stream of data that passes sequentially from a source to a destination.
The available classes in C++ to operate on files are:
* `ifstream` for the input (reading)
* `ofstream` for the output (writing)
* `fstream` for input or output
### File Opening
Filename can be string literal or CharArray (use `c_str()`).
```cpp
ifstream file;
file.open("filename"); //read from file
ofstream file;
file.open("filename"); //write to file
fstream file;
file.open("filename", ios::in); //read form file
file.open("filename", ios::out); //write to file
file.open("filename", ios::app); //append to file
file.open("filename", ios::trunc); //overwrite file
file.open("filename", ios::nocreate); //opens file only if it exists, error otherwise. Does not create new file
file.open("filename", ios::noreplace); //opens file only if it not exists, error otherwise. If it not exists the file is created.
file.open("filename", ios::binary); //opens file in binary format
```
If file opening fails the stream has value 0, otherwise the value is the assigned memory address.
Opening modes can be combined with the OR operator: `ios::mode | ios::mode`.
### File Reading & Writing
To write to and read from a file the `>>` and `<<` operators are used.
```cpp
file.open("filename", ios::in | ios::out);
file << value << endl; //write to file
string line;
do {
getline(file, line); //read file line by line
//code here
} while (!file.eof()); // or !EOF
```
### Stream state & Input errors
Once a stream is in a **state of error** it will remain so until the status flags are *explicitly reset*. Input operations on such a stream are *void* until the reset happens.
To clear the status of a stream the `clear()` method is used.
Furthermore, when an error occurs on the stream, **the stream is not cleared** of its character contents.
To clear the stream contents the `ignore()` method is used.
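A possible recovery sketch, assuming `file` is an open stream (or `cin`) that has entered a fail state:
```cpp
#include <limits>

if (file.fail()) {
    file.clear();                                          // reset the status flags
    file.ignore(numeric_limits<streamsize>::max(), '\n');  // discard buffered characters up to the newline
}
```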

201
docs/bash/commands.md Normal file

@ -0,0 +1,201 @@
# Bash Commands
**NOTE**: Square brackets (`[]`) denotes optional commands/flags/arguments. Uppercase denotes placeholders for arguments.
## Basic Commands
### Elevated Privileges and Users
[sudo vs su](https://unix.stackexchange.com/questions/35338/su-vs-sudo-s-vs-sudo-i-vs-sudo-bash/35342)
```bash
sudo su # login as root (user must be sudoers, root password not required) DANGEROUS
sudo -s # act as root and inherit current user environment (env as is now, along current dir and env vars) SAFE (can modify user environment)
sudo -i # act as root and and use a clean environment (goes to user's home, runs .bashrc) SAFEST
sudo COMMAND # run a command w\ root permissions
sudo -u USER COMMAND # run command as user
su # become root (must know root password) DANGEROUS
su - USER # change user and load its home folder
su USER # change user but don't load its home folder
```
### Getting Info
```sh
man COMMAND # show command manual
help COMMAND # show command info
whatis COMMAND # one-line command explanation
apropos COMMAND # search related commands
which COMMAND # locate a command
history # list of used commands
id # Print user and group information for the specified USER, or (when USER omitted) for the current user
```
### Moving & Showing Directory Contents
```sh
pwd # print working (current) directory
ls [option]... [FILE]... # list directory contents ("list storage")
cd rel_path # change directory to path (rel_path must be inside current directory)
cd abs_path # change directory to path
cd .. # change directory to parent directory
cd ~ # go to /home
cd - # go to previous directory
pushd PATH # go from current directory to path
popd # return to previous directory (before pushd)
```
### Creating, Reading, Copying, Moving, Modifying Files And Directories
```sh
touch FILE # change FILE timestamp if it exists, create the file otherwise
cat [FILE] # concatenate files and print on standard output (FD 1)
cat >> FILE # append following content to file (Ctrl+D to stop)
file FILE # discover file extension and format
stat FILE # display file or file system status
tail # output the last part of a file
tail [-nNUM] # output the last NUM lines
more # filter for paging through text one screenful at a time
less # like more but allows backwards navigation (display big files in pages), navigate with arrow keys or space bar
cut # remove sections from each line of files
cut [-d --delimiter=DELIM] # use DELIM instead of TAB for field delimiter
cut [-f --fields=LIST] # select only these fields
df # report file system disk space usage
rm FILE # remove file or directories
rm DIRECTORY -r # remove directory and all its contents (recursive)
rmdir DIRECTORY # remove directory only if is empty
mkdir DIRECTORY # make directories
mv SOURCE DESTINATION # move or rename files
mv SOURCE DIRECTORY # move FILE to DIRECTORY
cp SOURCE DESTINATION # copy SOURCE to DESTINATION
```
### Files Permissions & Ownership
![Linux Permissions](../img/bash_files-permissions-and-ownership-basics-in-linux.png "files info and permissions")
```sh
chmod MODE FILE # change file (or directory) permissions
chmod OCTAL-MODE FILE # change file (or directory) permissions
chown [OPTION]... [OWNER][:[GROUP]] FILE... # change file owner and group
chgrp [OPTION]... GROUP FILE... # change group ownership
```
**File Permissions**:
- `r`: Read. Can see file content
- `w`: Write. Can modify file content
- `x`: Execute. Can execute file
**Directory Permissions**:
- `r`: Read. Can see dir contents
- `w`: CRUD. Can create, rename and delete files
- `x`: Search. Can access and navigate inside the dir. Necessary to operate on files
***Common* Octal Files Permissions**:
- `777`: (`rwxrwxrwx`) No restrictions on permissions. Anybody may do anything. Generally not a desirable setting.
- `755`: (`rwxr-xr-x`) The file's owner may read, write, and execute the file. All others may read and execute the file. This setting is common for programs that are used by all users.
- `700`: (`rwx------`) The file's owner may read, write, and execute the file. Nobody else has any rights. This setting is useful for programs that only the owner may use and must be kept private from others.
- `666`: (`rw-rw-rw-`) All users may read and write the file.
- `644`: (`rw-r--r--`) The owner may read and write a file, while all others may only read the file. A common setting for data files that everybody may read, but only the owner may change.
- `600`: (`rw-------`) The owner may read and write a file. All others have no rights. A common setting for data files that the owner wants to keep private.
***Common* Octal Directory Permissions**:
- `777`: (`rwxrwxrwx`) No restrictions on permissions. Anybody may list files, create new files in the directory and delete files in the directory. Generally not a good setting.
- `755`: (`rwxr-xr-x`) The directory owner has full access. All others may list the directory, but cannot create files nor delete them. This setting is common for directories that you wish to share with other users.
- `700`: (`rwx------`) The directory owner has full access. Nobody else has any rights. This setting is useful for directories that only the owner may use and must be kept private from others.
### Finding Files And Directories
```sh
find [path] [expression] # search file in directory hierarchy
find [start-position] -type f -name FILENAME # search for a file named "filename"
find [start-position] -type d -name DIRNAME # search for a directory named "dirname"
find [path] -exec <command> {} \; # execute command on found items (identified by {})
[[ -f "path" ]] # test if a file exists
[[ -d "path" ]] # test if a folder exists
[[ -L "path" ]] # test if is symlink
```
### Other
```sh
tee # copy standard input and write to standard output AND files simultaneously
tee [FILE]
command | sudo tee FILE # operate on file w/o using shell as su
echo # display a line of text
echo "string" > FILE # write lin of text to file
echo "string" >> FILE # append line of text to end of file (EOF)
wget URL # download files from the web
curl # download the contents of a URL
curl [-I --head] # Fetch the headers only
ps [-ax] # display processes
kill <PID> # kill process w/ Process ID <PID>
killall PROCESS # kill process by name
grep # search through a string using a REGEX
grep [-i] # grep ignore case
source script.sh # load script as a command
diff FILES # compare files line by line
# sudo apt install shellcheck
shellcheck FILE # shell linter
xargs [COMMAND] # build and execute command lines from standard input
# xargs reads items form the standard input, delimited by blanks or newlines, and executes the COMMAND one or more times with the items as arguments
watch [OPTIONS] COMMAND # execute a program periodically, showing output full-screen
watch -n SECONDS COMMAND # execute command every SECONDS seconds (no less than 0.1 seconds)
```
## Data Wrangling
**Data wrangling** is the process of transforming and mapping data from one "raw" data form into another format with the intent of making it more appropriate and valuable for a variety of downstream purposes such as analytics.
```bash
sed # stream editor for filtering and transforming text
sed -E "s/REGEX/replacement/" # substitute text ONCE (-E uses modern REGEX)
sed -E "s/REGEX/replacement/g" # substitute text multiple times (every match)
wc [FILE] # print newline, word and byte count for each file
wc [-m --chars] FILE # print character count
wc [-c --bytes] FILE # print bytes count
wc [-l --lines] FILE # print lines count
wc [-w --words] FILE # print word count
sort [FILE] # sort lines of a text file
uniq [INPUT [OUTPUT]] # report or omit repeated lines (from INPUT to OUTPUT)
uniq [-c --count] # prefix lines w/ number of occurrences
uniq [-d --repeated] # print only duplicate lines, one for each group
uniq [-D] # print all duplicate lines
paste [FILE] # merge lines of files
paste [-d --delimiters=LIST] # use delimiters from LIST
paste [-s --serial] # paste one file at a time instead of in parallel
awk '{program}' # pattern scanning and processing language
awk [-f --file PROGRAM_FILE] # read program source from PROGRAM_FILE instead of from first argument
bc [-hlwsqv long-options] [FILE] # arbitrary precision calculator language
bc [-l --mathlib] [FILE] # use standard math library
```
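As an example of combining these tools, the following pipeline counts how often each remote host appears in a hypothetical `access.log` (whose first field is assumed to be the host):
```bash
# keep the first field, sort so uniq can collapse duplicates,
# count occurrences, then sort numerically from most to least frequent
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -n 10
```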

260
docs/bash/scripting.md Normal file

@ -0,0 +1,260 @@
# Bash Scripting
[Bash Manual](https://www.gnu.org/software/bash/manual/)
`Ctrl+Shift+C`: copy
`Ctrl+Shift+V`: paste
## Bash Use Modes
Interactive mode --> shell waits for user's commands
Non-interactive mode --> shell runs scripts
## File & Directories Permissions
File:
- `r`: Read. Can see file content
- `w`: Write. Can modify file content
- `x`: Execute. Can execute file
Directory:
- `r`: Read. Can see dir contents
- `w`: CRD. Can create, rename and delete files
- `x`: Search. Can access and navigate inside the dir. Necessary to operate on files
## File Descriptors
`FD 0` "standard input" --> Channel for standard input (default keyboard)
`FD 1` "standard output" --> Channel for the default output (default screen)
`FD 2` "standard error" --> Channel for error messages, info messages, prompts (default screen)
File descriptors can be joined to create streams that lead to files, devices or other processes.
Bash gets commands by reading lines.
As soon as it's read enough lines to compose a complete command, bash begins running that command.
Usually, commands are just a single line long. An interactive bash session reads lines from you at the prompt.
Non-interactive bash processes read their commands from a file or stream.
Files with a shebang as their first line (and the executable permission) can be started by your system's kernel like any other program.
### First Line Of Bash
`#!/usr/bin/env bash`
shebang indicating which interpreter to use
### Simple Command
```bash
[ var=value ... ] command [ arg ... ] [ redirection ... ] # [.] is optional component
```
### Pipelines (commands concatenation)
```bash
command_1 | command_2 # link the first process' standard output to the second process' standard input
command_1 |& command_2 # link the first process' standard output & standard error to the second process' standard input
```
### Lists (sequence of commands)
```bash
command_1; command_2; ... # execute command in sequence, one after the other
command_1 || command_2 || ... # execute successive commands only if preceding ones fail
command_1 && command_2 && .. # execute successive commands only if preceding ones succeeds
```
### COMPOUND COMMANDs (multiple commands as one)
```bash
# block of commands executed as one
<keyword>
command_1; command_2; ...
<end_keyword>
{ command_1; command_2; ...; } # sequence of commands executed as one (in the current shell)
```
### Functions (blocks of easily reusable code)
`function_name () { compound_command; }`
The parentheses are always empty: arguments are not declared there, they are accessed inside the body as positional parameters (`$1`, `$2`, ...).
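A short sketch (the function name `greet` is hypothetical):
```bash
greet () {
    echo "Hello, $1!"  # $1 is the first argument passed to the function
}

greet "world"          # prints: Hello, world!
```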
## Command names & Running programs
To run a command, bash uses the name of your command and performs a search for how to execute that command.
In order, bash will check whether it has a function or builtin by that name.
Failing that, it will try to run the name as a program.
If bash finds no way to run your command, it will output an error message.
## The path to a program
When bash needs to run a program, it uses the command name to perform a search.
Bash searches the directories in your PATH environment variable, one by one, until it finds a directory that contains a program with the name of your command.
To run a program that is not installed in a PATH directory, use the path to that program as your command's name.
## Command arguments & Quoting literals
To tell a command what to do, we pass it arguments. In bash, arguments are tokens, that are separated from each other by blank space.
To include blank space in an argument's value, you need to either quote the argument or escape the blank space within.
Failing that, bash will break your argument apart into multiple arguments at its blank space.
Quoting arguments also prevents other symbols in it from being accidentally interpreted as bash code.
## Managing a command's input and output using redirection
By default, new commands inherit the shell's current file descriptors.
We can use redirections to change where a command's input comes from and where its output should go to.
File redirection allows us to stream file descriptors to files.
We can copy file descriptors to make them share a stream. There are also many other more advanced redirection operators.
### Redirections
```bash
[x]>file # make FD x write to file
[x]<file # make FD x read from file
[x]>&y # make FD x write to FD y's stream
[x]<&y # make FD x read from FD y's stream
&>file # make both FD 1 (standard output) & FD 2 (standard error) write to file
[x]>>file # make FD x append to end of file
x>&-, x<&- # close FD x (stream disconnected from FD x)
[x]>&y-, [x]<&y- # replace FD x with FD y
[x]<>file # open FD x for both reading and writing to file
```
## Pathname Expansion (filename pattern [glob] matching)
`*` matches any kind of text (even no text).
`?` matches any single character.
`[characters]` matches any single character in the given set.
`[[:classname:]]` specify class of characters to match.
`{}` expand list of arguments (applies command to each one)
`shopt -s extglob` enables extended globs (patterns)
`+(pattern [| pattern ...])` matches when any of the patterns in the list appears, once or many times over. ("at least one of ...").
`*(pattern [| pattern ...])` matches when any of the patterns in the list appears, once, not at all, or many times over. ("however many of ...").
`?(pattern [| pattern ...])` matches when any of the patterns in the list appears once or not at all. ("maybe one of ...").
`@(pattern [| pattern ...])` matches when any of the patterns in the list appears just once. ("one of ...").
`!(pattern [| pattern ...])` matches only when none of the patterns in the list appear. ("none of ...").
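For instance, with extended globs enabled (file names are hypothetical):
```bash
shopt -s extglob
ls !(*.txt)            # list everything except .txt files
ls ?(draft-)notes.md   # matches notes.md and draft-notes.md
```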
## Command Substitution
With Command Substitution, we effectively write a command within a command, and we ask bash to expand the inner command into its output and use that output as argument data for the main command.
```bash
$(inner_command) # $ --> value-expansion prefix
command !* # !* expands to every word of the previous line except the first (the command name)
command !$ # refers to the last argument of the previous command
sudo !! # !! expands to the entire previous command
```
## Shell Variables
```bash
varname=value # variable assignment
varname="$(command)" # command substitution, MUST be double-quoted
"$varname", "${varname}" # variable expansion, MUST be double-quoted (name substituted w/ variable content)
$$ # PID of the current shell
$# # number of arguments passed
$@ # all arguments passed
${n} # n-th argument passed to the command
$0 # name of the script
$_ # last argument passed to the command
$? # exit status of the last (previous) command
!! # executes last command used (echo !! prints the last command)
```
## Parameter Expansion Modifiers (in double-quotes)
`${parameter#pattern}` removes the shortest string that matches the pattern if it's at the start of the value.
`${parameter##pattern}` removes the longest string that matches the pattern if it's at the start of the value.
`${parameter%pattern}` removes the shortest string that matches the pattern if it's at the end of the value.
`${parameter%%pattern}` removes the longest string that matches the pattern if it's at the end of the value.
`${parameter/pattern/replacement}` replaces the first string that matches the pattern with the replacement.
`${parameter//pattern/replacement}` replaces each string that matches the pattern with the replacement.
`${parameter/#pattern/replacement}` replaces the string that matches the pattern at the beginning of the value with the replacement.
`${parameter/%pattern/replacement}` replaces the string that matches the pattern at the end of the value with the replacement.
`${#parameter}` expands the length of the value (in bytes).
`${parameter:start[:length]}` expands a part of the value, starting at start, length bytes long.
Counts from the end rather than the beginning by using a (space followed by a) negative value.
`${parameter[^|^^|,|,,][pattern]}` expands the transformed value, either upper-casing or lower-casing the first or all characters that match the pattern.
Omit the pattern to match any character.
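A few concrete expansions (the variable `file` is hypothetical):
```bash
file="archive.tar.gz"
echo "${#file}"         # 14 (length of the value)
echo "${file%.gz}"      # archive.tar (shortest match removed from the end)
echo "${file%%.*}"      # archive (longest match removed from the end)
echo "${file/tar/zip}"  # archive.zip.gz (first match replaced)
```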
## Decision Statements
### If Statement
Only the final exit code after executing the entire list is relevant for the branch's evaluation.
```bash
if command_list; then
command_list;
elif command_list; then
command_list;
else command_list;
fi
```
### Test Command
`[[ argument_1 <operator> argument_2 ]]`
### Arithmetic expansion and evaluation
`(( expression ))`
### Comparison Operators
```bash
[[ "$a" -eq "$b" ]] # is equal to
[[ "$a" -ne "$b" ]] # in not equal to
[[ "$a" -gt "$b" ]] # greater than
[[ "$a" -ge "$b" ]] # greater than or equal to
[[ "$a" -lt "$b" ]] # less than
[[ "$a" -le "$b" ]] # less than or equal to
```
### Arithmetic Comparison Operators
```bash
(("$a" > "$b")) # greater than
(("$a" >= "$b")) # greater than or equal to
(("$a" < "$b")) # less than
(("$a" <= "$b")) # less than or equal to
```
### String Comparison Operators
```bash
[ "$a" = "$b" ] # is equal to (whitespace around operator)
[[ $a == z* ]] # True if $a starts with an "z" (pattern matching)
[[ $a == "z*" ]] # True if $a is equal to z* (literal matching)
[ $a == z* ] # File globbing and word splitting take place
[ "$a" == "z*" ] # True if $a is equal to z* (literal matching)
[ "$a" != "$b" ] # is not equal to, pattern matching within a [[ ... ]] construct
[[ "$a" < "$b" ]] # is less than, in ASCII alphabetical order
[ "$a" \< "$b" ] # "<" needs to be escaped within a [ ] construct.
[[ "$a" > "$b" ]] # is greater than, in ASCII alphabetical order
[ "$a" \> "$b" ] # ">" needs to be escaped within a [ ] construct.
```
## Commands short circuit evaluation
```bash
command_1 || command_2 # if command_1 fails executes command_2
command_1 && command_2 # executes command_2 only if command_1 succeeds
```
## Loops
```bash
for var in iterable ; do
# command here
done
```
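For example, a loop over a set of hypothetical log files:
```bash
for file in *.log ; do
    echo "processing $file"
done
```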

1181
docs/css/css.md Normal file

File diff suppressed because it is too large

443
docs/database/mongo-db.md Normal file

@ -0,0 +1,443 @@
# MongoDB
The database is a container of **collections**. The collections are containers of **documents**.
The documents are _schema-less_, that is, they have a dynamic structure that can change between documents in the same collection.
## Data Types
| Type              | Document                                         | Function                |
| ----------------- | ------------------------------------------------ | ----------------------- |
| Text              | `"Text"`                                          |                         |
| Boolean           | `true`                                            |                         |
| Number            | `42`                                              |                         |
| Objectid          | `"_id": {"$oid": "<id>"}`                         | `ObjectId("<id>")`      |
| ISODate           | `"<key>": {"$date": "YYYY-MM-DDThh:mm:ss.sssZ"}`  | `ISODate("YYYY-MM-DD")` |
| Timestamp         |                                                   | `Timestamp(11421532)`   |
| Embedded Document | `{"a": {...}}`                                    |                         |
| Embedded Array    | `{"b": [...]}`                                    |                         |
It's mandatory for each document to have a unique field `_id`.
MongoDB automatically creates an `ObjectId()` if it's not provided.
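A hypothetical document combining some of these types:
```json
{
    "_id": {"$oid": "507f191e810c19729de860ea"},
    "name": "Alice",
    "active": true,
    "age": 42,
    "address": { "city": "Rome" },
    "tags": ["admin", "user"]
}
```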
## Databases & Collections Usage
To create a database it is sufficient to switch to a non-existing one with `use <database>` (implicit creation).
The database is not actually created until a document is inserted.
```sh
show dbs # list all databases
use <database> # use a particular database
show collections # list all collection for the current database
db.dropDatabase() # delete current database
db.createCollection(name, {options}) # explicit collection creation
db.<collection>.insertOne({document}) # implicit collection creation
```
## Operators
```json
/* --- Update operators --- */
{ "$inc": { "<key>": <increment>, ... } } // Increment value
{ "$set": { "<key>": "<value>", ... } } // Set value
{ "$push": { "<key>": "<value>", ... } } // add a value to an array field
/* --- Query Operators --- */
{ "<key>": { "$in": [ "<value_1>", "<value_2>", ...] } } // Membership
{ "<key>": { "$nin": [ "<value_1>", "<value_2>", ...] } } // Membership
{ "<key>": { "$exists": true } } // Field Exists
/* --- Comparison Operators (DEFAULT: $eq) --- */
{ "<key>": { "$gt": "<value>" }} // >
{ "<key>": { "$gte": "<value>" }} // >=
{ "<key>": { "$lt": "<value>" }} // <
{ "<key>": { "$lte": "<value>" }} // <=
{ "<key>": { "$eq": "<value>" }} // ==
{ "<key>": { "$ne": "<value>" }} // !=
/* --- Logic Operators (DEFAULT $and) --- */
{ "$and": [ { <statement> }, ...] }
{ "$or": [ { <statement> }, ...] }
{ "$nor": [ { <statement> }, ...] }
{ "$not": { <statement> } }
```
### Expressive Query Operator
`$<key>` is used to access the value of the field dynamically
```json
{ "$expr": { <expression> } } // aggregation expression, variables, conditional expressions
{ "$expr": { "$comparison_operator": [ "$<key>", "$<key>" ] } } // compare field values
```
## CRUD Operations
### Create
It's possible to insert a single document with the command `insertOne()` or multiple documents with `insertMany()`.
Insertion results:
- error -> rollback
- success -> the entire document gets saved
```sh
# explicit collection creation, all options are optional
db.createCollection( <name>,
{
capped: <boolean>,
autoIndexId: <boolean>,
size: <number>,
max: <number>,
storageEngine: <document>,
validator: <document>,
validationLevel: <string>,
validationAction: <string>,
indexOptionDefaults: <document>,
viewOn: <string>,
pipeline: <pipeline>,
collation: <document>,
writeConcern: <document>
}
)
db.createCollection("name", { capped: true, size: max_bytes, max: max_docs_num } ) # creation of a capped collection
# SIZE: int - will be rounded to a multiple of 256
# implicit creation at doc insertion
db.<collection>.insertOne({ document }, options) # insert a document in a collection
db.<collection>.insertMany([ { document }, { document }, ... ], options) # insert multiple docs
db.<collection>.insertMany([ { document }, { document } ] , { "ordered": false }) # allow unordered insertion, only the documents that cause errors won't be inserted
```
**NOTE**: If `insertMany()` fails the already inserted documents are not rolled back but all the successive ones (even the correct ones) will not be inserted.
### Read
```sh
db.<collection>.findOne() # find only one document
db.<collection>.find(filter) # show selected documents
db.<collection>.find(filter, {"<key>": 1}) # show selected values form documents (1 or true => show, 0 or false => don't show, cant mix 0 and 1)
db.<collection>.find(filter, {_id: 0, "<key>": 1}) # only _id can be set to 0 with other keys at 1
db.<collection>.find().pretty() # show documents formatted
db.<collection>.find().limit(n) # show n documents
db.<collection>.find().limit(n).skip(k) # show n documents skipping k docs
db.<collection>.find().count() # number of found docs
db.<collection>.find().sort({key1: 1, ... , key_n: -1}) # show documents sorted by specified keys in ascending (1) or descending (-1) order
# GeoJSON - https://docs.mongodb.com/manual/reference/operator/query/near/index.html
db.<collection>.find(
{
<location field>: {
$near: {
$geometry: { type: "Point", coordinates: [ <longitude> , <latitude> ] },
$maxDistance: <distance in meters>,
$minDistance: <distance in meters>
}
}
}
)
db.<collection>.find().hint( { "<key>": 1 } ) # specify the index
db.<collection>.find().hint( "index-name" ) # specify the index using the index name
db.<collection>.find().hint( { $natural : 1 } ) # force the query to perform a forwards collection scan
db.<collection>.find().hint( { $natural : -1 } ) # force the query to perform a reverse collection scan
```
### Update
[Update Operators](https://docs.mongodb.com/manual/reference/operator/update/ "Update Operators Documentation")
```sh
db.<collection>.updateOne(filter, { $set: {"<key>": value} }) # add or modify values
db.<collection>.updateOne(filter, { $set: {"<key>": value} }, {upsert: true}) # add or modify values, if the attribute doesn't exist create it
db.<collection>.updateMany(filter, update)
db.<collection>.replaceOne(filter, { document }, options)
```
### Delete
```sh
db.<collection>.deleteOne(filter, options)
db.<collection>.deleteMany(filter, options)
db.<collection>.drop() # delete whole collection
db.dropDatabase() # delete entire database
```
## [Mongoimport](https://docs.mongodb.com/database-tools/mongoimport/)
Utility to import all docs into a specified collection.
If the collection already exists, `--drop` deletes it before re-importing the data.
**WARNING**: CSV separators must be commas (`,`)
```sh
mongoimport <options> <connection-string> <file>
--uri=<connectionString>
--host=<hostname><:port>, -h=<hostname><:port>
--username=<username>, -u=<username>
--password=<password>, -p=<password>
--collection=<collection>, -c=<collection> # Specifies the collection to import.
--ssl # Enables connection to a mongod or mongos that has TLS/SSL support enabled.
--type <json|csv|tsv> # Specifies the file type to import. DEFAULT: json
--drop # drops the collection before importing the data from the input.
--headerline # if file is CSV and first line is header
--jsonarray # Accepts the import of data expressed with multiple MongoDB documents within a single json array. MAX 16 MB
```
## [Mongoexport](https://docs.mongodb.com/database-tools/mongoexport/)
Utility to export documents into a specified file.
```sh
mongoexport --collection=<collection> <options> <connection-string>
--uri=<connectionString>
--host=<hostname><:port>, -h=<hostname><:port>
--username=<username>, -u=<username>
--password=<password>, -p=<password>
--db=<database>, -d=<database>
--collection=<collection>, -c=<collection>
--type=<json|csv>
--out=<file>, -o=<file> #Specifies a file to write the export to. DEFAULT: stdout
--jsonArray # Write the entire contents of the export as a single json array.
--pretty # Outputs documents in a pretty-printed format JSON.
--skip=<number>
--limit=<number> # Specifies a maximum number of documents to include in the export
--sort=<JSON> # Specifies an ordering for exported results
```
## [Mongodump][mongodump_docs] & [Mongorestore][mongorestore_docs]
`mongodump` exports the content of a running server into `.bson` files.
`mongorestore` Restore backups generated with `mongodump` to a running server.
[mongodump_docs]: https://docs.mongodb.com/database-tools/mongodump/
[mongorestore_docs]: https://docs.mongodb.com/database-tools/mongorestore/
## Relations
**Nested / Embedded Documents**:
- Group data logically
- Optimal for data belonging together that do not overlap
- Should avoid nesting too deep or making too long arrays (max doc size 16 mb)
```json
{
"_id": Objectid()
"<key>": "value"
"<key>": "value"
"innerDocument": {
"<key>": "value"
"<key>": "value"
}
}
```
**References**:
- Divide data between collections
- Optimal for related but shared data used in relations or stand-alone
- Allows to overtake nesting and size limits
NoSQL databases do not have relations and references. It's the app that has to handle them.
```json
{
"<key>": "value"
"references": ["id1", "id2"]
}
// referenced
{
"_id": "id1"
"<key>": "value"
}
```
## [Indexes](https://docs.mongodb.com/manual/indexes/ "Index Documentation")
Indexes support the efficient execution of queries in MongoDB.
Without indexes, MongoDB must perform a _collection scan_ (_COLLSCAN_): scan every document in a collection, to select those documents that match the query statement.
If an appropriate index exists for a query, MongoDB can use the index to limit the number of documents it must inspect (_IXSCAN_).
Indexes are special data structures that store a small portion of the collection's data set in an easy to traverse form. The index stores the value of a specific field or set of fields, ordered by the value of the field. The ordering of the index entries supports efficient equality matches and range-based query operations. In addition, MongoDB can return sorted results by using the ordering in the index.
Indexes _slow down writing operations_ since the index must be updated at every writing.
![IXSCAN](../img/mongodb_ixscan.png ".find() using an index")
### [Index Types](https://docs.mongodb.com/manual/indexes/#index-types)
- **Normal**: Fields sorted by name
- **Compound**: Multiple Fields sorted by name
- **Multikey**: values of sorted arrays
- **Text**: Ordered text fragments
- **Geospatial**: ordered geodata
**Sparse** indexes only contain entries for documents that have the indexed field, even if the index field contains a null value. The index skips over any document that is missing the indexed field.
### Diagnosis and query planning
```sh
db.<collection>.find({...}).explain() # explain won't accept other functions
db.explain().<collection>.find({...}) # can accept other functions
db.explain("executionStats").<collection>.find({...}) # more info
```
### Index Creation
```sh
db.<collection>.createIndex( <key and index type specification>, <options> )
db.<collection>.createIndex( { "<key>": <type>, "<key>": <type>, ... } ) # normal, compound or multikey (field is array) index
db.<collection>.createIndex( { "<key>": "text" } ) # text index
db.<collection>.createIndex( { "<key>": 2dsphere } ) # geospatial 2dsphere index
# sparse index
db.<collection>.createIndex(
{ "<key>": <type>, "<key>": <type>, ... },
{ sparse: true } # sparse option
)
# custom name
db.<collection>.createIndex(
{ <key and index type specification>, },
{ name: "index-name" } # name option
)
```
### [Index Management](https://docs.mongodb.com/manual/tutorial/manage-indexes/)
```sh
# view all db indexes
db.getCollectionNames().forEach(function(collection) {
indexes = db[collection].getIndexes();
print("Indexes for " + collection + ":");
printjson(indexes);
});
db.<collection>.getIndexes() # view collection's index
db.<collection>.dropIndexes() # drop all indexes
db.<collection>.dropIndex( { "index-name": 1 } ) # drop a specific index
```
## Database Profiling
Profiling Levels:
- `0`: no profiling
- `1`: data on operations slower than `slowms`
- `2`: data on all operations
Logs are saved in the `system.profile` _capped_ collection.
```sh
db.setProfilingLevel(n) # set profiler level
db.setProfilingLevel(1, { slowms: <ms> })
db.getProfilingStatus() # check profiler status
db.system.profile.find().limit(n).sort( {} ).pretty() # see logs
db.system.profile.find().limit(n).sort( { ts : -1 } ).pretty() # sort by decreasing timestamp
```
## Roles and permissions
**Authentication**: identifies valid users
**Authorization**: identifies what a user can do
- **userAdminAnyDatabase**: can admin every db in the instance (role must be created on admin db)
- **userAdmin**: can admin the specific db in which it is created
- **readWrite**: can read and write in the specific db in which it is created
- **read**: can read the specific db in which it is created
```sh
# create users in the current MongoDB instance
db.createUser(
{
user: "dbAdmin",
pwd: "password",
roles:[
{
role: "userAdminAnyDatabase",
db:"admin"
}
]
},
{
user: "username",
pwd: "password",
roles:[
{
role: "role",
db: "database"
}
]
}
)
```
## Sharding
**Sharding** is a MongoDB concept through which big datasets are subdivided into smaller sets and distributed across multiple instances of MongoDB.
It's a technique used to improve the performance of large queries on large quantities of data that require a lot of resources from the server.
A collection containing several documents is split into smaller collections (_shards_).
Shards are implemented via clusters, which are simply groups of MongoDB instances.
Shard components are:
- Shards (min 2), instances of MongoDB that contain a subset of the data
- A config server, instance of MongoDB which contains metadata on the cluster, that is the set of instances that have the shard data.
- A router (or `mongos`), instance of MongoDB used to redirect the user instructions from the client to the correct server.
![Shared Cluster](../img/mongodb_shared-cluster.png "Components of a shared cluster")
### [Replica set](https://docs.mongodb.com/manual/replication/)
A **replica set** in MongoDB is a group of `mongod` processes that maintain the `same dataset`. Replica sets provide redundancy and high availability, and are the basis for all production deployments.
## Aggregations
Sequence of operations applied to a collection as a _pipeline_ to get a result: `db.collection.aggregate(pipeline, options)`.
[Aggregations Stages][aggeregation_stages_docs]:
- `$lookup`: Right Join
- `$match`: Where
- `$sort`: Order By
- `$project`: Select \*
- ...
[aggeregation_stages_docs]: https://docs.mongodb.com/manual/reference/operator/aggregation-pipeline/
Example:
```sh
db.collection.aggregate([
{
$lookup: {
from: <collection to join>,
localField: <field from the input documents>,
foreignField: <field from the documents of the "from" collection>,
as: <output array field>
}
},
{ $match: { <query> } },
{ $sort: { ... } },
{ $project: { ... } },
{ ... }
])
```

109
docs/database/redis.md Normal file

@ -0,0 +1,109 @@
# [Redis](https://redis.io/)
Redis is in the family of databases called **key-value stores**.
The essence of a key-value store is the ability to store some data, called a value, inside a key. This data can later be retrieved only if we know the exact key used to store it.
Redis is often called a *data structure* server because it has an outer key-value shell, but each value can contain a complex data structure, such as a string, a list, a hash, or ordered data structures called sorted sets, as well as probabilistic data structures like *HyperLogLog*.
## [Redis Commands](https://redis.io/commands)
### Server Startup
```bash
redis-server # start the server
redis-cli
```
### [Key-Value Pairs](https://redis.io/commands#generic)
```sh
SET <key> <value> [ EX <seconds> ] # store a key-value pair, TTL optional
GET <key> # read a key content
EXISTS <key> # check if a key exists
DEL <key> # delete a key-value pair
INCR <key> # atomically increment a number stored at a given key
INCRBY <key> <amount> # increment the number contained inside a key by a specific amount
DECR <key>
DECRBY <key> <amount>
# re-setting the key will make it permanent (TTL -1)
EXPIRE <key> <seconds> # make the key expire after <second> seconds
TTL <key> # see remaining seconds before expiry
PEXPIRE <key> <milliseconds> # make the key expire after <milliseconds> milliseconds
PTTL <key> # see remaining milli-seconds before expiry
PERSIST <key> # make the key permanent
```
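A tiny hypothetical session combining some of these commands:
```sh
SET user:1:name "Alice" EX 60  # store a value under the key, expiring after 60 seconds
GET user:1:name                # "Alice"
TTL user:1:name                # remaining seconds before expiry
DEL user:1:name                # remove the key immediately
```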
### [Lists](https://redis.io/commands#list)
A list is a series of ordered values.
```sh
RPUSH <key> <value1> <value2> ... # add one or more values to the end of the list
LPUSH <key> <value1> <value2> ... # add one or more values to the start of a list
LLEN <key> # number of items in the list
LRANGE <key> <start_index> <end_index> # return a subset of the list, end index included. Negative indexes count backwards from the end
LPOP <key> # remove and return the first item from the list
RPOP <key> # remove and return the last item from the list
```
### [Sets](https://redis.io/commands#set)
A set is similar to a list, except it does not have a specific order and each element may only appear once.
```sh
SADD <key> <value1> <value2> ... # add one or more values to the set (return 0 if values are already inside)
SREM <key> <value> # remove the given member from the set, return 1 or 0 to signal if the member was actually there or not.
SPOP <key> [<count>] # remove and return one or more random values from the set
SISMEMBER <key> <value> # test if value is in the set
SMEMBERS <key> # list of all set items
SUNION <key1> <key2> ... # combine two or more sets and return the list of all elements.
```
### [Sorted Sets](https://redis.io/commands#sorted_set)
Sets are a very handy data type, but as they are unsorted they don't work well for a number of problems. This is why Redis 1.2 introduced Sorted Sets.
A sorted set is similar to a regular set, but now each value has an associated score. This score is used to sort the elements in the set.
```sh
ZADD <key> <score> <value> # add a value with its score
ZRANGE <key> <start_index> <end_index> # return a subset of the sortedSet
...
```
### [Hashes](https://redis.io/commands#hash)
Hashes are maps between string fields and string values, so they are the perfect data type to represent objects.
```sh
HSET <key> <field> <value> [ <field> <value> ... ] # set the string of a hash field
HSETNX <key> <field> <value> # set the value of a hash field, only if the field does not exist
HEXISTS <key> <field> # determine if a hash field exists
HLEN <key> # get the number of fields in a hash
HSTRLEN <key> <field> # get the length of the value of a hash field
HGETALL <key> # get all fields and values in a hash
HGET <key> <field> # get data on a single field
HKEYS <key> # get all the fields in a hash
HVALS <key> # get all the values in a hash
HDEL <key> <field_1> <field_2> ... # delete one or more field hashes
HMGET <key> <field> [<field> ...] # get the values of all the given hash fields
HMSET <key> <field> <value> [<field> <value> ...] # set multiple hash fields to multiple values
HINCRBY <key> <field> <amount> # increment the integer value of a hash field by the given number
HINCRBYFLOAT <key> <field> <amount> # increment the float value of a hash field by the given amount
HSCAN <key> <cursor> [MATCH <pattern>] [COUNT <count>] # incrementally iterate hash fields and associated values
```

281
docs/database/sql.md Normal file
View file

@ -0,0 +1,281 @@
# SQL
`mysql -u root`: start MySQL as the root user
## DDL
```sql
show databases; -- list the databases
CREATE DATABASE <database>; -- database creation
use <database_name>; -- switch to a specific database
exit; -- exit mysql
show tables; -- list the tables of the database
-- INLINE COMMENT
/* MULTI-LINE COMMENT */
```
### Table Creation
```sql
CREATE TABLE <table_name>
(<field_name> <field_type> <option>,
...);
```
### PRIMARY KEY from multiple fields
```sql
CREATE TABLE <table_name>(
...,
PRIMARY KEY (<field1>, ...),
);
```
### Table Field Options
```sql
PRIMARY KEY -- marks primary key as field option
NOT NULL -- marks a necessary field
REFERENCES <table> (<field>) -- adds foreign key reference
UNIQUE (<field>) -- set field as unique (MySQL)
<field> UNIQUE -- T-SQL
```
### Table Modification
```sql
ALTER TABLE <table>
ADD PRIMARY KEY (<field>, ...), -- definition of PK after table creation
ADD <field_name> <field_type> <option>; -- addition of a new field, field will have no value in the table
ALTER TABLE <table>
CHANGE <field_name> <new_name> <new_type>;
ALTER COLUMN <field_name> <new_type>; -- T-SQL (changes the type only, renaming uses sp_rename)
ALTER TABLE <table>
DROP <field>;
ALTER TABLE <table>
ADD FOREIGN KEY (<field>) REFERENCES <TABLE> (<FIELD>);
```
## DML
### Data Insertion
```sql
INSERT INTO <table> (field_1, ...) VALUES (value_1, ...), (value_1, ...);
INSERT INTO <table> VALUES (value_1, ...), (value_1, ...); -- field order MUST respect the table's column order
```
### Data Update
```sql
UPDATE <table> SET <field> = <value>, <field> = <value>, ... WHERE <condition>;
```
### Data Elimination
```sql
DELETE FROM <table> WHERE <condition>
DELETE FROM <table> -- empty the table
```
## Data Selection
`*` selects all fields
```sql
SELECT * FROM <table>; -- show table contents
SHOW columns FROM <table>; -- show table columns
DESCRIBE <table>; -- show the table structure
```
### Alias
```sql
SELECT <field> as <alias>; -- shows <field/function> with name <alias>
```
### Conditional Selection
```sql
SELECT * FROM <table> WHERE <condition>; -- shows elements that satisfy the condition
AND, OR, NOT -- logic connectors
SELECT * FROM <table> WHERE <field> Between <value_1> AND <value_2>;
```
### Ordering
```sql
SELECT * FROM <table> ORDER BY <field>, ...; -- shows the table ordered by <field>
SELECT * FROM <table> ORDER BY <field>, ... DESC; -- shows the table ordered by <field>, decreasing order
SELECT * FROM <table> ORDER BY <field>, ... LIMIT n; -- shows the table ordered by <field>, shows n items
SELECT TOP(n) * FROM <table> ORDER BY <field>, ...; -- T-SQL
```
## Grouping
```sql
SELECT * FROM <table> GROUP BY <field>;
SELECT * FROM <table> GROUP BY <field> HAVING <condition>;
SELECT DISTINCT <field> FROM <table>; -- shows elements without repetitions
```
### Character Search in Values
`%` matches zero or more characters
```sql
SELECT * FROM <table> WHERE <field> LIKE '<char>%'; -- selects items in <field> that start with <char>
SELECT * FROM <table> WHERE <field> LIKE '%<char>'; -- selects items in <field> that end with <char>
SELECT * FROM <table> WHERE <field> LIKE '%<char>%'; -- selects items in <field> that contain <char>
SELECT * FROM <table> WHERE <field> NOT LIKE '%<char>%'; -- selects items in <field> that do not contain <char>
```
### Selection from multiple tables
```sql
SELECT a.<field>, b.<field> FROM <table> AS a, <table> AS b
WHERE a.<field> ...;
```
## Functions
```sql
SELECT COUNT(*) FROM <table>; -- count of rows in <table>
SELECT MIN(<field>) FROM <table>; -- min value of <field>
SELECT MAX(<field>) FROM <table>; -- max value of <field>
SELECT AVG(<field>) FROM <table>; -- mean of the values of <field>
ALL (SELECT ...)
ANY (SELECT ...)
```
## Nested Queries
```sql
SELECT * FROM <table> WHERE EXISTS (SELECT * FROM <table>) -- selected field existing in subquery
SELECT * FROM <table> WHERE NOT EXISTS (SELECT * FROM <table>) -- selected field not existing in subquery
```
## New table from data
Create new table with necessary fields:
```sql
CREATE TABLE <table> (
    <field_name> <field_type> <option>,
    ...
);
```
Fill fields with data from table:
```sql
INSERT INTO <table>
SELECT <fields> FROM <TABLE> WHERE <condition>;
```
## Join
```sql
SELECT * FROM <table1> JOIN <table2> ON <table1>.<field> = <table2>.<field>;
SELECT * FROM <table1> LEFT JOIN <table2> ON <condition>;
SELECT * FROM <table1> RIGHT JOIN <table2> ON <condition>
```
[Inner Join, Left Join, Right Join, Full Outer Join](https://www.diffen.com/difference/Inner_Join_vs_Outer_Join)
## Multiple Join
```sql
SELECT * FROM <table1>
JOIN <table2> ON <table1>.<field> = <table2>.<field>
JOIN <table3> ON <table2>.<field> = <table3>.<field>;
```
[char, nchar, varchar, nvarchar](https://stackoverflow.com/questions/176514/what-is-the-difference-between-char-nchar-varchar-and-nvarchar-in-sql-server)
---
## T-SQL (MSSQL Server)
### T-SQL Insert From table
```sql
USE [<db_name>]
SET IDENTITY_INSERT [<destination_table>] ON
INSERT INTO <table> (field_1, ...)
SELECT (field_1, ...) FROM <source_table>
SET IDENTITY_INSERT [<destination_table>] OFF
```
### T-SQL Parametric Query
```sql
-- variable declaration
DECLARE @var_name <type>
-- init variable (input parameter)
SET @var_name = <value>
-- use in query (memorize data)
SELECT @var_name = COUNT(*) -- query won't show results in the "table view" since param is used in SELECT
FROM <table> ...
-- display message (query won't show results in the "table view")
PRINT 'Text: ' + @var_name
PRINT 'Text: ' + CONVERT(type, @var_name) -- convert data before printing
GO
```
### T-SQL View
A view represents a virtual table. Join multiple tables in a view and use the view to present the data as if the data were coming from a single table.
```sql
CREATE VIEW <name> AS
SELECT * FROM <table> ...
```
### T-SQL Stored Procedure
[Stored Procedure How-To](https://docs.microsoft.com/en-us/sql/relational-databases/stored-procedures/create-a-stored-procedure "Create a Stored Procedure - Microsoft Docs")
[T-SQL Stored Procedure](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-procedure-transact-sql)
Stored Procedure Definition:
```sql
CREATE PROCEDURE <Procedure_Name>
-- Add the parameters for the stored procedure here
<@Param1> <Datatype_For_Param1> = <Default_Value_For_Param1>,
<@Param2> <Datatype_For_Param2>
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
SET NOCOUNT ON; -- don't return number of selected rows
-- Insert statements for procedure here
SELECT ...
END
GO
```
Stored Procedure call in query:
```sql
USE <database>
GO
-- Stored Procedure call
EXECUTE <Procedure_Name>
-- or
EXEC <Procedure_Name>
```

2925
docs/dotnet/C#/C#.md Normal file

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,195 @@
# [Async Programming](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/)
## Task Asynchronous Programming Model ([TAP][tap_docs])
[tap_docs]: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/task-asynchronous-programming-model
It's possible to avoid performance bottlenecks and enhance the overall responsiveness of an application by using asynchronous programming.
However, traditional techniques for writing asynchronous applications can be complicated, making them difficult to write, debug, and maintain.
C# 5 introduced a simplified approach, **async programming**, that leverages asynchronous support in the .NET Runtime.
The compiler does the difficult work that the developer used to do, and the application retains a logical structure that resembles synchronous code.
In performance-sensitive code, asynchronous APIs are useful, because instead of wasting resources by forcing a thread to sit and wait for I/O to complete, a thread can kick off the work and then do something else productive in the meantime.
The `async` and `await` keywords in C# are the heart of async programming.
```cs
public async Task<TResult> MethodAsync()
{
    Task<TResult> resultTask = obj.OtherMethodAsync();
    DoIndependentWork();
    TResult result = await resultTask;

    // if there is no work to be done before awaiting:
    // TResult result = await obj.OtherMethodAsync();

    return result;
}
```
Characteristics of Async Methods:
- The method signature includes an `async` modifier.
- The name of an async method, by convention, ends with an "Async" suffix.
- The return type is one of the following types:
- `Task<TResult>` if the method has a return statement in which the operand has type `TResult`.
- `Task` if the method has no return statement or has a return statement with no operand.
- `void` if it's an async event handler.
- Any other type that has a `GetAwaiter` method (starting with C# 7.0).
- Starting with C# 8.0, `IAsyncEnumerable<T>`, for an async method that returns an async stream.
The method usually includes at least one `await` expression, which marks a point where the method can't continue until the awaited asynchronous operation is complete.
In the meantime, the method is suspended, and control returns to the method's caller.
### Threads
Async methods are intended to be non-blocking operations. An `await` expression in an async method doesn't block the current thread while the awaited task is running. Instead, the expression signs up the rest of the method as a continuation and returns control to the caller of the async method.
The `async` and `await` keywords don't cause additional threads to be created. Async methods don't require multithreading because an async method doesn't run on its own thread. The method runs on the current synchronization context and uses time on the thread only when the method is active. It's possible to use `Task.Run` to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.
The async-based approach to asynchronous programming is preferable to existing approaches in almost every case. In particular, this approach is better than the `BackgroundWorker` class for I/O-bound operations because the code is simpler and there is no need to guard against race conditions.
In combination with the `Task.Run` method, async programming is better than `BackgroundWorker` for CPU-bound operations because async programming separates the coordination details of running the code from the work that `Task.Run` transfers to the thread pool.
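As a minimal sketch (the `ComputePrimes` helper is assumed for illustration), `Task.Run` hands the CPU-bound part to the thread pool while the calling thread stays free:
```cs
// hypothetical CPU-bound helper offloaded to the thread pool
public async Task<int> CountPrimesAsync(int limit)
{
    // ComputePrimes is an assumed synchronous, CPU-bound method
    int count = await Task.Run(() => ComputePrimes(limit));
    return count;
}
```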
### Naming Convention
By convention, methods that return commonly awaitable types (for example, `Task`, `Task<T>`, `ValueTask`, `ValueTask<T>`) should have names that end with *Async*. Methods that start an asynchronous operation but do not return an awaitable type should not have names that end with *Async*, but may start with "Begin", "Start", or some other verb to suggest this method does not return or throw the result of the operation.
## [Async Return Types](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/async-return-types)
### `Task` return type
Async methods that don't contain a return statement or that contain a return statement that doesn't return an operand usually have a return type of `Task`. Such methods return `void` if they run synchronously.
If a `Task` return type is used for an async method, a calling method can use an `await` operator to suspend the caller's completion until the called async method has finished.
### `Task<TResult>` return type
The `Task<TResult>` return type is used for an async method that contains a return statement in which the operand is `TResult`.
The `Task<TResult>.Result` property is a **blocking property**. If it's accessed before its task is finished, the thread that's currently active is blocked until the task completes and the value is available.
In most cases, access the value by using `await` instead of accessing the property directly.
### `void` return type
The `void` return type is used in asynchronous event handlers, which require a `void` return type. For methods other than event handlers that don't return a value, it's best to return a `Task` instead, because an async method that returns `void` can't be awaited.
Any caller of such a method must continue to completion without waiting for the called async method to finish. The caller must be independent of any values or exceptions that the async method generates.
The caller of a void-returning async method *can't catch exceptions thrown from the method*, and such unhandled exceptions are likely to cause the application to fail.
If a method that returns a `Task` or `Task<TResult>` throws an exception, the exception is stored in the returned task. The exception is re-thrown when the task is awaited.
Therefore, make sure that any async method that can produce an exception has a return type of `Task` or `Task<TResult>` and that calls to the method are awaited.
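A short sketch of this behavior (method names are illustrative): the stored exception only surfaces at the `await`, not when the async method is invoked.
```cs
async Task FailingOperationAsync()
{
    await Task.Delay(10);
    throw new InvalidOperationException("something went wrong");
}

async Task CallerAsync()
{
    Task pending = FailingOperationAsync(); // no exception thrown here, the returned task is faulted instead

    try
    {
        await pending; // the stored exception is re-thrown here
    }
    catch (InvalidOperationException ex)
    {
        Console.WriteLine(ex.Message);
    }
}
```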
### Generalized async return types and `ValueTask<TResult>`
Starting with C# 7.0, an async method can return any type that has an accessible `GetAwaiter` method.
Because `Task` and `Task<TResult>` are **reference types**, memory allocation in performance-critical paths, particularly when allocations occur in tight loops, can adversely affect performance. Support for generalized return types means that it's possible to return a lightweight **value type** instead of a reference type to avoid additional memory allocations.
.NET provides the `System.Threading.Tasks.ValueTask<TResult>` structure as a lightweight implementation of a generalized task-returning value. To use the `System.Threading.Tasks.ValueTask<TResult>` type, add the **System.Threading.Tasks.Extensions** NuGet package to the project.
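As a sketch (the cache field and `LoadFromDbAsync` are assumed for illustration), returning `ValueTask<TResult>` avoids allocating a `Task` when the result is already available synchronously:
```cs
private readonly Dictionary<string, int> _cache = new();

public ValueTask<int> GetValueAsync(string key)
{
    if (_cache.TryGetValue(key, out int cached))
        return new ValueTask<int>(cached);           // synchronous path, no Task allocation

    return new ValueTask<int>(LoadFromDbAsync(key)); // wraps the asynchronous Task
}

// assumed asynchronous lookup used by the slow path
private async Task<int> LoadFromDbAsync(string key) { await Task.Delay(50); return 42; }
```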
### Async Composition
```cs
public async Task DoOperationsConcurrentlyAsync()
{
Task[] tasks = new Task[3];
tasks[0] = DoOperation0Async();
tasks[1] = DoOperation1Async();
tasks[2] = DoOperation2Async();
// At this point, all three tasks are running at the same time.
// Now, we await them all.
await Task.WhenAll(tasks);
}
public async Task<int> GetFirstToRespondAsync()
{
// Call two web services; take the first response.
Task<int>[] tasks = new[] { WebService1Async(), WebService2Async() };
// Await for the first one to respond.
Task<int> firstTask = await Task.WhenAny(tasks);
// Return the result.
return await firstTask;
}
```
### Execution & Synchronization Context
When the program's execution reaches an `await` expression for an operation that doesn't complete immediately, the code generated for that `await` will ensure that the current execution context has been captured.
When the asynchronous operation completes, the remainder of the method will be executed through the execution context.
The execution context handles certain contextual information that needs to flow when one method invokes another (even when it does so indirectly)
While all `await` expressions capture the *execution context*, the decision of whether to flow *synchronization context* as well is controlled by the type being awaited.
Sometimes, it's better to avoid getting the synchronization context involved.
If work starting from a UI thread is performed, but there is no particular need to remain on that thread, scheduling every continuation through the synchronization context is unnecessary overhead.
If the asynchronous operation is a `Task`, `Task<T>`, `ValueTask` or `ValueTask<T>`, it's possible to discard the *synchronization context* by calling the `ConfigureAwait(false)`.
This returns a different representation of the asynchronous operation, and if this is awaited instead of the original task, it will ignore the current `SynchronizationContext` if there is one.
```cs
private async Task DownloadFileAsync(string fileName)
{
await OperationAsync(fileName).ConfigureAwait(false); // discarding original context
}
```
When writing libraries, in most cases it's best to call `ConfigureAwait(false)` anywhere `await` is used.
This is because continuing via the synchronization context can be expensive, and in some cases it can introduce the possibility of deadlock occurring.
The only exceptions are when doing something that positively requires the synchronization context to be preserved, or when it's known for certain that the library will only ever be used in application frameworks that do not set up a synchronization context.
(ASP.NET Core applications do not use synchronization contexts, so it generally doesn't matter whether or not `ConfigureAwait(false)` is called in those.)
## Error Handling
### Argument Validation
Inside an `async` method, the compiler treats all exceptions in the same way: none are allowed to pass up the stack as in a normal method, and they will always be reported by faulting the returned task.
This is true even of exceptions thrown before the first `await`.
If the calling method immediately calls `await` on the returned task, this won't matter much: it will see the exception in any case.
But some code may choose not to await immediately, in which case it won't see the argument exception until later.
```cs
async Task<string> MethodWithValidationAsync(string argument)
{
if(string.IsNullOrEmpty(argument))
{
throw new ArgumentNullException(nameof(argument)); // will be thrown on await of MethodWithValidationAsync
}
// [...]
return await LongOperationAsync();
}
```
In cases where you want to throw this kind of exception straightaway, the usual technique is to write a normal method that validates the arguments before calling an async method that does the
work, and to make that second method either private or local.
```cs
// not marked with async, exception propagate directly to caller
public static Task<string> MethodWithValidationAsync(string argument)
{
if(string.IsNullOrEmpty(argument))
{
throw new ArgumentNullException(nameof(argument)); // thrown immediately
}
return ActualMethodAsync(argument); // pass up task of inner method
}
private static async Task<string> ActualMethodAsync(string argument)
{
// [...]
return await LongOperationAsync();
}
```
**NOTE**: `await` extracts only the first exception of an `AggregateException`, this can cause the loss of information if a task (or group of tasks) has more than one error.

View file

@ -0,0 +1,296 @@
# C# Collections
## Arrays
An array is an object that contains multiple elements of a particular type. The number of elements is fixed for the lifetime of the array, so it must be specified when the array is created.
An array type is always a reference type, regardless of the element type. Nonetheless, the choice between reference type and value type elements makes a significant difference in an array's behavior.
```cs
type[] array = new type[dimension];
type array[] = new type[dimension]; //invalid
type[] array = {value1, value2, ..., valueN}; // initializer
var array = new type[] {value1, value2, ..., valueN}; // initializer (var type needs new operator)
var array = new[] {value1, value2, ..., valueN}; // initializer w/ element type inference (var type needs new operator), can be used as method arg
array[index]; // value access
array[index] = value; // value assignment
array.Length; // dimension of the array
// from IEnumerable<T>
array.OfType<Type>(); // filter array based on type, returns IEnumerable<Type>
```
### [Array Methods](https://docs.microsoft.com/en-us/dotnet/api/system.array?view=netcore-3.1#methods)
```cs
// overloaded search methods
Array.IndexOf(array, item); // return index of searched item in passed array
Array.LastIndexOf(array, item); // return index of searched item staring from the end of the array
Array.FindIndex(array, Predicate<T>) // returns the index of the first item matching the predicate (can be lambda function)
Array.FindLastIndex(array, Predicate<T>) // returns the index of the last item matching the predicate (can be lambda function)
Array.Find(array, Predicate<T>) // returns the value of the first item matching the predicate (can be lambda function)
Array.FindLast(array, Predicate<T>) // returns the value of the last item matching the predicate (can be lambda function)
Array.FindAll(array, Predicate<T>) // returns array of all items matching the predicate (can be lambda function)
Array.BinarySearch(array, value) // Searches a SORTED array for a value, using a binary search algorithm; returns the index of the found item
Array.Sort(array);
Array.Reverse(array); // reverses the order of array elements
Array.Clear(array, start_index, count); // removes references to count elements starting at start_index. Array length unchanged (cleared elements are set to the element type's default value)
Array.Resize(ref array, target_dimension); //expands or shrinks the array dimension. Shrinking drops trailing values. Array passed by reference.
// Copies elements from an Array starting at the specified index and pastes them to another Array starting at the specified destination index.
Array.Copy(sourceArray, sourceStartIndex, destinationArray, destinationStartIndex, numItemsToCopy);
// Copies elements from an Array starting at the first element and pastes them into another Array starting at the first element.
Array.Copy(sourceArray, destinationArray, numItemsToCopy);
array.Clone(); // returns a shallow copy of the array
```
### Multidimensional Arrays
C# supports two multidimensional array forms: [jagged][jagg_arrays] arrays and [rectangular][rect_arrays] arrays (*matrices*).
[jagg_arrays]: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/arrays/jagged-arrays
[rect_arrays]: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/arrays/multidimensional-arrays
```cs
//specify first dimension
type[][] jagged = new type[][]
{
new[] {item1, item2, item3},
new[] {item1},
new[] {item1, item2},
...
}
// shorthand
type[][] jagged =
{
new[] {item1, item2, item3},
new[] {item1},
new[] {item1, item2},
...
}
// matrices
type[,] matrix = new type[n, m]; // n * m matrix
type[,] matrix = {{}, {}, {}, ...}; // {} for each row to initialize
type[, ,] tensor = new type[n, m, o] // n * m * o tensor
matrix.Length; // total number of elements (n * m)
matrix.GetLength(int dimension); // get the size of a particular direction
// row = 0, column = 1, ...
```
## [Lists](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1)
`List<T>` stores sequences of elements. It can grow or shrink, allowing to add or remove elements.
```cs
using System.Collections.Generics;
List<T> list = new List<T>();
List<T> list = new List<T> {item_1, ...}; // initializer usable since List<T> implements IEnumerable<T> and has an Add() method (even as extension method)
List<T> list = new List<T>(dimension); // set list starting dimension
List<T> list = new List<T>(IEnumerable<T>); // create a list from an enumerable collection
list.Add(item); //item insertion into the list
list.AddRange(IEnumerable<T> collection); // insert multiple items
list.Insert(index, item); // insert an item at the specified index
list.InsertRange(index, IEnumerable<T> collection); // insert items at the specified index
list.IndexOf(item); // return index of searched item in passed list
list.LastIndexOf(item); // return index of searched item staring from the end of the array
list.FindIndex(Predicate<T>) // returns the index of the first item matching the predicate (can be lambda function)
list.FindLastIndex(Predicate<T>) // returns the index of the last item matching the predicate (can be lambda function)
list.Find(Predicate<T>) // returns the value of the first item matching the predicate (can be lambda function)
list.FindLast(Predicate<T>) // returns the value of the last item matching the predicate (can be lambda function)
list.FindAll(Predicate<T>) // returns list of all items matching the predicate (can be lambda function)
list.BinarySearch(value) // Searches a SORTED list for a value, using a binary search algorithm; returns the index of the found item
list.Remove(item); // remove item from list
list.RemoveAt(index); // remove item at specified position
list.RemoveRange(index, quantity); // remove quantity items at specified position
list.Contains(item); // check if item is in the list
list.TrueForAll(Predicate<T>); // Determines whether every element matches the conditions defined by the specified predicate
list[index]; // access to items by index
list[index] = value; // modify to items by index
list.Count; // number of items in the list
list.Sort(); // sorts item in crescent order
list.Reverse(); // Reverses the order of the elements in the list
// from IEnumerable<T>
list.OfType<Type>(); // filter list based on type, returns IEnumerable<Type>
list.OfType<Type>().ToList(); // filter list based on type, returns List<Type>
```
## [Iterators](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/iterators)
An iterator can be used to step through collections such as lists and arrays.
An iterator method or `get` accessor performs a custom iteration over a collection. An iterator method uses the `yield return` statement to return each element one at a time.
When a `yield return` statement is reached, the current location in code is remembered. Execution is restarted from that location the next time the iterator function is called.
It's possible to use a `yield break` statement or exception to end the iteration.
**Note**: Since an iterator returns an `IEnumerable<T>` it can be used to implement a `GetEnumerator()`.
```cs
// simple iterator
public static System.Collections.Generic.IEnumerable<int> IterateRange(int start, int end)
{
for(int i = start; i < end; i++){
yield return i;
}
}
```
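The iterator is consumed like any other `IEnumerable<T>`, for example with `foreach` (a quick usage sketch):
```cs
// lazily produces 0, 1, 2, 3, 4
foreach (int i in IterateRange(0, 5))
{
    Console.WriteLine(i);
}
```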
## List & Sequence Interfaces
### [`IEnumerable<T>`](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.ienumerable-1)
Exposes the enumerator, which supports a simple iteration over a collection of a specified type.
```cs
public interface IEnumerable<out T> : IEnumerable
{
IEnumerator<T> GetEnumerator(); // return an enumerator
}
// iterate through a collection
public interface IEnumerator<T>
{
// properties
T Current { get; } // Get the element in the collection at the current position of the enumerator.
// methods
void IDisposable.Dispose(); // Perform application-defined tasks associated with freeing, releasing, or resetting unmanaged resources
bool MoveNext(); // Advance the enumerator to the next element of the collection.
void Reset(); // Set the enumerator to its initial position, which is before the first element in the collection.
}
```
**Note**: must call `Dispose()` on enumerators once finished with them, because many of them rely on this. `Reset()` is legacy and can, in some situations, throw `NotSupportedException()`.
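As a rough sketch of why this matters, a `foreach` loop expands to approximately the following pattern, which is what guarantees the `Dispose()` call:
```cs
List<int> list = new List<int> { 1, 2, 3 };

// roughly what the compiler generates for: foreach (int item in list) { ... }
using (IEnumerator<int> enumerator = list.GetEnumerator())
{
    while (enumerator.MoveNext())
    {
        int item = enumerator.Current;
        Console.WriteLine(item);
    }
} // Dispose() is called here, even if the loop body throws
```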
### [`ICollection<T>`](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.icollection-1)
```cs
public interface ICollection<T> : IEnumerable<T>
{
// properties
int Count { get; } // Get the number of elements contained in the ICollection<T>
bool IsReadOnly { get; } // Get a value indicating whether the ICollection<T> is read-only
// methods
void Add (T item); // Add an item to the ICollection<T>
void Clear (); // Removes all items from the ICollection<T>
bool Contains (T item); // Determines whether the ICollection<T> contains a specific value
IEnumerator GetEnumerator (); // Returns an enumerator that iterates through a collection
bool Remove (T item); // Removes the first occurrence of a specific object from the ICollection<T>
}
```
### [`IList<T>`](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.ilist-1)
```cs
public interface IList<T> : ICollection<T>, IEnumerable<T>
{
// properties
int Count { get; } // Get the number of elements contained in the ICollection<T>
bool IsReadOnly { get; } // Get a value indicating whether the ICollection<T> is read-only
T this[int index] { get; set; } // Get or set the element at the specified index
// methods
void Add (T item); // Add an item to the ICollection<T>
void Clear (); // Remove all items from the ICollection<T>
bool Contains (T item); // Determine whether the ICollection<T> contains a specific value
void CopyTo (T[] array, int arrayIndex); // Copy the elements of the ICollection<T> to an Array, starting at a particular Array index
IEnumerator GetEnumerator (); // Return an enumerator that iterates through a collection
int IndexOf (T item); // Determine the index of a specific item in the IList<T>
void Insert (int index, T item); // Insert an item to the IList<T> at the specified index
bool Remove (T item); // Remove the first occurrence of a specific object from the ICollection<T>
void RemoveAt (int index); // Remove the IList<T> item at the specified index
}
```
## [Dictionaries](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.dictionary-2)
[ValueCollection](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.dictionary-2.valuecollection)
[KeyCollection](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.dictionary-2.keycollection)
**Notes**:
- Enumerating a dictionary will return `KeyValuePair<TKey, TValue>`.
- The `Dictionary<TKey, TValue>` collection class relies on hashes to offer fast lookup (`TKey` should have a good `GetHashCode()`).
```cs
Dictionary<TKey, TValue> dict = new Dictionary<TKey, TValue>(); // init empty dict
Dictionary<TKey, TValue> dict = new Dictionary<TKey, TValue>(IEqualityComparer<TKey>); // specify key comparer (TKey must implement Equals() and GetHashCode())
// initializer (implicitly uses Add method)
Dictionary<TKey, TValue> dict = new Dictionary<TKey, TValue>
{
    { key, value },
    { key, value },
    ...
};

// object initializer
Dictionary<TKey, TValue> dict = new Dictionary<TKey, TValue>
{
    [key] = value,
    [key] = value,
    ...
};
// indexer access
dict[key]; // read value associated with key (throws KeyNotFoundException if key does not exist)
dict[key] = value; // add or modify the value associated with key (the key is created if it does not exist)
dict.Count; // number of key-value pair stored in the dict
dict.Keys; // Dictionary<TKey,TValue>.KeyCollection containing the keys of the dict
dict.Values; // Dictionary<TKey,TValue>.ValueCollection containing the values of the dict
dict.Add(key, value); // ArgumentException if the key already exists
dict.Clear(); // empty the dictionary
dict.ContainsKey(key); // check if a key is in the dictionary
dict.ContainsValue(value); // check if a value is in the dictionary
dict.Remove(key); // remove a key-value pair
dict.Remove(key, out var); // remove key-value pair and copy TValue to var parameter
dict.TryAdd(key, value); // adds a key-value pair; returns true if pair is added, false otherwise
dict.TryGetValue(key, out var); // put the value associated with key in the var parameter; true if the dict contains an element with the specified key, false otherwise.
```
## [Sets](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.hashset-1)
Collection of non duplicate items.
```cs
HashSet<T> set = new HashSet<T>();
set.Add(T); // adds an item to the set; true if the element is added, false if the element is already present.
set.Clear(); //Remove all elements from a HashSet<T> object.
set.Contains(T); // Determine whether a HashSet<T> object contains the specified element.
set.CopyTo(T[]); // Copy the elements of a HashSet<T> object to an array.
set.CopyTo(T[], arrayIndex); // Copy the elements of a HashSet<T> object to an array, starting at the specified array index.
set.CopyTo(T[], arrayIndex, count); // Copies the specified number of elements of a HashSet<T> object to an array, starting at the specified array index.
HashSet<T>.CreateSetComparer(); // (static) Return an IEqualityComparer object that can be used for equality testing of a HashSet<T> object.
set.ExceptWith(IEnumerable<T>); // Remove all elements in the specified collection from the current HashSet<T> object.
set.IntersectWith(IEnumerable<T>); // Modify the current HashSet<T> object to contain only elements that are present in that object and in the specified collection.
set.IsProperSubsetOf(IEnumerable<T>); // Determine whether a HashSet<T> object is a proper subset of the specified collection.
set.IsProperSupersetOf(IEnumerable<T>); // Determine whether a HashSet<T> object is a proper superset of the specified collection.
set.IsSubsetOf(IEnumerable<T>); // Determine whether a HashSet<T> object is a subset of the specified collection.
set.IsSupersetOf(IEnumerable<T>); // Determine whether a HashSet<T> object is a superset of the specified collection.
set.Overlaps(IEnumerable<T>); // Determine whether the current HashSet<T> object and a specified collection share common elements.
set.Remove(T); // Remove the specified element from a HashSet<T> object.
set.RemoveWhere(Predicate<T>); // Remove all elements that match the conditions defined by the specified predicate from a HashSet<T> collection.
set.SetEquals(IEnumerable<T>); // Determine whether a HashSet<T> object and the specified collection contain the same elements.
set.SymmetricExceptWith(IEnumerable<T>); // Modify the current HashSet<T> object to contain only elements that are present either in that object or in the specified collection, but not both.
set.UnionWith(IEnumerable<T>); // Modify the current HashSet<T> object to contain all elements that are present in itself, the specified collection, or both.
set.TryGetValue(T, out T); // Search the set for a given value and returns the equal value it finds, if any.
```

84
docs/dotnet/C#/linq.md Normal file
View file

@ -0,0 +1,84 @@
# LINQ
## LINQ to Objects
<!-- Page: 423/761 of "Ian Griffiths - Programming C# 8.0 - Build Cloud, Web, and Desktop Applications.pdf" -->
The term **LINQ to Objects** refers to the use of LINQ queries with any `IEnumerable` or `IEnumerable<T>` collection directly, without the use of an intermediate LINQ provider or API such as [LINQ to SQL](https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/linq/) or [LINQ to XML](https://docs.microsoft.com/en-us/dotnet/standard/linq/linq-xml-overview).
LINQ to Objects will be used when any `IEnumerable<T>` is specified as the source, unless a more specialized provider is available.
### Query Expressions
All query expressions are required to begin with a `from` clause, which specifies the source of the query.
The final part of the query is a `select` (or `group`) clause. This determines the final output of the query and its system type.
```cs
// query expression
var result = from item in enumerable select item;
// where clause
var result = from item in enumerable where condition select item;
// ordering
var result = from item in enumerable orderby item.property select item; // ordered IEnumerable
// let clause, assign expression to variable to avoid re-evaluation on each cycle
var result = from item in enumerable let tmp = <sub-expr> ... // BEWARE: compiled code has a lot of overhead to satisfy let clause
// grouping (difficult to re-implement to obtain better performance)
var result = from item in enumerable group item by item.property; // returns IEnumerable<IGrouping<TKey,TElement>>
```
### How Query Expressions Expand
The compiler converts all query expressions into one or more method calls. Once it has done that, the LINQ provider is selected through exactly the same mechanisms that C# uses for any other method call.
The compiler does not have any built-in concept of what constitutes a LINQ provider.
```cs
// expanded query expression
var result = enumerable.Where(item => condition).Select(item => item);
```
The `Where` and `Select` methods are examples of LINQ operators. A LINQ operator is nothing more than a method that conforms to one of the standard patterns.
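A concrete sketch of the two equivalent forms (the sample data is illustrative):
```cs
int[] numbers = { 1, 2, 3, 4, 5, 6 };

// query expression
var squaresOfEvens = from n in numbers
                     where n % 2 == 0
                     select n * n;

// expanded method syntax, same result: 4, 16, 36
var squaresOfEvens2 = numbers.Where(n => n % 2 == 0).Select(n => n * n);
```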
### Methods on `Enumerable` or `IEnumerable<T>`
```cs
Enumerable.Range(int start, int count); // IEnumerable<int> of count sequential values starting at start
IEnumerable<TSource>.Select(Func<TSource, TResult> selector); // map
IEnumerable<TSource>.Where(Func<T, bool> predicate); // filter
IEnumerable<T>.FirstOrDefault(); // first element of IEnumerable or default(T) if empty
IEnumerable<T>.FirstOrDefault(T default); // specify returned default
IEnumerable<T>.FirstOrDefault(Func<T, bool> predicate); // first element to match predicate or default(T)
// same for LastOrDefault & SingleOrDefault
IEnumerable<T>.Chunk(size); // chunk an enumerable into slices of a fixed size
// T must implement IComparable<T>
IEnumerable<T>.Max();
IEnumerable<T>.Min();
// allow finding maximal or minimal elements using a key selector
IEnumerable<TSource>.MaxBy(Func<TSource, TResult> selector);
IEnumerable<TSource>.MinBy(Func<TSource, TResult> selector);
IEnumerable<T>.All(Func<T, bool> predicate); // check if condition is true for all elements
IEnumerable<T>.Any(Func<T, bool> predicate); // check if condition is true for at least one element
IEnumerable<T>.Concat(IEnumerable<T> enumerable);
// Applies a specified function to the corresponding elements of two sequences, producing a sequence of the results.
IEnumerable<TFirst>.Zip(IEnumerable<TSecond> enumerable, Func<TFirst, TSecond, TResult> func);
IEnumerable<TFirst>.Zip(IEnumerable<TSecond> enumerable); // Produces a sequence of tuples with elements from the two specified sequences.
```
**NOTE**: `Enumerable` provides a set of `static` methods for querying objects that implement `IEnumerable<T>`. Most methods are extensions of `IEnumerable<T>`
```cs
Enumerable.Method(IEnumerable<T> source, args);
// if extension method same as
IEnumerable<T>.Method(args);
```

View file

@ -0,0 +1,168 @@
# Reactive Extensions (Rx)
[ReactiveX](https://reactivex.io "ReactiveX website")
The **Reactive Extensions** for .NET, or **Rx**, are designed for working with asynchronous and event-based sources of information.
Rx provides services that help orchestrate and synchronize the way code reacts to data from these kinds of sources.
Rx's fundamental abstraction, `IObservable<T>`, represents a sequence of items, and its operators are defined as extension methods for this interface.
This might sound a lot like LINQ to Objects, and there are similarities: not only does `IObservable<T>` have a lot in common with `IEnumerable<T>`, but Rx also supports almost all of the standard LINQ operators.
The difference is that in Rx, sequences are less passive. Unlike `IEnumerable<T>`, Rx sources do not wait to be asked for their items, nor can the consumer
of an Rx source demand to be given the next item. Instead, Rx uses a *push* model in which *the source notifies* its recipients when items are available.
Because Rx implements standard LINQ operators, it's possible to write queries against a live source. Rx goes beyond standard LINQ, adding its own operators that take into account the temporal nature of a live event source.
## Fundamental Interfaces
The two most important types in Rx are the `IObservable<T>` and `IObserver<T>` interfaces.
They are important enough to be in the System namespace. The other parts of Rx are in the `System.Reactive` NuGet package.
```cs
public interface IObservable<out T>
{
IDisposable Subscribe(IObserver<T> observer);
}
public interface IObserver<in T>
{
void OnCompleted();
void OnError(Exception error);
void OnNext(T value);
}
```
The fundamental abstraction in Rx, `IObservable<T>`, is implemented by *event sources*. Instead of using the `event` keyword, it models events as a *sequence of items*.
An `IObservable<T>` provides items to subscribers as and when it's ready to do so.
It's possible to subscribe to a source by passing an implementation of `IObserver<T>` to the `Subscribe` method.
The source will invoke `OnNext` when it wants to report events, and it can call `OnCompleted` to indicate that there will be no further activity.
If the source wants to report an error, it can call `OnError`.
Both `OnCompleted` and `OnError` indicate the end of the stream, an observable should not call any further methods on the observer after that.
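A minimal usage sketch assuming the `System.Reactive` NuGet package (values and timings are illustrative): the source pushes items, and `Subscribe` wires up the three observer callbacks.
```cs
using System;
using System.Reactive.Linq;

// emit 0, 1, 2 one second apart, then complete
IObservable<long> ticks = Observable.Interval(TimeSpan.FromSeconds(1)).Take(3);

IDisposable subscription = ticks.Subscribe(
    value => Console.WriteLine($"OnNext: {value}"),   // called for each pushed item
    error => Console.WriteLine($"OnError: {error.Message}"),
    () => Console.WriteLine("OnCompleted"));

Console.ReadLine();     // keep the process alive while items are pushed
subscription.Dispose(); // unsubscribing stops the notifications
```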
## Operators
### Chaining Operators
Most operators operate on an Observable and return an Observable. This allows to apply these operators one after the other, in a chain.
Each operator in the chain modifies the Observable that results from the operation of the previous operator.
The operators in a chain do not operate independently on the original Observable that originates the chain;
they operate *in turn*, each one on the Observable generated by the operator immediately before it in the chain.
### Creating Observables
Operators that originate new Observables.
- `Create`: create an Observable from scratch by calling observer methods programmatically
- `Defer`: do not create the Observable until the observer subscribes, and create a fresh Observable for each observer
- `Empty/Never/Throw`: create Observables that have very precise and limited behavior
- `From*`: convert some other object or data structure into an Observable
- `Interval`: create an Observable that emits a sequence of integers spaced by a particular time interval
- `Return (aka Just)`: convert an object or a set of objects into an Observable that emits that or those objects
- `Range`: create an Observable that emits a range of sequential integers
- `Repeat`: create an Observable that emits a particular item or sequence of items repeatedly
- `Start`: create an Observable that emits the return value of a function
- `Timer`: create an Observable that emits a single item after a given delay
### Transforming Observables
Operators that transform items that are emitted by an Observable.
- `Buffer`: periodically gather items from an Observable into bundles and emit these bundles rather than emitting the items one at a time
- `SelectMany (aka FlatMap)`: transform the items emitted by an Observable into Observables, then flatten the emissions from those into a single Observable
- `GroupBy`: divide an Observable into a set of Observables that each emit a different group of items from the original Observable, organized by key
- `Select (aka Map)`: transform the items emitted by an Observable by applying a function to each item
- `Scan`: apply a function to each item emitted by an Observable, sequentially, and emit each successive value
- `Window`: periodically subdivide items from an Observable into Observable windows and emit these windows rather than emitting the items one at a time
### Filtering Observables
Operators that selectively emit items from a source Observable.
- `Throttle (aka Debounce)`: only emit an item from an Observable if a particular timespan has passed without it emitting another item
- `Distinct`: suppress duplicate items emitted by an Observable
- `ElementAt`: emit only item n emitted by an Observable
- `Where (aka Filter)`: emit only those items from an Observable that pass a predicate test
- `First`: emit only the first item, or the first item that meets a condition, from an Observable
- `IgnoreElements`: do not emit any items from an Observable but mirror its termination notification
- `Last`: emit only the last item emitted by an Observable
- `Sample`: emit the most recent item emitted by an Observable within periodic time intervals
- `Skip`: suppress the first n items emitted by an Observable
- `SkipLast`: suppress the last n items emitted by an Observable
- `Take`: emit only the first n items emitted by an Observable
- `TakeLast`: emit only the last n items emitted by an Observable
### Combining Observables
Operators that work with multiple source Observables to create a single Observable
- `And/Then/When`: combine sets of items emitted by two or more Observables by means of Pattern and Plan intermediaries
- `CombineLatest`: when an item is emitted by either of two Observables, combine the latest item emitted by each Observable via a specified function and emit items based on the results of this function
- `Join`: combine items emitted by two Observables whenever an item from one Observable is emitted during a time window defined according to an item emitted by the other Observable
- `Merge`: combine multiple Observables into one by merging their emissions
- `StartWith`: emit a specified sequence of items before beginning to emit the items from the source Observable
- `Switch`: convert an Observable that emits Observables into a single Observable that emits the items emitted by the most-recently-emitted of those Observables
- `Zip`: combine the emissions of multiple Observables together via a specified function and emit single items for each combination based on the results of this function
### Error Handling Operators
Operators that help to recover from error notifications from an Observable
- `Catch`: recover from an onError notification by continuing the sequence without error
- `Retry`: if a source Observable sends an onError notification, resubscribe to it in the hopes that it will complete without error
### Observable Utility Operators
A toolbox of useful Operators for working with Observables
- `Delay`: shift the emissions from an Observable forward in time by a particular amount
- `Do`: register an action to take upon a variety of Observable lifecycle events
- `Materialize/Dematerialize`: represent both the items emitted and the notifications sent as emitted items, or reverse this process
- `ObserveOn`: specify the scheduler on which an observer will observe this Observable
- `Synchronize (aka Serialize)`: force an Observable to make serialized calls and to be well-behaved
- `Subscribe`: operate upon the emissions and notifications from an Observable
- `SubscribeOn`: specify the scheduler an Observable should use when it is subscribed to
- `TimeInterval`: convert an Observable that emits items into one that emits indications of the amount of time elapsed between those emissions
- `Timeout`: mirror the source Observable, but issue an error notification if a particular period of time elapses without any emitted items
- `Timestamp`: attach a timestamp to each item emitted by an Observable
- `Using`: create a disposable resource that has the same lifespan as the Observable
### Conditional and Boolean Operators
Operators that evaluate one or more Observables or items emitted by Observables
- `All`: determine whether all items emitted by an Observable meet some criteria
- `Amb`: given two or more source Observables, emit all of the items from only the first of these Observables to emit an item
- `Contains`: determine whether an Observable emits a particular item or not
- `DefaultIfEmpty`: emit items from the source Observable, or a default item if the source Observable emits nothing
- `SequenceEqual`: determine whether two Observables emit the same sequence of items
- `SkipUntil`: discard items emitted by an Observable until a second Observable emits an item
- `SkipWhile`: discard items emitted by an Observable until a specified condition becomes false
- `TakeUntil`: discard items emitted by an Observable after a second Observable emits an item or terminates
- `TakeWhile`: discard items emitted by an Observable after a specified condition becomes false
### Mathematical and Aggregate Operators
Operators that operate on the entire sequence of items emitted by an Observable
- `Average`: calculates the average of numbers emitted by an Observable and emits this average
- `Concat`: emit the emissions from two or more Observables without interleaving them
- `Count`: count the number of items emitted by the source Observable and emit only this value
- `Max`: determine, and emit, the maximum-valued item emitted by an Observable
- `Min`: determine, and emit, the minimum-valued item emitted by an Observable
- `Aggregate (aka Reduce)`: apply a function to each item emitted by an Observable, sequentially, and emit the final value
- `Sum`: calculate the sum of numbers emitted by an Observable and emit this sum
### Connectable Observable Operators
Specialty Observables that have more precisely-controlled subscription dynamics
- `Connect`: instruct a connectable Observable to begin emitting items to its subscribers
- `Publish`: convert an ordinary Observable into a connectable Observable
- `RefCount`: make a Connectable Observable behave like an ordinary Observable
- `Replay`: ensure that all observers see the same sequence of emitted items, even if they subscribe after the Observable has begun emitting items
### Operators to Convert Observables
- `To*`: convert an Observable into another object or data structure

View file

@ -0,0 +1,51 @@
# Unit Testing
[UnitTest Overloaded Methods](https://stackoverflow.com/a/5666591/8319610)
[Naming standards for unit tests](https://osherove.com/blog/2005/4/3/naming-standards-for-unit-tests.html)
## xUnit
```cs
using System;
using Xunit;
namespace Project.Tests
{
public class ClassTest
{
[Fact]
public void TestMethod()
{
Assert.Equal(expected, actual); // works on collections
Assert.True(bool);
Assert.False(bool);
Assert.NotNull(nullable);
// Verifies that all items in the collection pass when executed against action
Assert.All<T>(IEnumerable<T> collection, Action<T> action);
}
}
}
```
### Test Setup & Teardown
xUnit.net creates a new instance of the test class for every test that is run, so any code which is placed into the constructor of the test class will be run for every single test.
This makes the constructor a convenient place to put reusable context setup code.
For context cleanup, add the `IDisposable` interface to the test class, and put the cleanup code in the `Dispose()` method.
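A small sketch of this pattern (the temporary-directory fixture is a made-up example):
```cs
using System;
using System.IO;
using Xunit;

public class FileStoreTests : IDisposable
{
    private readonly string _tempDir;

    public FileStoreTests() // setup: runs before every single test
    {
        _tempDir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
        Directory.CreateDirectory(_tempDir);
    }

    public void Dispose() // teardown: runs after every single test
    {
        Directory.Delete(_tempDir, recursive: true);
    }

    [Fact]
    public void TempDirectory_Exists()
    {
        Assert.True(Directory.Exists(_tempDir));
    }
}
```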
## Mocking with Moq
```cs
var mockObj = new Mock<MockedType>();
mockObj.Setup(m => m.Method(It.IsAny<InputType>())).Returns(value);
mockObj.Object; // get mock
// check that the invocation is forwarded to the mock, n times
mockObj.Verify(m => m.Method(It.IsAny<InputType>()), Times.Once());
// check that the invocation is forwarded to the mock with a specific input
mockObj.Verify(m => m.Method(input), Times.Once());
```

View file

@ -0,0 +1,244 @@
# ASP.NET Configuration
## `.csproj`
```xml
<PropertyGroup>
<!-- enable documentation comments (can be used for swagger) -->
<GenerateDocumentationFile>true</GenerateDocumentationFile>
<!-- do not warn public classes w/o documentation comments -->
<NoWarn>$(NoWarn);1591</NoWarn>
</PropertyGroup>
```
## `Program.cs`
```cs
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
namespace App
{
public class Program
{
public static async Task Main(string[] args) // async Task Main is required for the awaited Blazor WASM variant below
{
CreateHostBuilder(args).Build().Run(); // start and config ASP.NET Core App
// or start Blazor WASM Single Page App
var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.RootComponents.Add<App>("#app");
builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) });
await builder.Build().RunAsync();
}
// for MVC, Razor Pages and Blazor Server
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>(); // config handled in Startup.cs
});
}
}
```
## `Startup.cs`
```cs
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.HttpsPolicy;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
namespace App
{
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the DI container.
public void ConfigureServices(IServiceCollection services)
{
// set db context for the app using the connection string
services.AddDbContext<AppDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
// Captures synchronous and asynchronous Exception instances from the pipeline and generates HTML error responses.
services.AddDatabaseDeveloperPageExceptionFilter();
// use Razor Pages, runtime compilation needs Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation pkg
services.AddRazorPages().AddRazorRuntimeCompilation();
// or
services.AddControllers(); // controllers w/o views
//or
services.AddControllersWithViews(); // MVC Controllers
//or
services.AddServerSideBlazor(); // needs Razor Pages
services.AddSignalR();
// set dependency injection lifetimes
services.AddSingleton<ISingletonService, ServiceImplementation>();
services.AddScoped<IScopedService, ServiceImplementation>();
services.AddTransient<ITransientService, ServiceImplementation>();
// add swagger
services.AddSwaggerGen(options => {
// OPTIONAL: use xml comments for swagger documentation
var xmlFilename = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
options.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFilename));
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler("/Home/Error");
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.UseSwagger();
app.UseSwaggerUI();
app.UseEndpoints(endpoints =>
{
// MVC routing
endpoints.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}"
);
// or
endpoints.MapControllers(); // map controllers w/o views
// or
endpoints.MapRazorPages();
// or
endpoints.MapBlazorHub(); // SignalR Hub for Blazor Server
endpoints.MapHub<THub>("/hub/endpoint"); // SignalR Hub (THub is the hub class)
endpoints.MapFallbackToPage("/_Host"); // fallback for razor server
});
}
}
}
```
## Application Settings
App settings are loaded (in order) from:
1. `appsettings.json`
2. `appsettings.<Environment>.json`
3. User Secrets
The environment is controlled by the env var `ASPNETCORE_ENVIRONMENT`. If a setting is present in multiple locations, the last one is used and overrides the previous ones.
### User Secrets
User secrets are specific to each machine and can be initialized with `dotnet user-secrets init`. Each application is linked with its settings by a guid.
The settings are stored in:
- `%APPDATA%\Microsoft\UserSecrets\<user_secrets_id>\secrets.json` (Windows)
- `~/.microsoft/usersecrets/<user_secrets_id>/secrets.json` (Linux/macOS)
Setting a value is done with `dotnet user-secrets set <key> <value>`, keys can be nested by separating each level with `:` or `__`.
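Regardless of the source (JSON file, user secrets, environment variables), values can be read through `IConfiguration`; a quick sketch using the minimal hosting model and the key names from the appsettings example in the next section:
```cs
var builder = WebApplication.CreateBuilder(args); // minimal hosting model

string? secret = builder.Configuration["SecretKey"];                                   // top-level key
string? delay = builder.Configuration["TransientFaultHandlingOptions:AutoRetryDelay"]; // nested key (":" separator, "__" in env vars)
```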
## Options Pattern
The *options pattern* uses classes to provide strongly-typed access to groups of related settings.
```json
{
"SecretKey": "Secret key value",
"TransientFaultHandlingOptions": {
"Enabled": true,
"AutoRetryDelay": "00:00:07"
},
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
}
}
}
```
```cs
// options model for binding
public class TransientFaultHandlingOptions
{
public bool Enabled { get; set; }
public TimeSpan AutoRetryDelay { get; set; }
}
```
```cs
// setup the options
builder.Services.Configure<TransientFaultHandlingOptions>(builder.Configuration.GetSection(nameof(TransientFaultHandlingOptions)));
builder.Services.Configure<TransientFaultHandlingOptions>(builder.Configuration.GetSection(key)); // GetSection takes the section name as a string
```
```cs
class DependsOnOptions
{
private readonly IOptions<TransientFaultHandlingOptions> _options;
public DependsOnOptions(IOptions<TransientFaultHandlingOptions> options) => _options = options;
}
```
### [Options interfaces](https://docs.microsoft.com/en-us/dotnet/core/extensions/options#options-interfaces)
`IOptions<TOptions>`:
- Does not support:
- Reading of configuration data after the app has started.
- Named options
- Is registered as a Singleton and can be injected into any service lifetime.
`IOptionsSnapshot<TOptions>`:
- Is useful in scenarios where options should be recomputed on every injection resolution, in scoped or transient lifetimes.
- Is registered as Scoped and therefore cannot be injected into a Singleton service.
- Supports named options
`IOptionsMonitor<TOptions>`:
- Is used to retrieve options and manage options notifications for `TOptions` instances.
- Is registered as a Singleton and can be injected into any service lifetime.
- Supports:
- Change notifications
- Named options
- Reloadable configuration
- Selective options invalidation (`IOptionsMonitorCache<TOptions>`)
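As a sketch of the difference, a consumer of `IOptionsMonitor<TOptions>` (class name is illustrative) can read the current value and react to configuration reloads:
```cs
using Microsoft.Extensions.Options;

public class MonitorsOptions
{
    private readonly IOptionsMonitor<TransientFaultHandlingOptions> _monitor;

    public MonitorsOptions(IOptionsMonitor<TransientFaultHandlingOptions> monitor)
    {
        _monitor = monitor;
        // invoked every time the underlying configuration is reloaded
        _monitor.OnChange(options => { /* react to the new values */ });
    }

    // CurrentValue always reflects the latest configuration
    public bool Enabled => _monitor.CurrentValue.Enabled;
}
```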
# Blazor
Blazor apps are based on *components*. A **component** in Blazor is an element of UI, such as a page, dialog, or data entry form.
Components are .NET C# classes built into .NET assemblies that:
- Define flexible UI rendering logic.
- Handle user events.
- Can be nested and reused.
- Can be shared and distributed as Razor class libraries or NuGet packages.
![Blazor Server Architecture](../../img/dotnet_blazor-server.png)
![Blazor WASM Architecture](../../img/dotnet_blazor-webassembly.png)
The component class is usually written in the form of a Razor markup page with a `.razor` file extension. Components in Blazor are formally referred to as *Razor components*.
## Project Structure & Important Files
### Blazor Server Project Structure
```txt
Project
|-Properties
| |- launchSettings.json
|
|-wwwroot --> static files
| |-css
| | |- site.css
| | |- bootstrap
| |
| |- favicon.ico
|
|-Pages
| |- _Host.cshtml --> fallback page
| |- Component.razor
| |- Index.razor
| |- ...
|
|-Shared
| |- MainLayout.razor
| |- MainLayout.razor.css
| |- ...
|
|- _Imports.razor --> @using imports
|- App.razor --> component root of the app
|
|- appsettings.json --> application settings
|- Program.cs --> App entry-point
|- Startup.cs --> services and middleware configs
```
### Blazor WASM Project Structure
```txt
Project
|-Properties
| |- launchSettings.json
|
|-wwwroot --> static files
| |-css
| | |- site.css
| | |- bootstrap
| |
| |- index.html
| |- favicon.ico
|
|-Pages
| |- Component.razor
| |- Index.razor
| |- ...
|
|-Shared
| |- MainLayout.razor
| |- MainLayout.razor.css
| |- ...
|
|- _Imports.razor --> @using imports
|- App.razor --> component root of the app
|
|- appsettings.json --> application settings
|- Program.cs --> App entry-point
```
### Blazor PWA Project Structure
```txt
Project
|-Properties
| |- launchSettings.json
|
|-wwwroot --> static files
| |-css
| | |- site.css
| | |- bootstrap
| |
| |- index.html
| |- favicon.ico
| |- manifest.json
| |- service-worker.js
| |- icon-512.png
|
|-Pages
| |- Component.razor
| |- Index.razor
| |- ...
|
|-Shared
| |- MainLayout.razor
| |- MainLayout.razor.css
| |- ...
|
|- _Imports.razor --> @using imports
|- App.razor --> component root of the app
|
|- appsettings.json --> application settings
|- Program.cs --> App entrypoint
```
### `manifest.json`, `service-worker.js` (Blazor PWA)
[PWA](https://web.dev/progressive-web-apps/)
[PWA MDN Docs](https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps)
[PWA Manifest](https://developer.mozilla.org/en-US/docs/Web/Manifest)
[Service Worker API](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API)
```json
// manifest.json
{
"name": "<App Name>",
"short_name": "<Short App Name>",
"start_url": "./",
"display": "standalone",
"background_color": "#ffffff",
"theme_color": "#03173d",
"icons": [
{
"src": "icon-512.png",
"type": "image/png",
"sizes": "512x512"
}
]
}
```
## Common Blazor Files
### `App.razor`
```cs
<Router AppAssembly="@typeof(Program).Assembly" PreferExactMatches="@true">
<Found Context="routeData">
<RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />
</Found>
<NotFound>
<LayoutView Layout="@typeof(MainLayout)">
<p>Sorry, there's nothing at this address.</p>
</LayoutView>
</NotFound>
</Router>
```
### `MainLayout.razor` (Blazor Server/WASM)
```cs
@inherits LayoutComponentBase
<div class="page">
<div class="sidebar">
<NavMenu /> // NavMenu Component
</div>
<div class="main">
<div class="top-row px-4">
</div>
<div class="content px-4">
@Body
</div>
</div>
</div>
```
### `_Host.cshtml` (Blazor Server)
```html
@page "/"
@namespace BlazorServerDemo.Pages
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@{
Layout = null;
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>BlazorServerDemo</title>
<base href="~/" />
<link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" />
<link href="css/site.css" rel="stylesheet" />
<link href="BlazorServerDemo.styles.css" rel="stylesheet" />
</head>
<body>
<component type="typeof(App)" render-mode="ServerPrerendered" />
<div id="blazor-error-ui">
<environment include="Staging,Production">
An error has occurred. This application may no longer respond until reloaded.
</environment>
<environment include="Development">
An unhandled exception has occurred. See browser dev tools for details.
</environment>
<a href="" class="reload">Reload</a>
<a class="dismiss">🗙</a>
</div>
<script src="_framework/blazor.server.js"></script>
</body>
</html>
```
## Components (`.razor`)
[Blazor Components](https://docs.microsoft.com/en-us/aspnet/core/blazor/components/)
```cs
@page "/route/{RouteParameter}" // make component accessible from a URL
@page "/route/{RouteParameter?}" // specify route parameter as optional
@page "/route/{RouteParameter:<type>}" // specify route parameter type
@namespace <Namespace> // set the component namespace
@using <Namespace> // using statement
@inherits BaseType // inheritance
@attribute [Attribute] // apply an attribute
@inject Type objectName // dependency injection
// html of the page here
<Namespace.ComponentFolder.Component /> // access component w/o @using
<Component Property="value"/> // insert component into page, passing attributes
<Component @onclick="@CallbackMethod">
@ChildContent // segment of UI content
</Component>
@code {
// component model (Properties, Methods, ...)
[Parameter] // capture attribute
public Type Property { get; set; } = defaultValue;
[Parameter] // capture route parameters
public type RouteParameter { get; set;}
[Parameter] // segment of UI content
public RenderFragment ChildContent { get; set;}
private void CallbackMethod() { }
}
```
## State Management
### Blazor WASM
```cs
// setup state singleton
builder.Services.AddSingleton<StateContainer>();
```
```cs
// StateContainer singleton
using System;
public class StateContainer
{
private int _counter;
public int Property
{
get => _counter;
set
{
_counter = value;
NotifyStateChanged(); // will trigger StateHasChanged(), causing a render
}
}
public event Action OnChange;
private void NotifyStateChanged() => OnChange?.Invoke();
}
```
```cs
// component that changes the state
@inject StateContainer State
// Delegate event handlers automatically trigger a UI render
<button @onclick="@HandleClick">
Change State
</button>
@code {
private void HandleClick()
{
State.Property += 1; // update state
}
}
```
```cs
// component that should be updated on state change
@implements IDisposable
@inject StateContainer State
<p>Property: <b>@State.Property</b></p>
@code {
// StateHasChanged notifies the component that its state has changed.
// When applicable, calling StateHasChanged causes the component to be rerendered.
protected override void OnInitialized()
{
State.OnChange += StateHasChanged;
}
public void Dispose()
{
State.OnChange -= StateHasChanged;
}
}
```
## Data Binding & Events
```cs
<p>
<button @on{DOM EVENT}="{DELEGATE}" />
<button @on{DOM EVENT}="{DELEGATE}" @on{DOM EVENT}:preventDefault /> // prevent default action
<button @on{DOM EVENT}="{DELEGATE}" @on{DOM EVENT}:preventDefault="{CONDITION}" /> // prevent default action if CONDITION is true
<button @on{DOM EVENT}="{DELEGATE}" @on{DOM EVENT}:stopPropagation />
<button @on{DOM EVENT}="{DELEGATE}" @on{DOM EVENT}:stopPropagation="{CONDITION}" /> // stop event propagation if CONDITION is true
<button @on{DOM EVENT}="@(e => Property = value)" /> // change internal state w/ lambda
<button @on{DOM EVENT}="@(e => DelegateAsync(e, value))" /> // invoke delegate w/ lambda
<input @ref="elementReference" />
<input @bind="{PROPERTY}" /> // updates variable on ONCHANGE event (focus loss)
<input @bind="{PROPERTY}" @bind:event="{DOM EVENT}" /> // updates value on DOM EVENT
<input @bind="{PROPERTY}" @bind:format="{FORMAT STRING}" /> // use FORMAT STRING to display value
<ChildComponent @bind-{PROPERTY}="{PROPERTY}" @bind-{PROPERTY}:event="{EVENT}" /> // bind to child component {PROPERTY}
<ChildComponent @bind-{PROPERTY}="{PROPERTY}" @bind-{PROPERTY}:event="{PROPERTY}Changed" /> // bind to child component {PROPERTY}, listen for custom event
</p>
@code {
private ElementReference elementReference;
public string Property { get; set; }
public EventCallback<Type> PropertyChanged { get; set; } // custom event {PROPERTY}Changed
// invoke custom event
public async Task DelegateAsync(EventArgs e, Type argument)
{
/* ... */
await PropertyChanged.InvokeAsync(argument); // notify parent that the bound property has changed
await elementReference.FocusAsync(); // focus an element in code
}
}
```
**NOTE**: When a user provides an unparsable value to a data-bound element, the unparsable value is automatically reverted to its previous value when the bind event is triggered.
## Javascript/.NET Interop
[Call Javascript from .NET](https://docs.microsoft.com/en-us/aspnet/core/blazor/call-javascript-from-dotnet)
[Call .NET from Javascript](https://docs.microsoft.com/en-us/aspnet/core/blazor/call-dotnet-from-javascript)
### Render Blazor components from JavaScript [C# 10]
To render a Blazor component from JavaScript, first register it as a root component for JavaScript rendering and assign it an identifier:
```cs
// Blazor Server
builder.Services.AddServerSideBlazor(options =>
{
options.RootComponents.RegisterForJavaScript<Counter>(identifier: "counter");
});
// Blazor WebAssembly
builder.RootComponents.RegisterForJavaScript<Counter>(identifier: "counter");
```
Load Blazor into the JavaScript app (`blazor.server.js` or `blazor.webassembly.js`) and then render the component from JavaScript into a container element using the registered identifier, passing component parameters as needed:
```js
let containerElement = document.getElementById('my-counter');
await Blazor.rootComponents.add(containerElement, 'counter', { incrementAmount: 10 });
```
### Blazor custom elements [C# 10]
Experimental support is also now available for building custom elements with Blazor using the Microsoft.AspNetCore.Components.CustomElements NuGet package.
Custom elements use standard HTML interfaces to implement custom HTML elements.
To create a custom element using Blazor, register a Blazor root component as custom elements like this:
```cs
options.RootComponents.RegisterAsCustomElement<Counter>("my-counter");
```
# [Filters](https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/filters)
**Filters** in ASP.NET Core allow code to be run _before_ or _after_ specific stages in the request processing pipeline.
Built-in filters handle tasks such as:
- Authorization (preventing access to resources a user isn't authorized for).
- Response caching (short-circuiting the request pipeline to return a cached response).
Custom filters can be created to handle cross-cutting concerns. Examples of cross-cutting concerns include error handling, caching, configuration, authorization, and logging. Filters avoid duplicating code.
## **How filters work**
Filters run within the _ASP.NET Core action invocation pipeline_, sometimes referred to as the _filter pipeline_. The filter pipeline runs after ASP.NET Core selects the action to execute.
![filter-pipeline-1](../../img/dotnet_filter-pipeline-1.png)
![filter-pipeline-2](../../img/dotnet_filter-pipeline-2.png)
## **Filter types**
Each filter type is executed at a different stage in the filter pipeline:
- **Authorization filters** run first and are used to determine whether the user is authorized for the request. Authorization filters short-circuit the pipeline if the request is not authorized.
- **Resource filters**:
- Run after authorization.
- `OnResourceExecuting` runs code before the rest of the filter pipeline. For example, `OnResourceExecuting` runs code before model binding.
- `OnResourceExecuted` runs code after the rest of the pipeline has completed.
- **Action filters**:
- Run code immediately before and after an action method is called.
- Can change the arguments passed into an action.
- Can change the result returned from the action.
- Are **not** supported in Razor Pages.
- **Exception filters** apply global policies to unhandled exceptions that occur before the response body has been written to.
- **Result filters** run code immediately before and after the execution of action results. They run only when the action method has executed successfully. They are useful for logic that must surround view or formatter execution.
## **Implementation**
Filters support both synchronous and asynchronous implementations through different interface definitions.
For example, `OnActionExecuting` is called before the action method is called. `OnActionExecuted` is called after the action method returns.
Asynchronous filters define an `On-Stage-ExecutionAsync` method, for example `OnActionExecutionAsync`.
Interfaces for multiple filter stages can be implemented in a single class.
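For example, a minimal sketch of an asynchronous action filter (class name and logic are illustrative):
```cs
using Microsoft.AspNetCore.Mvc.Filters;

public class SampleAsyncActionFilter : IAsyncActionFilter
{
    public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        // code that runs before the action executes

        var executedContext = await next(); // invoke the action (and any later filters)

        // code that runs after the action executes
    }
}
```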
## **Built-in filter attributes**
ASP.NET Core includes built-in _attribute-based_ filters that can be subclassed and customized.
Several of the filter interfaces have corresponding attributes that can be used as base classes for custom implementations.
Filter attributes:
- [ActionFilterAttribute](https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.filters.actionfilterattribute)
- [ExceptionFilterAttribute](https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.filters.exceptionfilterattribute)
- [ResultFilterAttribute](https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.filters.resultfilterattribute)
- [FormatFilterAttribute](https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.formatfilterattribute)
- [ServiceFilterAttribute](https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.servicefilterattribute)
- [TypeFilterAttribute](https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.typefilterattribute)
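For example, a minimal sketch of subclassing `ActionFilterAttribute` (class name and logic are illustrative):
```cs
using Microsoft.AspNetCore.Mvc.Filters;

public class LogActionFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        // runs before the action method
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        // runs after the action method
    }
}
```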
## **Filter scopes**
A filter can be added to the pipeline at one of three *scopes*:
- Using an attribute on a controller action. Filter attributes cannot be applied to Razor Pages handler methods.
```cs
// services.AddScoped<CustomActionFilterAttribute>();
[ServiceFilter(typeof(CustomActionFilterAttribute))]
public IActionResult Index()
{
return Content("Header values by configuration.");
}
```
- Using an attribute on a controller or Razor Page.
```cs
// services.AddControllersWithViews(options => { options.Filters.Add(new CustomResponseFilterAttribute(args)); });
[CustomResponseFilterAttribute(args)]
public class SampleController : Controller
// or
[CustomResponseFilterAttribute(args)]
[ServiceFilter(typeof(CustomActionFilterAttribute))]
public class IndexModel : PageModel
```
- Globally for all controllers, actions, and Razor Pages.
```cs
public void ConfigureServices(IServiceCollection services)
{
services.AddControllersWithViews(options =>
{
options.Filters.Add(typeof(CustomActionFilter));
});
}
```
## Filter Order of Execution
When there are multiple filters for a particular stage of the pipeline, scope determines the default order of filter execution. Global filters surround class filters, which in turn surround method filters.
As a result of filter nesting, the *after* code of filters runs in the reverse order of the *before* code. The filter sequence:
- The *before* code of global filters.
- The *before* code of controller and Razor Page filters.
- The *before* code of action method filters.
- The *after* code of action method filters.
- The *after* code of controller and Razor Page filters.
- The *after* code of global filters.
### Cancellation and Short-Circuiting
The filter pipeline can be short-circuited by setting the `Result` property on the `ResourceExecutingContext` parameter provided to the filter method.
```cs
public class ShortCircuitingResourceFilterAttribute : Attribute, IResourceFilter
{
public void OnResourceExecuting(ResourceExecutingContext context)
{
context.Result = new ContentResult()
{
Content = "Resource unavailable - header not set."
};
}
public void OnResourceExecuted(ResourceExecutedContext context)
{
}
}
```
# [Middleware](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware)
Middleware is software that's assembled into an app pipeline to handle requests and responses. Each component:
- Chooses whether to pass the request to the next component in the pipeline.
- Can perform work before and after the next component in the pipeline.
Request delegates are used to build the request pipeline. The request delegates handle each HTTP request.
Request delegates are configured using [Run][Run_docs], [Map][Map_docs], and [Use][Use_docs] extension methods.
An individual request delegate can be specified in-line as an anonymous method (called in-line middleware), or it can be defined in a reusable class.
These reusable classes and in-line anonymous methods are *middleware*, also called *middleware components*.
Each middleware component in the request pipeline is responsible for invoking the next component in the pipeline or short-circuiting the pipeline.
When a middleware short-circuits, it's called a *terminal middleware* because it prevents further middleware from processing the request.
[Use_docs]: https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.builder.useextensions.use
[Run_docs]: https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.builder.runextensions.run
[Map_docs]: https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.builder.mapextensions.map
## Middleware Pipeline
The ASP.NET Core request pipeline consists of a sequence of request delegates, called one after the other.
![request-delegate-pipeline](../../img/dotnet_request-delegate-pipeline.png)
Each delegate can perform operations before and after the next delegate. Exception-handling delegates should be called early in the pipeline, so they can catch exceptions that occur in later stages of the pipeline. It's possible to chain multiple request delegates together with `Use`.
The *next* parameter represents the next delegate in the pipeline. It's possible to short-circuit the pipeline by *not calling* the next parameter.
When a delegate doesn't pass a request to the next delegate, it's called *short-circuiting the request pipeline*.
Short-circuiting is often desirable because it avoids unnecessary work.
It's possible to perform actions both *before* and *after* the next delegate:
```cs
public class Startup
{
public void Configure(IApplicationBuilder app)
{
// "inline" middleware, best if in own class
app.Use(async (context, next) =>
{
// Do work that doesn't write to the Response.
await next.Invoke();
// Do logging or other work that doesn't write to the Response.
});
}
}
```
`Run` delegates don't receive a next parameter. The first `Run` delegate is always terminal and terminates the pipeline.
```cs
public class Startup
{
public void Configure(IApplicationBuilder app)
{
// "inline" middleware, best if in own class
app.Use(async (context, next) =>
{
// Do work that doesn't write to the Response.
await next.Invoke();
// Do logging or other work that doesn't write to the Response.
});
app.Run(async context =>
{
// no invocation of next
});
}
}
```
## Middleware Order
![middleware-pipeline](../../img/dotnet_middleware-pipeline.png)
![mvc-endpoint](../../img/dotnet_mvc-endpoint.png)
The Endpoint middleware executes the filter pipeline for the corresponding app type.
The order that middleware components are added in the `Startup.Configure` method defines the order in which the middleware components are invoked on requests and the reverse order for the response. The order is **critical** for security, performance, and functionality.
```cs
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseDatabaseErrorPage();
}
else
{
app.UseExceptionHandler("/Error");
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
// app.UseCookiePolicy();
app.UseRouting();
// app.UseRequestLocalization();
// app.UseCors();
app.UseAuthentication();
app.UseAuthorization();
// app.UseSession();
// app.UseResponseCompression();
// app.UseResponseCaching();
app.UseEndpoints(endpoints =>
{
endpoints.MapRazorPages();
endpoints.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}");
});
}
```
[Built-in Middleware](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware/#built-in-middleware)
## Branching the Middleware Pipeline
`Map` extensions are used as a convention for branching the pipeline. `Map` branches the request pipeline based on matches of the given request path.
If the request path starts with the given path, the branch is executed.
When `Map` is used, the matched path segments are removed from `HttpRequest.Path` and appended to `HttpRequest.PathBase` for each request.
`MapWhen` branches the request pipeline based on the result of the given predicate.
Any *predicate* of type `Func<HttpContext, bool>` can be used to map requests to a new branch of the pipeline.
`UseWhen` also branches the request pipeline based on the result of the given predicate.
Unlike with `MapWhen`, this branch is rejoined to the main pipeline if it doesn't short-circuit or contain a terminal middleware.
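A minimal sketch of the three branching styles (paths and predicates are examples):
```cs
// Map: branch on a path prefix; the matched segment is moved from HttpRequest.Path to HttpRequest.PathBase
app.Map("/branch", branch =>
{
    branch.Run(async context => await context.Response.WriteAsync("Handled by /branch."));
});

// MapWhen: branch on a predicate over the HttpContext
app.MapWhen(context => context.Request.Query.ContainsKey("flag"), branch =>
{
    branch.Run(async context => await context.Response.WriteAsync("Handled by the MapWhen branch."));
});

// UseWhen: branch on a predicate, rejoining the main pipeline afterwards
app.UseWhen(context => context.Request.Path.StartsWithSegments("/api"), branch =>
{
    branch.Use(async (context, next) =>
    {
        // work specific to /api requests
        await next();
    });
});
```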
## Custom Middleware Classes
Middleware is generally encapsulated in a class and exposed with an extension method.
```cs
using Microsoft.AspNetCore.Http;
using System.Globalization;
using System.Threading.Tasks;
namespace <App>
{
public class CustomMiddleware
{
private readonly RequestDelegate _next;
public CustomMiddleware(RequestDelegate next)
{
_next = next;
}
public async Task InvokeAsync(HttpContext context)
{
// Do work that doesn't write to the Response.
await _next(context); // Call the next delegate/middleware in the pipeline
// Do logging or other work that doesn't write to the Response.
}
}
}
```
The middleware class **must** include:
- A public constructor with a parameter of type [RequestDelegate][RequestDelegate_docs].
- A public method named `Invoke` or `InvokeAsync`. This method must:
- Return a `Task`.
- Accept a first parameter of type [HttpContext][HttpConrext_Docs].
[RequestDelegate_docs]: https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.requestdelegate
[HttpConrext_Docs]: https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.httpcontext
## Middleware Extension Methods
```cs
using Microsoft.AspNetCore.Builder;
namespace <App>
{
public static class MiddlewareExtensions
{
public static IApplicationBuilder UseCustom(this IApplicationBuilder builder)
{
return builder.UseMiddleware<CustomMiddleware>();
}
}
}
```
```cs
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
// other middlewares
app.UseCustom(); // add custom middleware in the pipeline
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
```
# Minimal API
**NOTE**: Requires .NET 6+
```cs
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<IService, Service>();
builder.Services.AddScoped<IService, Service>();
builder.Services.AddTransient<IService, Service>();
var app = builder.Build();
// [...]
app.Run();
//or
app.RunAsync();
```
## Swagger
```cs
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
// [...]
app.UseSwagger();
app.UseSwaggerUI();
// add returned content metadata to Swagger
app.MapGet("/route", Handler).Produces<Type>(statusCode);
// add request body contents metadata to Swagger
app.MapPost("/route", Handler).Accepts<Type>(contentType);
```
## MVC
```cs
builder.Services.AddControllersWithViews();
//or
builder.Services.AddControllers();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler("/Home/Error");
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}");
```
## Routing, Handlers & Results
To define routes and handlers using Minimal APIs, use the `Map(Get|Post|Put|Delete)` methods.
```cs
// the dependencies are passed as parameters in the handler delegate
app.MapGet("/route/{id}", (IService service, int id) => {
var entity = service.GetById(id); // look up the entity (GetById is an assumed IService method)
return entity is not null ? Results.Ok(entity) : Results.NotFound();
});
// pass delegate to use default values
app.MapGet("/search/{id}", Search);
IResult Search(int id, int? page = 1, int? pageSize = 10) { /* ... */ }
```
### Route Groups
The `MapGroup()` extension method helps organize groups of endpoints with a common prefix.
It allows customizing entire groups of endpoints with a single call to methods like `RequireAuthorization()` and `WithMetadata()`.
```cs
var group = app.MapGroup("<route-prefix>");
group.MapGet("/", GetAllTodos); // route: /<route-prefix>
group.MapGet("/{id}", GetTodo); // route: /<route-prefix>/{id}
// [...]
```
### `TypedResults`
The `Microsoft.AspNetCore.Http.TypedResults` static class is the “typed” equivalent of the existing `Microsoft.AspNetCore.Http.Results` class.
It's possible to use `TypedResults` in minimal APIs to create instances of the in-framework `IResult`-implementing types and preserve the concrete type information.
```cs
public static async Task<IResult> GetAllTodos(TodoDb db)
{
return TypedResults.Ok(await db.Todos.ToArrayAsync());
}
```
```cs
[Fact]
public async Task GetAllTodos_ReturnsOkOfObjectResult()
{
// Arrange
var db = CreateDbContext();
// Act
var result = await TodosApi.GetAllTodos(db);
// Assert: Check the returned result type is correct
Assert.IsType<Ok<Todo[]>>(result);
}
```
### Multiple Result Types
The `Results<TResult1, TResult2, TResultN>` generic union types, along with the `TypedResults` class, can be used to declare that a route handler returns multiple `IResult`-implementing concrete types.
```cs
// Declare that the lambda returns multiple IResult types
app.MapGet("/todos/{id}", async Results<Ok<Todo>, NotFound> (int id, TodoDb db)
{
return await db.Todos.FindAsync(id) is Todo todo
? TypedResults.Ok(todo)
: TypedResults.NotFound();
});
```
## Filters
```cs
public class ExampleFilter : IRouteHandlerFilter
{
public async ValueTask<object?> InvokeAsync(RouteHandlerInvocationContext context, RouteHandlerFilterDelegate next)
{
// before endpoint call
var result = await next(context);
// after endpoint call
return result;
}
}
```
```cs
app.MapPost("/route", Handler).AddFilter<ExampleFilter>();
```
## Context
With Minimal APIs it's possible to access the contextual information by passing one of the following types as a parameter to your handler delegate:
- `HttpContext`
- `HttpRequest`
- `HttpResponse`
- `ClaimsPrincipal`
- `CancellationToken` (RequestAborted)
```cs
app.MapGet("/hello", (ClaimsPrincipal user) => {
return "Hello " + user.FindFirstValue("sub");
});
```
## OpenAPI
The `Microsoft.AspNetCore.OpenApi` package exposes a `WithOpenApi` extension method that generates an `OpenApiOperation` derived from a given endpoints route handler and metadata.
```cs
app.MapGet("/todos/{id}", (int id) => ...)
.WithOpenApi();
app.MapGet("/todos/{id}", (int id) => ...)
.WithOpenApi(operation => {
operation.Summary = "Retrieve a Todo given its ID";
operation.Parameters[0].AllowEmptyValue = false;
});
```
## Validation
Using [Minimal Validation](https://github.com/DamianEdwards/MinimalValidation) by Damian Edwards.
Alternatively it's possible to use [Fluent Validation](https://fluentvalidation.net/).
```cs
app.MapPost("/widgets", (Widget widget) => {
var isValid = MinimalValidation.TryValidate(widget, out var errors);
if(isValid)
{
return Results.Created($"/widgets/{widget.Name}", widget);
}
return Results.BadRequest(errors);
});
class Widget
{
[Required, MinLength(3)]
public string? Name { get; set; }
public override string? ToString() => Name;
}
```
## JSON Serialization
```cs
// Microsoft.AspNetCore.Http.Json.JsonOptions
builder.Services.Configure<JsonOptions>(opt =>
{
opt.SerializerOptions.PropertyNamingPolicy = new SnakeCaseNamingPolicy();
});
```
## Authorization
```cs
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddJwtBearer();
builder.Services.AddAuthorization();
// or
builder.Services.AddAuthorization(options =>
{
// for all endpoints
options.FallbackPolicy = new AuthorizationPolicyBuilder()
.AddAuthenticationSchemes(JwtBearerDefaults.AuthenticationScheme)
.RequireAuthenticatedUser()
.Build();
});
// [...]
app.UseAuthentication();
app.UseAuthorization(); // must come before routes
// [...]
app.MapGet("/alcohol", () => Results.Ok()).RequireAuthorization("<policy>"); // on specific endpoints
app.MapGet("/free-for-all", () => Results.Ok()).AllowAnonymous();
```
# ASP.NET (Core) MVC Web App
## Project Structure
```txt
Project
|-Properties
| |- launchSettings.json
|
|-wwwroot --> location of static files
| |-css
| | |- site.css
| |
| |-js
| | |- site.js
| |
| |-lib
| | |- bootstrap
| | |- jquery
| | |- ...
| |
| |- favicon.ico
|
|-Model
| |-ErrorViewModel.cs
| |- Index.cs
| |-...
|
|-Views
| |-Home
| | |- Index.cshtml
| |
| |-Shared
| | |- _Layout.cshtml --> reusable default page layout
| | |- _ValidationScriptsPartial --> jquery validation script imports
| |
| |- _ViewImports.cshtml --> shared imports and tag helpers for all views
| |- _ViewStart.cshtml --> shared values for all views
| |- ...
|
|-Controllers
| |-HomeController.cs
|
|- appsettings.json
|- Program.cs --> App entry-point
|- Startup.cs --> App config
```
**Note**: the `_` prefix marks shared or partial files meant to be imported into other views rather than served directly.
## Controllers
```cs
using Microsoft.AspNetCore.Mvc;
using App.Models;
using System.Collections.Generic;
namespace App.Controllers
{
public class CategoryController : Controller
{
private readonly AppDbContext _db;
// get db context through dependency injection
public CategoryController(AppDbContext db)
{
_db = db;
}
// GET /Controller/Index
public IActionResult Index()
{
IEnumerable<Entity> entities = _db.Entities;
return View(entities); // pass data to the @model
}
// GET /Controller/Create
public IActionResult Create()
{
return View();
}
// POST /Controller/Create
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult Create(Entity entity) // receive data from the @model
{
_db.Entities.Add(entity);
_db.SaveChanges();
return RedirectToAction("Index"); // redirection
}
// GET - /Controller/Edit
public IActionResult Edit(int? id)
{
if(id == null || id == 0)
{
return NotFound();
}
Entity entity = _db.Entities.Find(id);
if (entity == null)
{
return NotFound();
}
return View(entity); // return populated form for updating
}
// POST /Controller/Edit
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult Edit(Entity entity)
{
if (ModelState.IsValid) // all rules in model have been met
{
_db.Entities.Update(entity);
_db.SaveChanges();
return RedirectToAction("Index");
}
return View(entity);
}
// GET /controller/Delete
public IActionResult Delete(int? id)
{
if (id == null || id == 0)
{
return NotFound();
}
Entity entity = _db.Entities.Find(id);
if (entity == null)
{
return NotFound();
}
return View(entity); // return populated form for confirmation
}
// POST /Controller/Delete
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult Delete(Entity entity)
{
if (ModelState.IsValid) // all rules in model have been met
{
_db.Entities.Remove(entity);
_db.SaveChanges();
return RedirectToAction("Index");
}
return View(entity);
}
}
}
```
## Data Validation
### Model Annotations
In `Entity.cs`:
```cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Linq;
namespace App.Models
{
public class Entity
{
[DisplayName("Integer Number")]
[Required]
[Range(1, int.MaxValue, ErrorMessage = "Error Message")]
public int IntProp { get; set; }
}
}
```
### Tag Helpers & Client Side Validation
In `View.cshtml`:
```cs
<form method="post" asp-action="Create">
<div asp-validation-summary="ModelOnly" class="text-danger"></div>
<div class="form-group row">
<div class="col-4">
<label asp-for="IntProp"></label>
</div>
<div class="col-8">
<input asp-for="IntProp" class="form-control"/>
<span asp-validation-for="IntProp" class="text-danger"></span> // error message displayed here
</div>
</div>
</form>
// client side validation
@section Scripts{
@{ <partial name="_ValidationScriptsPartial" /> }
}
```
### Server Side Validation
```cs
using Microsoft.AspNetCore.Mvc;
using App.Models;
using System.Collections.Generic;
namespace App.Controllers
{
public class CategoryController : Controller
{
private readonly AppDbContext _db;
// get db context through dependency injection
public CategoryController(AppDbContext db)
{
_db = db;
}
// GET /Controller/Index
public IActionResult Index()
{
IEnumerable<Entity> entities = _db.Entities;
return View(entities); // pass data to the @model
}
// GET /Controller/Create
public IActionResult Create()
{
return View();
}
// POST /Controller/Create
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult Create(Entity entity) // receive data from the @model
{
if (ModelState.IsValid) // all rules in model have been met
{
_db.Entities.Add(entity);
_db.SaveChanges();
return RedirectToAction("Index");
}
return View(entity); // return model and display error messages
}
}
}
```
# Razor Pages
## Project Structure
```txt
Project
|-Properties
| |- launchSettings.json
|
|-wwwroot --> static files
| |-css
| | |- site.css
| |
| |-js
| | |- site.js
| |
| |-lib
| | |- jquery
| | |- bootstrap
| | |- ...
| |
| |- favicon.ico
|
|-Pages
| |-Shared
| | |- _Layout.cshtml --> reusable default page layout
| | |- _ValidationScriptsPartial --> jquery validation script imports
| |
| |- _ViewImports.cshtml --> shared imports and tag helpers for all views
| |- _ViewStart.cshtml --> shared values for all views
| |- Index.cshtml
| |- Index.cshtml.cs
| |- ...
|
|- appsettings.json --> application settings
|- Program.cs --> App entry-point
|- Startup.cs
```
**Note**: the `_` prefix marks shared or partial files meant to be imported into other pages rather than served directly.
Razor Pages components:
- Razor Page (UI/View - `.cshtml`)
- Page Model (Handlers - `.cshtml.cs`)
in `Index.cshtml`:
```cs
@page // mark as Razor Page
@model IndexModel // Link Page Model
@{
ViewData["Title"] = "Page Title" // same as <title>Page Title</title>
}
// body contents
```
in `Page.cshtml.cs`:
```cs
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Extensions.Logging;
namespace App.Pages
{
public class IndexModel : PageModel
{
// HTTP Method
public void OnGet() { }
// HTTP Method
public void OnPost() { }
}
}
```
## Razor Page
```cs
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Extensions.Logging;
namespace App.Pages
{
public class IndexModel : PageModel
{
private readonly ApplicationDbContext _db; // EF DB Context
// Get DBContext through DI
public IndexModel(ApplicationDbContext db)
{
_db = db;
}
public IEnumerable<Entity> Entities { get; set; }
[BindProperty] // assumed to be received on POST
public Entity Entity { get; set; }
// HTTP Method Handler
public async Task OnGet()
{
// get data from DB (example operation)
Entities = await _db.Entities.ToListAsync();
}
// HTTP Method Handler
public async Task<IActionResult> OnPost()
{
if (ModelState.IsValid)
{
// save to DB (example operation)
await _db.Entities.AddAsync(Entity);
await _db.SaveChangesAsync();
return RedirectToPage("Index");
}
else
{
return Page();
}
}
}
}
```
## Routing
Rules:
- URL maps to a physical file on disk
- Razor Pages needs a root folder (default: `Pages`)
- file extension not included in URL
- `Index.cshtml` is the entry point and default document (a URL without a file name maps to Index)
| URL | Maps TO |
|------------------------|----------------------------------------------------|
| www.domain.com | /Pages/Index.cshtml |
| www.domain.com/Index | /Pages/Index.cshtml |
| www.domain.com/Account | /Pages/Account.cshtml, /Pages/Account/Index.cshtml |
## Data Validation
### Model Annotations
In `Entity.cs`:
```cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Linq;
namespace App.Models
{
public class Entity
{
[DisplayName("Integer Number")]
[Required]
[Range(1, int.MaxValue, ErrorMessage = "Error Message")]
public int IntProp { get; set; }
}
}
```
### Tag Helpers & Client Side Validation
In `View.cshtml`:
```cs
<form method="post" asp-action="Create">
<div asp-validation-summary="ModelOnly" class="text-danger"></div>
<div class="form-group row">
<div class="col-4">
<label asp-for="IntProp"></label>
</div>
<div class="col-8">
<input asp-for="IntProp" class="form-control"/>
<span asp-validation-for="IntProp" class="text-danger"></span> // error message displayed here
</div>
</div>
</form>
// client side validation
@section Scripts{
@{ <partial name="_ValidationScriptsPartial" /> }
}
```
### Server Side Validation
```cs
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using App.Models;
using System.Collections.Generic;
namespace App.Pages
{
public class IndexModel : PageModel
{
private readonly ApplicationDbContext _db;
// get db context through dependency injection
public IndexModel(ApplicationDbContext db)
{
_db = db;
}
[BindProperty]
public Entity Entity { get; set; }
public async Task OnGet(int id)
{
Entity = await _db.Entities.FindAsync(id);
}
public async Task<IActionResult> OnPost()
{
if (ModelState.IsValid)
{
await _db.SaveChangesAsync();
return RedirectToPage("Index");
}
else
{
return Page();
}
}
}
}
```
# [Razor Syntax](https://docs.microsoft.com/en-us/aspnet/core/mvc/views/razor)
## Markup
```cs
@page // set this as razor page
@model <App>.Models.Entity // if MVC set type of elements passed to the view
@model <Page>Model // if Razor page set underlying class
@* razor comment *@
// substitute @variable with it's value
<tag>@variable</tag>
@{
// razor code block
// can contain C# or HTML
Model // access to passed @model (MVC)
}
@if (condition) { }
@for (init, condition, iteration) { }
@Model.Property // display Property value (MVC)
```
---
## Tag Helpers (ASP.NET Core)
**Tag helpers** are reusable components for automating the generation of HTML in Razor Pages. Tag helpers target specific HTML tags.
Example:
```html
<!-- tag helpers for a link in ASP.NET MVC -->
<a class="nav-link text-dark" asp-area="" asp-controller="Home" asp-action="Index">Home</a>
```
### Managing Tag Helpers
The `@addTagHelper` directive makes Tag Helpers available to the view. Generally, the view file is `Pages/_ViewImports.cshtml`, which by default is inherited by all files in the `Pages` folder and subfolders, making Tag Helpers available.
```cs
@using <App>
@namespace <App>.Pages // or <Project>.Models
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
```
The first parameter after `@addTagHelper` specifies the Tag Helpers to load (`*` for all Tag Helpers), and the second parameter (e.g. `Microsoft.AspNetCore.Mvc.TagHelpers`) specifies the assembly containing the Tag Helpers.
`Microsoft.AspNetCore.Mvc.TagHelpers` is the assembly for the built-in ASP.NET Core Tag Helpers.
#### Opting out of individual elements
It's possible to disable a Tag Helper at the element level with the Tag Helper opt-out character (`!`)
```cshtml
<!-- disable email validation -->
<!span asp-validation-for="Email" ></!span>
```
### Explicit Tag Helpers
The `@tagHelperPrefix` directive allows to specify a tag prefix string to enable Tag Helper support and to make Tag Helper usage explicit.
```cshtml
@tagHelperPrefix th:
```
### Important Tag Helpers (`asp-`) & HTML Helpers (`@Html`)
[Understanding Html Helpers](https://stephenwalther.com/archive/2009/03/03/chapter-6-understanding-html-helpers)
```cs
@model <App>.Models.Entity
// Display the name of the property
@Html.DisplayNameFor(model => model.EntityProp)
@nameof(Model.EntityProp)
// Display the value of the property
@Html.DisplayFor(model => model.EntityProp)
@Model.EntityProp
<form>
// use the property as the label, eventually w/ [DisplayName("...")]
<label asp-for="EntityProp"></label>
@Html.LabelFor()
// automatically set the value at form compilation and submission
<input asp-for="EntityProp"/>
@Html.EditorFor()
</form>
// route config is {Controller}/{Action}/{Id?}
<a asp-controller="<Controller>" asp-action="<Action>">Link</a> // link to /Controller/Action
<a asp-controller="<Controller>" asp-action="<Action>" asp-route-Id="@model.Id">Link</a> // link to /Controller/Action/Id
@Html.ActionLink("<Link Text>", "<Action>", "<Controller>", new { @HtmlAttribute = value, Id = value }) // link to /Controller/Action/Id
// link to /Controller/Action?queryParameter=value
@Html.ActionLink("<Link Text>", "<Action>", "<Controller>", new { @HtmlAttribute = value, queryParameter = value })
<a asp-controller="<Controller>" asp-action="<Action>" asp-route-queryParameter="value">Link</a> // asp-route-* for query strings
```
### [Select Tag Helper](https://docs.microsoft.com/en-us/aspnet/core/mvc/views/working-with-forms)
[StackOverflow](https://stackoverflow.com/a/34624217)
[SelectList Docs](https://docs.microsoft.com/en-us/dotnet/api/system.web.mvc.selectlist)
In `ViewModel.cs`:
```cs
class ViewModel
{
public int EntityId { get; set; } // value selected in form ends up here
// object has numeric id and other props
public SelectList Entities { get; set; }
public ViewModel(){ } // parameterless constructor (NEEDED)
}
```
In `View.cshtml`:
```cs
@model ViewModel
<form asp-controller="Controller" asp-action="PostAction">
<select asp-for="EntityId" asp-items="Model.Entities">
</select>
<button type="submit">Send</button>
</form>
```
In `Controller.cs`:
```cs
public IActionResult GetAction()
{
var vm = new ViewModel();
vm.Entities = new SelectList(_context.Entities, "Id", "Text"); // fill SelectList
vm.EntityId = value; // set selected option (OPTIONAL)
return View(vm);
}
[HttpPost]
public IActionResult PostAction(ViewModel vm)
{
if (ModelState.IsValid)
{
// extract info from view model
// save to db
return RedirectToAction("GetAction");
}
return View(vm); // redisplay the form with validation errors
}
```
# ASP.NET REST API
```cs
[Route("api/endpoint")]
[ApiController]
public class EntitiesController : ControllerBase // API controller
{
private readonly IEntityService _service;
private readonly IMapper _mapper;
public EntitiesController(IEntityService service, IMapper mapper)
{
_service = service;
_mapper = mapper;
}
[HttpGet] // GET api/endpoint
public ActionResult<IEnumerable<EntityDTO>> GetEntities()
{
IEnumerable<EntityDTO> results = /* ... */;
return Ok(results);
}
[HttpGet("{id}")] // GET api/endpoint/{id}
public ActionResult<EntityDTO> GetEntityById(int id)
{
var result = /* .. */;
if(result != null)
{
return Ok(result);
}
return NotFound();
}
[HttpPost] // POST api/endpoint
public ActionResult<EntityDTO> CreateEntity([FromBody] EntityDTO entity)
{
// persist the entity
var id = /* ID of the created entity */;
return Created(id, entity);
}
[HttpPut] // PUT api/endpoint
public ActionResult<EntityDTO> UpdateEntity([FromBody] EntityDTO entity)
{
// persist the updated entity
return Created(uri, entity);
}
}
```
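A delete endpoint follows the same pattern; a minimal sketch (`Delete` is an assumed method on the example `IEntityService`):
```cs
[HttpDelete("{id}")] // DELETE api/endpoint/{id}
public ActionResult DeleteEntity(int id)
{
    if (!_service.Delete(id)) // assumed to return true when an entity was removed
    {
        return NotFound();
    }
    return NoContent();
}
```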
# SignalR
The SignalR Hubs API enables calling methods on connected clients from the server. In the server code, define methods that are called by the client. In the client code, define methods that are called from the server. SignalR takes care of everything behind the scenes that makes real-time client-to-server and server-to-client communication possible.
## Server-Side
### Configuration
In `Startup.cs`:
```cs
namespace App
{
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the DI container.
public void ConfigureServices(IServiceCollection services)
{
services.AddSignalR();
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
app.UseEndpoints(endpoints =>
{
endpoints.MapHub<CustomHub>("/hub/endpoint");
});
}
}
}
```
### Creating Hubs
```cs
public class CustomHub : Hub
{
public Task HubMethod(Type args)
{
// trigger function on all clients and pass args to it
return Clients.All.SendAsync("ClientMethod", args);
// trigger function on caller client and pass args to it
return Clients.Caller.SendAsync("ClientMethod", args);
// trigger function on clients of a group and pass args to it
return Clients.Group("GroupName").SendAsync("ClientMethod", args);
// other operations
}
}
```
### Strongly Typed Hubs
A drawback of using `SendAsync` is that it relies on a magic string to specify the client method to be called. This leaves code open to runtime errors if the method name is misspelled or missing from the client.
An alternative to using SendAsync is to strongly type the Hub with `Hub<T>`.
```cs
public interface IHubClient
{
// matches method to be called on the client
Task ClientMethod(Type args);
}
```
```cs
public class CustomHub : Hub<IHubClient>
{
public Task HubMethod(Type args)
{
return Clients.All.ClientMethod(args);
}
}
```
Using `Hub<T>` enables compile-time checking of the client methods. This prevents issues caused by using magic strings, since `Hub<T>` can only provide access to the methods defined in the interface.
Using a strongly typed `Hub<T>` disables the ability to use `SendAsync`. Any methods defined on the interface can still be defined as asynchronous. In fact, each of these methods should return a `Task`. Since it's an interface, don't use the `async` keyword.
### Handling Connection Events
The SignalR Hubs API provides the `OnConnectedAsync` and `OnDisconnectedAsync` virtual methods to manage and track connections. Override the `OnConnectedAsync` virtual method to perform actions when a client connects to the Hub, such as adding it to a group.
```cs
public override async Task OnConnectedAsync()
{
await Groups.AddToGroupAsync(Context.ConnectionId, "GroupName");
await base.OnConnectedAsync();
}
public override async Task OnDisconnectedAsync(Exception exception)
{
await Groups.RemoveFromGroupAsync(Context.ConnectionId, "GroupName");
await base.OnDisconnectedAsync(exception);
}
```
Override the `OnDisconnectedAsync` virtual method to perform actions when a client disconnects.
If the client disconnects intentionally (by calling `connection.stop()`, for example), the exception parameter will be null.
However, if the client is disconnected due to an error (such as a network failure), the exception parameter will contain an exception describing the failure.
### Sending Errors to the client
Exceptions thrown in hub methods are sent to the client that invoked the method. On the JavaScript client, the `invoke` method returns a JavaScript `Promise`. If a handler is attached to the promise with `catch`, it is invoked and passed the error as a JavaScript `Error` object.
If the Hub throws an exception, connections aren't closed. By default, SignalR returns a generic error message to the client.
If you have an exceptional condition you *do* want to propagate to the client, use the `HubException` class. If you throw a `HubException` from your hub method, SignalR will send the entire message to the client, unmodified.
```cs
public Task ThrowException()
{
throw new HubException("This error will be sent to the client!");
}
```
## Client-Side (JavaScript)
### Installing the client package
```sh
npm init -y
npm install @microsoft/signalr
```
npm installs the package contents in the `node_modules\@microsoft\signalr\dist\browser` folder. Create a new folder named signalr under the `wwwroot\lib` folder. Copy the signalr.js file to the `wwwroot\lib\signalr` folder.
Reference the SignalR JavaScript client in the `<script>` element. For example:
```html
<script src="~/lib/signalr/signalr.js"></script>
```
### Connecting to a Hub
[Reconnect Clients Docs](https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client#reconnect-clients)
```js
const connection = new signalR.HubConnectionBuilder()
.withUrl("/hub/endpoint")
.configureLogging(signalR.LogLevel.Information)
.withAutomaticReconnect() // optional
.build();
// async/await connection start
async function connect() {
try {
await connection.start();
console.log("SignalR Connected.");
} catch (err) {
console.error(err);
}
};
// promise connection start
function connect() {
connection.start()
.then(() => {})
.catch((err) => {console.error(err)});
}
```
### Call hub methods from the client
JavaScript clients call public methods on hubs via the `invoke` method of the `HubConnection`. The `invoke` method accepts:
- The name of the hub method.
- Any arguments defined in the hub method.
```js
try {
await connection.invoke("HubMethod", args);
} catch (err) {
console.error(err);
}
```
The `invoke` method returns a JavaScript `Promise`. The `Promise` is resolved with the return value (if any) when the method on the server returns. If the method on the server throws an error, the `Promise` is rejected with the error message. Use `async` and `await` or the `Promise`'s then and catch methods to handle these cases.
JavaScript clients can also call public methods on hubs via the `send` method of the `HubConnection`. Unlike the `invoke` method, the `send` method doesn't wait for a response from the server. The `send` method returns a JavaScript `Promise`. The `Promise` is resolved when the message has been sent to the server. If there is an error sending the message, the `Promise` is rejected with the error message. Use `async` and `await` or the `Promise`'s `then` and `catch` methods to handle these cases.
### Call client methods from the hub
To receive messages from the hub, define a method using the `on` method of the `HubConnection`. The `on` method accepts:
- The name of the JavaScript client method.
- Arguments the hub passes to the method.
```js
connection.on("ClientMethod", (args) => { /* ... */});
```
# WebForms
## `Page.aspx`
The first loaded page is `Default.aspx` and its underlying code-behind.
```html
<!-- directive -->
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="Project.Default" %>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"> <!-- XML Namespace -->
<head runat="server"> <!-- runat: handle as ASP code -->
<title></title>
</head>
<body>
<!-- web forms require a form tag to be the whole body -->
<form id="form1" runat="server"> <!-- runat: handle as ASP code -->
<div>
</div>
</form>
</body>
</html>
```
### Page Directive
```cs
<%@ Page Language="C#" // define language used (can be C# or VB)
AutoEventWireup="true" // automatically create and setup event handlers
CodeBehind="Default.aspx.cs" // define the underlying code file
Inherits="EmptyWebForm.Default" %>
```
### Web Controls
```xml
<asp:Control ID="" runat="server" ...></asp:Control>
<!-- Label: empty text will display the ID, use a blank space as text for an empty label -->
<asp:Label ID="lbl_" runat="server" Text=" "></asp:Label>
<!-- TextBox -->
<asp:TextBox ID="txt_" runat="server"></asp:TextBox>
<!-- Button -->
<asp:Button ID="btn_" runat="server" Text="ButtonText" OnClick="btn_Click" />
<!-- HyperLink -->
<asp:HyperLink ID="lnk_" runat="server" NavigateUrl="~/Page.aspx">LINK TEXT</asp:HyperLink>
<!-- LinkButton: the PostBack event reloads the page -->
<asp:LinkButton ID="lbtHome" runat="server" PostBackUrl="~/Page.aspx" OnClick="lbt_Click">BUTTON TEXT</asp:LinkButton>
<!-- Image -->
<asp:Image ID="img_" runat="server" ImageUrl="~/Images/image.png"/>
<!-- ImageButton -->
<asp:ImageButton ID="imb_" runat="server" ImageUrl="~/Images/image.png" PostBackUrl="~/Page.aspx"/>
<!-- SqlDataSource: connection string specified in Web.config -->
<asp:SqlDataSource ID="sds_" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString %>" SelectCommand="SQL Query"></asp:SqlDataSource>
```
## `Page.aspx.cs`
```cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
namespace Project
{
public partial class Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
protected void Control_Event(object sender, EventArgs e)
{
// actions on event trigger
}
}
}
```
# [ADO.NET](https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/ "ADO.NET Docs")
`ADO.NET` is a set of classes that expose data access services for .NET.
The `ADO.NET` classes are found in `System.Data.dll`, and are integrated with the XML classes found in `System.Xml.dll`.
[ADO.NET provider for SQLite](https://system.data.sqlite.org/index.html/doc/trunk/www/index.wiki "System.Data.SQLite")
## [Connection Strings](https://www.connectionstrings.com)
### [SQL Server 2019](https://www.connectionstrings.com/sql-server-2019/)
- Standard Security:
- `Server=<server_name>; Database=<database>; UID=<user>; Pwd=<password>;`
- `Server=<server_name>; Database=<database>; User ID=<user>; Password=<password>;`
- `Data Source=<server_name>; Initial Catalog=<database>; UID=<user>; Pwd=<password>;`
- Specific Instance: `Server=<server_name>\<instance_name>; Database=<database>; User ID=<user>; Password=<password>;`
- Trusted Connection (WinAuth): `Server=<server_name>; Database=<database>; Trusted_Connection=True;`
- MARS: `Server=<server_name>; Database=<database>; Trusted_Connection=True; MultipleActiveResultSets=True;`
**NOTE**: *Multiple Active Result Sets* (MARS) is a feature that works with SQL Server to allow the execution of multiple batches on a single connection.
### [SQLite](https://www.connectionstrings.com/sqlite/)
- Basic: `Data Source=path\to\db.sqlite3; Version=3;`
- In-Memory Database: `Data Source=:memory:; Version=3; New=True;`
- With Password: `Data Source=path\to\db.sqlite3; Version=3; Password=<password>;`
## Connection to DB
```cs
using System;
using System.Data.SqlClient; // ADO.NET Provider, installed through NuGet
namespace <namespace>
{
class Program
{
static void Main(string[] args)
{
// Connection to SQL Server DBMS
SqlConnectionStringBuilder connectionString = new SqlConnectionStringBuilder();
connectionString.DataSource = "<server_name>";
connectionString.UserID = "<user>";
connectionString.Password = "<password>";
connectionString.InitialCatalog = "<database>";
// more compact
SqlConnectionStringBuilder connectionString = new SqlConnectionStringBuilder("Server=<server_name>;Database=<database>;UID=<user>;Pwd=<password>");
}
}
}
```
## DB Interrogation
### `SqlConnection`
```cs
using (SqlConnection connection = new SqlConnection())
{
connection.ConnectionString = connectionString.ConnectionString;
connection.Open(); // start communication w/ sql server
}
// more compact
using (SqlConnection connection = new SqlConnection(connectionString)) {
connection.Open();
}
```
### [SqlCommand](https://docs.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlcommand)
```cs
string sql = "<sql_instruction>";
using (SqlCommand command = new SqlCommand())
{
command.Connection = connection; // SqlConnection
command.CommandText = "... @Parameter"; // or name of StoredProcedure
// add parameters to the SqlParameterCollection, WARNING: table names or columns cannot be parameters
command.Parameters.Add("@Parameter", SqlDbType.<DBType>, columnLength).Value = value;
command.Parameters.AddWithValue("@Parameter", value);
command.Parameters.AddWithValue("@Parameter", (object) value ?? DBNull.Value); // if Parameter is nullable
// Create an instance of a SqlParameter object.
command.CreateParameter();
command.CommandType = CommandType.Text; // or StoredProcedure
int affectedRows = command.ExecuteNonQuery(); // execute the query and return the number of affected rows
}
```
### `SqlDataReader`
```cs
using (SqlDataReader cursor = command.ExecuteReader()) // object to get data from db
{
while (cursor.Read()) // loop while there are rows to read
{
// preferred methodology
cursor["<column_name>"].ToString(); // retrieve data from the column
cursor[<column_index>].ToString(); // retrieve data from the column
// check for null before retrieving the value
if(!cursor.IsDBNull(n))
{
cursor.Get<SystemType>(index); // retrieve data form the n-th column
}
}
}
```
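A minimal end-to-end sketch combining the pieces above (the `People` table and its `Id`/`Name` columns are only illustrative, and `connectionString` is the `SqlConnectionStringBuilder` built earlier):
```cs
string query = "SELECT Id, Name FROM People WHERE Id > @MinId";

using (SqlConnection connection = new SqlConnection(connectionString.ConnectionString))
{
    connection.Open();

    using (SqlCommand command = new SqlCommand(query, connection))
    {
        // parameters are sent separately from the SQL text
        command.Parameters.AddWithValue("@MinId", 10);

        using (SqlDataReader cursor = command.ExecuteReader())
        {
            while (cursor.Read())
            {
                // column 0 is Id, column 1 is Name
                if (!cursor.IsDBNull(1))
                {
                    Console.WriteLine($"{cursor.GetInt32(0)}: {cursor.GetString(1)}");
                }
            }
        }
    }
}
```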

# Entity Framework
## Model & Data Annotations
```cs
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
namespace <Project>.Model
{
public class Entity
{
[Key] // set as PK (Id & EntityId are automatically detected to be PKs)
public int Id { get; set; }
[Required]
public Type ForeignObject { get; set; } // Not Null in DB
public Type ForeignObject { get; set; } // Allow Null in DB
public int Prop { get; set; } // Not Null in DB (primitive are not nullable)
public int? Prop { get; set; } // Allow Null in DB
}
}
```
## Context
NuGet Packages to install:
- `Microsoft.EntityFrameworkCore`
- `Microsoft.EntityFrameworkCore.Tools` to use migrations in Visual Studio
- `Microsoft.EntityFrameworkCore.Tools.DotNet` to use migrations in `dotnet` cli (`dotnet-ef`)
- `Microsoft.EntityFrameworkCore.Design` *or* `Microsoft.EntityFrameworkCore.<db_provider>.Design` needed for tools to work (bundled w\ tools)
- `Microsoft.EntityFrameworkCore.<db_provider>`
```cs
using Microsoft.EntityFrameworkCore;
namespace <Project>.Model
{
class Context : DbContext
{
private const string _connectionString = "Server=<server_name>;Database=<database>;UID=<user>;Pwd=<password>";
// connect to db
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseSqlServer(_connectionString); // specify connection
}
// or
public Context(DbContextOptions options) : base(options)
{
}
//DBSet<TEntity> represents the collection of all entities in the context (or that can be queried from the database) of a given type
public DbSet<Entity> Entities { get; set; }
public DbSet<Entity> Entities => Set<Entity>(); // with nullable reference types
}
}
```
## Migrations
Create & Update DB Schema if necessary.
In Package Manager Shell:
```ps1
PM> Add-Migration <migration_name>
PM> update-database [-Verbose] # use the migrations to modify the db, -Verbose to show SQL queries
```
In dotnet cli:
```ps1
dotnet tool install --global dotnet-ef # if not already installed
dotnet ef migrations add <migration_name>
dotnet ef database update
```
## CRUD
### Create
```cs
context.Add(entity);
context.AddRange(entities);
context.SaveChanges();
```
### Read
[Referenced Object Not Loading Fix](https://stackoverflow.com/a/5385288)
```cs
context.Entities.ToList();
context.Entities.Find(id);
// force loading of the referenced object (eager loading); Include() requires using Microsoft.EntityFrameworkCore
context.Entities.Include(e => e.ForeignObject).Single(e => e.Id == id);
```
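The `DbSet` also composes with LINQ; a small sketch of a filtered read (property names follow the model above, and `using System.Linq` is required):
```cs
var results = context.Entities
    .Where(e => e.Prop > 10)            // translated to SQL, runs on the database
    .OrderBy(e => e.Prop)
    .Select(e => new { e.Id, e.Prop })  // project only the needed columns
    .ToList();                          // the query executes here
```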
### Update
```cs
context.Entities.Update(entity);
context.UpdateRange(entities);
context.SaveChanges();
```
### Delete
```cs
context.Entities.Remove(entity);
context.RemoveRange(entities);
context.SaveChanges();
```

# Git
## Glossary
**GIT**: an open source, distributed version-control system
**GITHUB**: a platform for hosting and collaborating on Git repositories
**TREE**: directory that maps names to blobs or trees
**BLOB**: any file
**COMMIT**: snapshot of the entire repository + metadata, identified by its SHA-1 hash
**HEAD**: pointer to the currently checked-out commit or branch; it can be moved to different branches, tags, or commits when using git checkout
**BRANCH**: a lightweight movable pointer to a commit
**CLONE**: a local version of a repository, including all commits and branches
**REMOTE**: a common repository on GitHub that all team members use to exchange their changes
**FORK**: a copy of a repository on GitHub owned by a different user
**PULL REQUEST**: a place to compare and discuss the differences introduced on a branch with reviews, comments, integrated tests, and more
**REPOSITORY**: collection of files and folders of a project (aka repo)
**STAGING AREA**: area where changes are collected (staged) before becoming part of the next commit
**STASH**: area of temporary snapshots of uncommitted changes (not yet commits)
## Data Model
Data Model Structure:
```txt
<root> (tree)
|
|_ foo (tree)
| |_ bar.txt (blob, contents = "hello world")
|
|_ baz.txt (blob, contents = "git is wonderful")
```
Data Model as pseudocode:
```py
# a file is a bunch of bytes
blob = array<byte>
# a directory contains named files and directories
tree = map<string, tree | blob>
# a commit has parents, metadata, and the top-level tree
commit = struct {
parent: array<commit>
author: string
message: string
snapshot: tree
}
# an object is either a blob, a tree or a commit
object = blob | tree | commit
# all objects are content-addressed by their SHA-1 hash (immutable)
objects = map<string, object>
def store(object):
    id = sha1(object)    # hash the object
    objects[id] = object # store the object keyed by its hash
# load an object given its hash
def load(id):
    return objects[id]
# human-readable names for SHA-1 hashes (mutable)
references = map<string, string>
# bind a reference to a hash
def update_reference(name, id):
    references[name] = id
def read_reference(name):
    return references[name]
def load_reference(name_or_id):
    if name_or_id in references:
        return load(references[name_or_id])
    else:
        return load(name_or_id)
```
## Commands
`git help <command>`: get help for a git command
### Create Repository
`git init [<project_name>]`: initialize a brand new Git repository and begins tracking
`.gitignore`: specify intentionally untracked files to ignore
### Config
`git config --global user.name "<name>"`: set name attached to commits
`git config --global user.email "<email address>"`: set email attached to commits
`git config --global color.ui auto`: enable colorization of command line output
### Making Changes
`git status`: shows the status of changes as untracked, modified, or staged
`git add <filename1 filename2 ...>`: add files to the staging area
`git add -p <files>`: interactively stage chunks of a file
`git blame <file>`: show who last edited which line
`git commit`: save the snapshot to the project history
`git commit -m "message"`: commit and provide a message
`git commit -a`: automatically notice any modified (but not new) files and commit
`git commit -v|--verbose`: show unified diff between the HEAD commit and what would be committed
`git diff <filename>`: show difference since the last commit
`git diff <commit> <filename>`: show differences in a file since a particular snapshot
`git diff <reference_1> <reference_2> <filename>`: show differences in a file between two snapshots
`git diff --cached`: show what is about to be committed
`git diff <first-branch>...<second-branch>`: show content diff between two branches
`git bisect`: binary search history (e.g. for regressions)
### Stashes
`git stash [push -m|--message]`: add all changes to the stash (and provide message)
`git stash list`: list all stashes
`git stash show [<stash>]`: show changes in the stash
`git stash pop`: restore last stash
`git stash drop [<stash>]`: remove a stash from the list
`git stash clear`: remove all stashes
### Remotes
`git remote`: list remotes
`git remote -v`: list remotes names and URLs
`git remote show <remote>`: inspect the remote
`git remote add <remote> <url | path>`: add a remote
`git branch --set-upstream-to=<remote>/<remote branch>`: set up correspondence between local and remote branch
`git push <remote> <branch>`: send objects to remote
`git push <remote> <local branch>:<remote branch>`: send objects to remote, and update remote reference
`git fetch [<remote>]`: retrieve objects/references from a remote
`git pull`: update the local branch with updates from its remote counterpart, same as `git fetch; git merge`
`git pull --ff`: when possible resolve the merge as a fast-forward (only update branch pointer, don't create merge commit). Otherwise create a merge commit.
`git fetch && git show <remote>/<branch>`: show incoming changes
`git clone <url> [<folder_name>]`: download repository and repo history from remote
`git clone --depth <depth>`: shallow clone, fetch only the last `<depth>` commits instead of the full history
`git remote remove <remote>`: remove the specified remote
`git remote rename <old_name> <new_name>`: rename a remote
### Viewing Project History
`git log`: show history of changes
`git log -p`: show history of changes and complete differences
`git log --stat --summary`: show overview of the change
`git log --follow <file>`: list version history of a file, including renames
`git log --all --graph --decorate`: visualizes history as a DAG
`git log --oneline`: compact log
`git shortlog`: list commits by author
`git show <commit>`: output metadata and content changes of commit
`git cat-file -p <commit>`: output commit metadata
### Tag
Git supports two types of tags: *lightweight* and *annotated*.
A lightweight tag is very much like a branch that doesn't change—it's just a pointer to a specific commit.
Annotated tags, however, are stored as full objects in the Git database.
They're checksummed; contain the tagger name, email, and date; have a tagging message; and can be signed and verified with GNU Privacy Guard (GPG).
It's generally recommended to create annotated tags so that all this information is available.
`git tag`: list existing tags
`git tag -l|--list <pattern>`: list existing tags matching a wildcard or pattern
`git tag <tag> [<commit_hash>]`: create a *lightweight* tag on the commit
`git tag -a <tag> [<commit_hash>] -m "<message>"`: create an *annotated* tag on the commit
`git push <remote> <tagname>`: push a tag to the remote
`git push <remote> --tags`: push commits and their tags (both types) to the remote
`git tag -d <tagname>`: delete a tag
`git push <remote> :refs/tags/<tagname>`: remove a tag from the remote
`git push <remote> --delete <tagname>`: remove a tag from the remote
`git checkout <tag>`: checkout a tag - **WARNING**: will go into *detached HEAD*
### Branching And Merging
`git branch`: shows branches
`git branch -v`: show branch + last commit
`git branch <branch-name>`: create new branch
`git checkout -b <branch-name>`: create a branch and switches to it, same as `git branch <name>; git checkout <name>`
`git branch`: show list of all existing branches (* indicates current)
`git checkout <branch-name>`: change current branch (update HEAD) and update working directory
`git branch -d <branch-name>`: delete specified branch
`git branch -m <old_name> <new_name>`: rename a branch without affecting the branch's history
`git merge <branch-name>`: merges into current branch
`git merge --continue`: continue previous merge after solving a merge conflict
`git mergetool`: use a fancy tool to help resolve merge conflicts
`git rebase`: rebase set of patches onto a new base
`git rebase -i`: interactive rebasing
`git cherry-pick <commit>`: bring in a commit from another branch
`git cherry-pick <commit>^..<commit>`: bring in a range of commits from another branch (first included)
`git cherry-pick <commit>..<commit>`: bring in a range of commits from another branch (first excluded)
### Undo & [Rewriting History](https://www.themoderncoder.com/rewriting-git-history/)
`git commit --amend`: replace last commit by creating a new one (can add files or rewrite commit message)
`git commit --amend -m "amended message"`: replace last commit by creating a new one (can add files or rewrite commit message)
`git commit --amend --no-edit`: replace last commit by creating a new one (can add files or rewrite commit message)
`git reset HEAD <file>`: unstage a file
`git reset <commit>`: undo all commits after specified commit, preserving changes locally
`git checkout <file>`: discard changes
`git checkout -- <file>`: discard changes (`--` separates the file from branch or commit names)
`git reset --soft <commit>`: revert to specific commit but keep changes and staged files
`git reset --hard <commit>`: discard all history and changes back to specified commit
`git rebase -i HEAD~<n>`: modify (reword, edit, drop, squash, merge, ...) *n* commits
`git rm --cached <file>`: remove a file from being tracked
**WARNING**: Changing history can have nasty side effects
---
## How To
### Rebase Branches
```ps1
git checkout <primary_branch>
git pull # get up to date
git checkout <feature_branch>
git rebase <primary_branch> # rebase commits on master (moves branch start point on last master commit)
git checkout <primary_branch>
git rebase <feature_branch> # moves commits from the branch on top of master
```
![branch](../img/git_branches.png "how branches work")
### Clone Branches
```ps1
git clone <repo> # clone the repo
git branch -r # show remote branches
git checkout <branch> # checkout remote branch (omit <remote>/)
git pull # clone branch
```
### [Sync Forks](https://docs.github.com/en/free-pro-team@latest/github/collaborating-with-issues-and-pull-requests/syncing-a-fork)
```ps1
git fetch upstream # Fetch the branches and their respective commits from the upstream repository
git checkout main # checkout fork's main primary branch
git merge upstream/main # Merge the changes from the upstream default branch into the local default branch
git push # update fork on GitHub
```

# GraphQL
[How to GraphQL - The Fullstack Tutorial for GraphQL](https://www.howtographql.com/)
GraphQL is a query language for APIs, and a server-side runtime for executing queries by using a type system for the data. GraphQL isn't tied to any specific database or storage engine and is instead backed by existing code and data.
A GraphQL service is created by defining types and fields on those types, then providing functions for each field on each type.
---
## Schema and Types
### Object types and fields
The most basic components of a GraphQL schema are object types, which just represent a kind of object fetchable from the service, and what fields it has.
```graphql
type Type {
field: Type
field: Type! # non-nullable type
field: [Type] # array of objects
field: [Type!]! # non-nullable array of non-nullable objects
}
```
### Field Arguments
Every field on a GraphQL object type can have zero or more arguments. All arguments are named.
Arguments can be either *required* or *optional*. When an argument is optional, it's possible to define a default value.
```graphql
type Type {
field: Type,
field(namedArg: Type = defaultValue): Type
}
```
### Query and Mutation types
Every GraphQL service has a `query` type and may or may not have a `mutation` type. These types are the same as a regular object type, but they are special because they define the *entry point* of every GraphQL query.
### Scalar Types
A GraphQL object type has a name and fields, but at some point those fields have to resolve to some concrete data.
That's where the scalar types come in: they represent the *leaves* of the query. Scalar types do not have sub-types and fields.
GraphQL comes with a set of default scalar types out of the box:
- `Int`: A signed 32-bit integer.
- `Float`: A signed double-precision floating-point value.
- `String`: A UTF-8 character sequence.
- `Boolean`: `true` or `false`.
- `ID`: The ID scalar type represents a unique identifier, often used to refetch an object or as the key for a cache. The ID type is serialized in the same way as a `String`; however, defining it as an `ID` signifies that it is not intended to be human-readable.
In most GraphQL service implementations, there is also a way to specify custom scalar types.
```graphql
scalar ScalarType
```
Then it's up to the implementation to define how that type should be serialized, deserialized, and validated.
### Enumeration Types
Also called *Enums*, enumeration types are a special kind of scalar that is restricted to a particular set of allowed values.
This allows to:
1. Validate that any arguments of this type are one of the allowed values
2. Communicate through the type system that a field will always be one of a finite set of values
```graphql
enum Type {
VALUE,
VALUE,
...
}
```
**Note**: GraphQL service implementations in various languages will have their own language-specific way to deal with enums. In languages that support enums as a first-class citizen, the implementation might take advantage of that; in a language like JavaScript with no enum support, these values might be internally mapped to a set of integers. However, these details don't leak out to the client, which can operate entirely in terms of the string names of the enum values.
## Lists and Non-Null
Object types, scalars, and enums are the only kinds of types that can be defined in GraphQL.
But when used in other parts of the schema, or in the query variable declarations, it's possible to apply additional *type modifiers* that affect **validation** of those values.
It's possible to mark a field as *Non-Null* by adding an exclamation mark, `!` after the type name. This means that the server always expects to return a non-null value for this field, and if it ends up getting a null value that will actually trigger a GraphQL execution error, letting the client know that something has gone wrong.
The *Non-Null* type modifier can also be used when defining arguments for a field, which will cause the GraphQL server to return a validation error if a null value is passed as that argument, whether in the GraphQL string or in the variables.
It's possible to use a type modifier to mark a type as a `List`, which indicates that this field will return an array of that type. In the schema language, this is denoted by wrapping the type in square brackets, `[` and `]`. It works the same for arguments, where the validation step will expect an array for that value.
### Interfaces
Like many type systems, GraphQL supports interfaces. An Interface is an abstract type that includes a certain set of fields that a type must include to implement the interface.
Interfaces are useful when returning an object or set of objects, but those might be of several different types.
```graphql
interface Interface {
fieldA: TypeA
fieldB: TypeB
}
type Type implements Interface {
fieldA: TypeA,
fieldB: TypeB
field: Type,
...
}
```
### Union Type
Union types are useful when returning an object or set of objects that might be of several different types but don't share a common interface.
```graphql
union Union = TypeA | TypeB | TypeC
```
**Note**: members of a union type need to be *concrete* object types; it's not possible to create a union type out of interfaces or other unions.
### Input Type
In the GraphQL schema language, input types look exactly the same as regular object types, but with the keyword input instead of type:
```graphql
input Input {
field: Type,
...
}
```
The fields on an input object type can themselves refer to input object types, but it's not possible to mix input and output types in the schema.
Input object types also can't have arguments on their fields.
---
## Queries, Mutations and Subscriptions
### Simple Query
```graphql
{
field { # root field
... # payload
}
}
```
```json
{
"data" : {
"field": {
...
}
}
}
```
### Query Arguments
In a system like REST, it's possible to only pass a single set of arguments - the query parameters and URL segments in your request.
But in GraphQL, every field and nested object can get its own set of arguments, making GraphQL a complete replacement for making multiple API fetches.
It's also possible to pass arguments into scalar fields, to implement data transformations once on the server, instead of on every client separately.
```graphql
{
fieldA(arg: value) # filter results
fieldB(arg: value)
...
}
```
### Aliases
```graphql
{
aliasA: field(arg: valueA) {
field
}
aliasB: field(arg: valueB) {
field
}
}
```
### Fragments
Fragments allow constructing sets of fields, and then including them in the queries where they are needed.
The concept of fragments is frequently used to split complicated application data requirements into smaller chunks.
```graphql
{
aliasA: field(arg: valueA) {
...fragment
},
aliasB: field(arg: valueB) {
...fragment
}
}
# define a set of fields to be retrieved
fragment fragment on Type {
field
...
}
```
### Using variables inside fragments
It is possible for fragments to access variables declared in the query or mutation.
```graphql
query Query($var: Type = value) {
aliasA: field(arg: valueA) {
...fragment
}
aliasB: field(arg: valueB) {
...fragment
}
}
fragment fragment on Type {
field
field(arg: $var) {
field
...
}
}
```
### Operation Name
The *operation type* is either `query`, `mutation`, or `subscription` and describes what type of operation is intended. The operation type is required unless the *query shorthand syntax* is used, in which case it's not possible to supply a name or variable definitions for the operation.
The *operation name* is a meaningful and explicit name for the operation. It is only required in multi-operation documents, but its use is encouraged because it is very helpful for debugging and server-side logging. When something goes wrong it is easier to identify a query in the codebase by name instead of trying to decipher the contents.
```graphql
query Operation {
...
}
```
### Variables
When working with variables, three things need to be done:
1. Replace the static value in the query with `$variableName`
2. Declare `$variableName` as one of the variables accepted by the query
3. Pass `variableName: value` in the separate, transport-specific (usually JSON) variables dictionary
```graphql
query Operation($var: Type = defaultValue) {
field(arg: $var) {
field
...
}
}
```
All declared variables must be either *scalars*, *enums*, or *input* object types. So to pass a complex object into a field, the input type that matches on the server must be known.
Variable definitions can be *optional* or *required*. If the field requires a non-null argument, then the variable has to be required as well.
Default values can also be assigned to the variables in the query by adding the default value after the type declaration. When default values are provided for all variables, it's possible to call the query without passing any variables. If any variables are passed as part of the variables dictionary, they will override the defaults.
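For example, with the query above the client would send a variables dictionary like this alongside the query (the value shown is only illustrative):
```json
{
  "var": "value"
}
```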
### Directives
A directive can be attached to a field or fragment inclusion, and can affect execution of the query in any way the server desires.
The core GraphQL specification includes exactly two directives, which must be supported by any spec-compliant GraphQL server implementation:
- `@include(if: Boolean)` Only include this field in the result if the argument is `true`.
- `@skip(if: Boolean)` Skip this field if the argument is `true`.
Server implementations may also add experimental features by defining completely new directives.
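A small sketch of both directives in use (field and variable names are illustrative):
```graphql
query Operation($withDetails: Boolean!) {
  field {
    fieldA
    fieldB @include(if: $withDetails) # returned only when $withDetails is true
    fieldC @skip(if: $withDetails)    # omitted when $withDetails is true
  }
}
```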
### Mutations
Operations of mutations:
- **Creating** new data
- **Updating** existing data
- **Deleting** existing data
```graphql
mutation Operation {
createObject(arg: value, ...) {
field
..
}
}
```
### Subscriptions
Open a stable connection with the server to receive real-time updates on the operations happening.
```graphql
subscription Operation {
event { # get notified when event happens
field # data received on notification
...
}
}
```
### Inline Fragments
If you are querying a field that returns an interface or a union type, you will need to use inline fragments to access data on the underlying concrete type. Named fragments can also be used in the same way, since a named fragment always has a type attached.
```graphql
query Operation($var: Type) {
field(arg: $var) { # interface or union type
field
... on ConcreteTypeA {
fieldA
}
... on ConcreteTypeB {
fieldB
}
}
}
```
### Meta Fields
GraphQL allows to request `__typename`, a meta field, at any point in a query to get the name of the object type at that point.
```graphql
{
field(arg: value) {
__typename
... on Type {
field
}
}
}
```
---
## Execution
After being validated, a GraphQL query is executed by a GraphQL server which returns a result that mirrors the shape of the requested query, typically as JSON.
Each field on each type is backed by a function called the *resolver* which is provided by the GraphQL server developer. When a field is executed, the corresponding *resolver* is called to produce the next value.
If a field produces a scalar value like a string or number, then the execution completes. However if a field produces an object value then the query will contain another selection of fields which apply to that object. This continues until scalar values are reached. GraphQL queries always end at scalar values.
### Root fields and Resolvers
At the top level of every GraphQL server is a type that represents all of the possible entry points into the GraphQL API, it's often called the *Root* type or the *Query* type.
```graphql
# root types for entry-points
type Query {
rootField(arg: Type = defValue, ...): Type
... # other query entry points
}
type Mutation {
rootField(arg: Type = defValue, ...): Type
... # other mutation entry points
}
type Subscription {
rootField(arg: Type = defValue, ...): Type
... # other subscription entry points
}
```
A resolver function receives four arguments (see the sketch after this list):
- `obj` The previous object, which for a field on the root Query type is often not used.
- `args` The arguments provided to the field in the GraphQL query.
- `context` A value which is provided to every resolver and holds important contextual information like the currently logged in user, or access to a database.
- `info` A value which holds field-specific information relevant to the current query as well as the schema details
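As a sketch, a resolver map in a JavaScript-style implementation could look like the following; it is not tied to a specific library, and `context.db` and `findById` are assumptions used only for illustration:
```js
// One resolver per field of the root Query type, using the
// (obj, args, context, info) signature described above.
const resolvers = {
  Query: {
    rootField(obj, args, context, info) {
      // args holds the field arguments, context the shared per-request state
      return context.db.findById(args.id); // assumed data-access helper
    },
  },
};
```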

# HTML
## Terminology
**Web design**: The process of planning, structuring and creating a website.
**Web development**: The process of programming dynamic web applications.
**Front end**: The outwardly visible elements of a website or application.
**Back end**: The inner workings and functionality of a website or application.
## Anatomy of an HTML Element
**Element**: Building blocks of web pages, an individual component of HTML.
**Tag**: Opening tag marks the beginning of an element & closing tag marks the end.
Tags contain characters that indicate the tag's purpose.
`<tagname> content </tagname>`
**Container Element**: An element that can contain other elements or content.
**Stand Alone Element**: An element that cannot contain anything else.
**Attribute**: Provides additional information about the HTML element. Placed inside an opening tag, before the right angle bracket.
**Value**: Value is the value assigned to a given attribute. Values must be contained inside quotation marks (“”).
## The Doctype
The first thing on an HTML page is the doctype, which tells the browser which version of the markup language the page is using.
### XHTML 1.0 Strict
```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
```
### HTML4 Transitional
```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
```
### HTML5
`<!doctype html>`
## The HTML Element
After the doctype, the page content must be contained between `<html>` tags.
```html
<!doctype html>
<html lang="en">
<!-- page contents -->
</html>
```
### The HEAD Element
The head contains the title of the page & meta information about the page. Meta information is not visible to the user, but has many purposes, like providing information to search engines.
UTF-8 is a character encoding capable of encoding all possible characters, or code points, defined by Unicode. The encoding is variable-length and uses 8-bit code units.
XHTML and HTML4: `<meta http-equiv="Content-Type" content="text/html; charset=utf-8"></meta>`
HTML5: `<meta charset="utf-8">`
### HTML Shiv (Polyfill)
Used to make older browsers understand HTML5 and newer elements.
```html
<!--[if lt IE 9]>
<script src="https://cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv.js"></script>
<![endif]-->
```
## The BODY Element
The body contains the actual content of the page. Everything that is contained in the body is visible to the user.
```html
<body>
<!-- page contents -->
</body>
```
## JavaScript
XHTML and older: `<script src="js/scripts.js" type="text/javascript"></script>`
HTML5: `<script src="js/scripts.js"></script>` (HTML5 spec states that `type` attribute is redundant and should be omitted)
The `<script>` tag is used to define a client-side script (JavaScript).
The `<script>` element either contains scripting statements, or it points to an external script file through the src attribute.
### Local, Remote or Inline JavaScript
**Remote**: `<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>`
**Local**: `<script src="js/main.js"></script>`
**Inline**: `<script> javascript-code-here </script>`
## Forms
Forms allow collecting data from the user:
* signing up and logging in to websites
* entering personal information
* filtering content (using dropdowns, checkboxes)
* performing a search
* uploading files
Forms contain elements called controls (text inputs, checkboxes, radio buttons, submit buttons).
When users complete a form the data is usually submitted to a web server for processing.
### [Form Validation](https://developer.mozilla.org/en-US/docs/Learn/Forms/Form_validation)
Validation is a mechanism to ensure the correctness of user input.
Uses of Validation:
* Make sure that all required information has been entered
* Limit the information to certain types (e.g. only numbers)
* Make sure that the information follows a standard (e.g. email, credit card number)
* Limit the information to a certain length
* Other validation required by the application or the back-end services
#### Front-End Validation
The application should validate all information to make sure that it is complete, free of errors and conforms to the specifications required by the back-end.
It should contain mechanisms to warn users if input is not complete or correct.
It should avoid sending "bad" data to the back-end.
### Back-End Validation
It should never trust that the front-end has done validation since some clever users can bypass the front-end mechanisms easily.
Back-end services can receive data from other services, not necessarily front-end, that don't perform validation.
#### Built-In Validation
Not all browsers validate in the same way and some follow the specs partially. Some browsers don't have validation at all (older desktop browsers, some mobile browsers).
Apart from declaring validation intention with HTML5, developers don't have much control over what the browser actually does.
Before using built-in validation, make sure that it's supported by the target browsers.
#### Validation with JavaScript
* Gives the developer more control.
* The developer can make sure it works on all target browsers.
* Requires a lot of custom coding, or using a library (common practice); see the sketch below.
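A minimal sketch using the browser's built-in Constraint Validation API from JavaScript (the form and element ids are illustrative):
```html
<form id="signup-form">
  <label for="user-email">Email:</label>
  <input type="email" name="user-email" id="user-email" required>
  <button type="submit">Send</button>
</form>

<script>
  var form = document.getElementById("signup-form");
  form.addEventListener("submit", function (event) {
    // checkValidity() runs the built-in rules (required, type="email", ...)
    if (!form.checkValidity()) {
      event.preventDefault();  // block the submission
      form.reportValidity();   // show the built-in error messages
    }
  });
</script>
```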
---
## General structure of HTML page
```html
<!-- HTML Boilerplate -->
<!DOCTYPE html>
<html lang="en">
<head>
<!-- meta tag -->
<meta charset="utf-8">
<title></title>
<meta name="description" content="">
<meta name="author" content="">
<!-- adapt page dimensions to device -->
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<!-- external style sheet here -->
<link rel="stylesheet" href="path/style-sheet.css">
<!-- script if necessary -->
<script src="_.js" type="text/javascript"></script>
<!-- script is executed only after the page finishes loading-->
<script src="_.js" defer type="text/javascript"></script>
</head>
<body>
<!-- end of body -->
<script src="_.js" type="text/javascript"></script>
</body>
</html>
```
Attributes describe additional characteristics of an HTML element.
`<tagname attribute="value"> content </tagname>`
### Meta Tag Structure
`<meta name="info-name" content="info-content">`
### Paragraph
Paragraphs allow to format the content in a readable fashion.
```html
<p> paragraph-content </p>
<p> paragraph-content </p>
```
### Headings
Heading numbers indicates hierarchy, not size.
```html
<h1> Heading 1 </h1>
<h2> Heading 2 </h2>
```
### Formatted Text
With semantic value:
* Emphasized text (default cursive): `<em></em>`
* Important text (default bold): `<strong></strong>`
Without semantic value, used as last resort:
* Italic text: `<i></i>`
* Bold text: `<b></b>`
## Elements
`<br/>`: Line break (carriage return). It's not good practice to put line breaks inside paragraphs.
`<hr>`: horizontal rule (line). Used to define a thematic change in the content.
### Links/Anchor
Surround content to turn into links.
```html
<!-- Link to absolute URL -->
<a href="uri/url" title="content-title" target="_self"> text/image </a>
<!-- links to relative URL -->
<a href="//example.com">Scheme-relative URL</a>
<a href="/en-US/docs/Web/HTML">Origin-relative URL</a>
<a href="./file">Directory-relative URL</a>
<!-- Link to element on the same page -->
<a href="#element-id"></a>
<!-- Link to top of page -->
<a href="#top"> Back to Top </a>
<!-- link to email -->
<a href="mailto:address@domain">address@domain</a>
<!-- link to telephone number -->
<a href="tel:+39(111)2223334">+39 111 2223334</a>
<!-- download link -->
<a href="./folder/filename" download>Download</a>
```
`target`:
* `_blank`: opens linked document in a new window or *tab*
* `_self`: opens linked document in the same frame as it was clicked (DEFAULT)
* `_parent`: opens the linked document in the parent frame
* `_top`: opens linked document in the full body of the window
* `frame-name`: opens the linked document in the specified frame
### Images
```html
<img src="image-location" alt="brief-description"/> <!-- image element -->
<!-- alt should always be populated for accessibility and SEO purposes -->
```
```html
<!-- supported by modern browsers -->
<figure>
<img src="img-location" alt="description">
<figcaption> caption of the figure </figcaption>
</figure>
```
### Unordered list (bullet list)
```html
<ul>
<li></li> <!-- list element -->
<li></li>
</ul>
```
### Ordered list (numbered list)
```html
<ol>
<li></li>
<li></li>
</ol>
```
### Description list (list of terms and descriptions)
```html
<dl>
<dt>term</dt> <!-- define term/name -->
<dd>definition</dd> <!-- describe term/name -->
<dt>term</dt>
<dd>definition</dd>
</dl>
```
### Tables
```html
<table>
<thead> <!-- table head row -->
<th></th> <!-- table head, one for each column-->
<th></th>
</thead>
<tbody> <!-- table content (body) -->
<tr> <!-- table row -->
<td></td> <!-- table cell -->
<td></td>
</tr>
</tbody>
</table>
```
### Character Codes
Code | Character
---------|-----------------
`&copy;` | Copyright
`&lt;` | less than (`<`)
`&gt;` | greater than (`>`)
`&amp;` | ampersand (`&`)
### Block Element
Used to group elements together into sections, eventually to style them differently.
```html
<div>
<!-- content here -->
</div>
```
### Inline Element
Used to apply a specific style inline.
```html
<p> non-styled content <span class="..."> styled content </span> non-styled content </p>
```
### HTML5 new tags
```html
<header></header>
<nav></nav>
<main></main>
<section></section>
<article></article>
<aside></aside>
<footer></footer>
```
## HTML Forms
```html
<form action="data-receiver" target="" method="http-method">
<!-- ALL form elements go here -->
</form>
```
Target:
* `_blank`: submitted result will open in a new browser tab
* `_self`: submitted result will open in the same page (*default*)
Method:
* `get`: data sent via get method is visible in the browser's address bar
* `post`: data sent via post in not visible to the user
PROs & CONs of `GET` method:
* Data sent by the GET method is displayed in the URL
* It is possible to bookmark the page with specific query string values
* Not suitable for passing sensitive information such as the username and password
* The length of the URL is limited
PROs & CONs of `POST` method:
* More secure than GET; information is never visible in the URL query string or in the server logs
* Has a much larger limit on the amount of data that can be sent
* Can send text data as well as binary data (uploading a file)
* Not possible to bookmark the page with the query
### Basic Form Elements
```html
<form action="" method="">
<label for="target-identifier">label-here</label>
<input type="input-type" name="input-name" value="value-sent" id="target-identifier">
</form>
```
Input Attributes:
* `name`: assigns a name to the form control (used by JavaScript and queries)
* `value`: value to be sent to the server when the option is selected
* `id`: identifier for CSS and linking tags
* `checked`: initially selected or not (radiobutton, checkboxes, ...)
* `selected`: default selection of a dropdown
### Text Field
One-line areas that allow the user to input text.
The `<label>` tag is used to define the labels for `<input>` elements.
```html
<form>
<label for="identifier">Label:</label>
<input type="text" name="label-name" id="identifier" placeholder="placeholder-text">
</form>
<!-- SAME AS -->
<form>
<label>Label:
<input type="text" name="label-name" id="identifier" placeholder="placeholder-text">
</label>
</form>
```
Text inputs can display a placeholder text that will disappear as soon as some text is entered.
### Password Field
```html
<form>
<label for="identifier">Password:</label>
<input type="password" name="user-password" id="identifier">
</form>
```
### Radio Buttons
```html
<form action="..." method="post" target="_blank">
<label for="identifier">Radiobutton-Text</label>
<input type="radio" name="option-name" id="identifier" value="option-value">
<label for="identifier">Radiobutton-Text</label>
<input type="radio" name="option-name" id="identifier" value="option-value" checked="checked">
<button type="submit">Button-Action</button>
</form>
```
### Checkboxes
```html
<form>
<label for="identifier">Option-Name</label>
<input type="checkbox" name="" id="identifier">
<label for="identifier">Option-Name</label>
<input type="checkbox" name="" id="identifier">
<label for="identifier">Option-Name</label>
<input type="checkbox" name="" id="identifier" checked="checked">
</form>
```
### Dropdown Menus
```html
<form>
<label for="identifier">Label:</label>
<select name="" id="identifier" multiple>
<option value="value">Value</option>
<option value="value">Value</option>
<option value="value" selected>Value</option>
</select>
</form>
```
Use `<select>` rather than radio buttons when the number of options to choose from is large.
`selected` is used rather than `checked`.
Provides the ability to select multiple options.
Conceptually, `<select>` becomes more similar to checkboxes.
### FILE Select
Upload a local file as an attachment
```html
<form>
<label for="file-select">Upload:</label>
<input type="file" name="upload" value="file-select">
</form>
```
### Text Area
Multi line text input.
```html
<form>
<label for="identifier">Label:</label>
<textarea name="label" rows="row-number" cols="column-number" id="identifier">pre-existing editable test</textarea>
<!-- rows and columns should be defined in a CSS -->
</form>
```
### Submit & Reset
```html
<form action="" method="POST">
<input type="submit" value="">
<input type="reset" value="">
<!-- OR -->
<button type="submit" value="">
<button type="reset" value="">
</form>
```
`submit`: sends the form data to the location specified in the action attribute.
`reset`: resets all forms controls to the default values.
### Button
```html
<button type="button/reset/submit" value=""/>
<!-- can contain HTML -->
<button type="button/reset/submit" value=""></button>
```
### Fieldset
Group controls into categories. Particularly important for screen readers.
```html
<fieldset>
<legend>Color</legend>
<input type="radio" name="colour" value="red" id="colour_red">
<label for="colour_red">Red</label>
<input type="radio" name="colour" value="green" id="colour_green">
<label for="colour_green">Green</label>
<input type="radio" name="colour" value="blue" id="colour_blue">
<label for="colour_blue">Red</label>
</fieldset>
```
## HTML5 Input Types
Newer input types are useful for:
* validation
* restricting user input
* Using custom dialogs
Downsides:
* most are not supported by older browsers, especially Internet Explorer.
* each browser has a different implementation so the user experience is not consistent.
### Email Field
Used to receive a valid e-mail address from the user. Most browsers can validate this without needing JavaScript.
Older browsers don't support this input type.
```html
<form>
<label for="user-email">Email:</label>
<input type="email" name="user-email" id="form-email">
<button type="submit">Send</button>
</form>
```
### More Input Types
```html
<input type="email" id="email" name="email">
<input type="url" id="url" name="url">
<input type="number" name="" id="identifier" min="min-value" max="max-value" step="">
<input type="search" id="identifier" name="">
```
### [Using Built-In Form Validation](https://developer.mozilla.org/en-US/docs/Learn/Forms/Form_validation)
One of the most significant features of HTML5 form controls is the ability to validate most user data without relying on JavaScript.
This is done by using validation attributes on form elements.
* `required`: Specifies whether a form field needs to be filled in before the form can be submitted.
* `minlength`, `maxlength`: Specifies the minimum and maximum length of textual data (strings)
* `min`, `max`: Specifies the minimum and maximum values of numerical input types
* `type`: Specifies whether the data needs to be a number, an email address, or some other specific preset type.
* `pattern`: Specifies a regular expression that defines a pattern the entered data needs to follow.
If the data entered in a form field follows all of the rules specified by the above attributes, it is considered valid. If not, it is considered invalid.
When an element is valid, the following things are true:
* The element matches the `:valid` CSS *pseudo-class*, which lets you apply a specific style to valid elements.
* If the user tries to send the data, the browser will submit the form, provided there is nothing else stopping it from doing so (e.g. JavaScript).
When an element is invalid, the following things are true:
* The element matches the `:invalid` CSS *pseudo-class*, and sometimes other UI *pseudo-classes* (e.g. `:out-of-range`) depending on the error, which lets you apply a specific style to invalid elements.
* If the user tries to send the data, the browser will block the form and display an error message.

# Programming Notes
Personal notes on various programming languages to be used as quick reference.
Sum-up of personal knowledge.

# ContentView
A page of the app.
## Views, Functions & Variables
`@State` allows the view to respond to every change of the annotated variable. These variables are initialized by the view they belong to and are not "received" from external objects.
SwiftUI internally memorizes the value of the `@State` property and updates the view every time it changes.
`@Binding` is used for properties that are passed to the view from another. The receiving view can read the binding value, react to changes and modify its value.
`@Binding` variables are passed with the prefix `$`.
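A minimal sketch of the pattern (view and property names are illustrative):
```swift
import SwiftUI

struct ParentView: View {
    @State private var isOn = false // owned and initialized by the parent

    var body: some View {
        ChildView(isOn: $isOn) // pass a binding with the $ prefix
    }
}

struct ChildView: View {
    @Binding var isOn: Bool // received from the parent, readable and writable

    var body: some View {
        Toggle("Enabled", isOn: $isOn) // changes propagate back to ParentView
    }
}
```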
### Simple View
- Simplest view.
- Permits the visualization of simple UIs.
- Constituted by a body of type `View`
```swift
struct SimpleViewName: View {
let CONSTANT: Type
@State var variable: Type
func func(){
@Binding var variable: Type
// code here
}
// property needed
var body: some View {
// view contents
}
}
```
### HStack, VStack, ZStack
Used to organize elements without dealing with constraints or forcing the visualization on devices with different screen sizes.
```swift
struct ContentView: View {
var body: some View {
// cannot have multiple stacks at the same level
VStack {
HStack {
View()
}
}
}
}
```
### Table View
Most common view to present array contents, it automatically handles the scrolling of the page with the *bounce* effect.
It can be integrated in a `NavigationView` to handle a `DetailView` of a selected item in the list.
The basic object that creates the table view is the `List()`. Its job is to create a "cell" for every element in the array.
The array can be filtered with a *search bar*.
The array elements can be grouped with the `Section()` object that groups cells under a common name in the table.
```swift
// view name can be any
struct TableView: View {
var array = [...]
var body: some View {
List(array) { item in
TableCell(item: item)
}
}
}
// view name can be any
struct TableCell: View {
let item: Any
var body: some View {
// cell content
}
}
```
Every cell can have a link to visualize the details of the selected object. This is done by using `NavigationView` and `NavigationLink`.
The `NavigationView` contains the list and the property `.navigationBarTitle()` sets the view title.
It's possible to add other controls in the top part of the view (buttons, ...) using the property `.navigationBarItems()`.
```swift
struct ContentView: View {
let array = [...]
var body: some View {
NavigationView {
List(array) { item in
NavigationLink(destination: View()) {
// link UI
}
}.navigationBarTitle(Text("Title"))
}
}
}
```
### Tab Bar View
This view handles a bar on the bottom of the screen with links to simple or more complex views.
This is useful for designing pages that can be easily navigated by the user.
```swift
struct TabBarView: View {
var body: some View {
// the TabView container creates the tab bar
TabView {
// first tab
Text("Tab Title")
.tabItem {
// tab selector design example
Image(systemName: "house.fill")
Text("Home")
}
// n-th tab
Text("Tab Title")
.tabItem {
// tab selector design
}
}
}
}
```
The tab bar is built by wrapping the tab pages in a `TabView {}` container and applying the `.tabItem{}` modifier to the object or page that each tab will link to.
It's possible to specify up to 5 `.tabItem{}` elements that will be displayed singularly in the `TabBar`.
From the 6th element onwards, the first 4 elements will appear normally, while the 5th becomes a "More" element that opens a `TableView` with the list of the remaining `.tabItem{}` elements. This page lets the user define which elements will be visible.
It's possible to integrate the NavigationView in the TabBar in two ways (a sketch follows this list):
- inserting it as a container for the whole `TabBar` (at the moment of the transition to the detail page the `TabBar` will be hidden)
- inserting it as a container of a single `.tabItem{}` (the transition will happen inside the `TabBar` that will then remain visible)
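A sketch of the second approach, with the `NavigationView` inside a single tab so the tab bar stays visible during navigation (titles, list contents and system images are illustrative):
```swift
import SwiftUI

struct MainView: View {
    let items = ["First", "Second", "Third"]

    var body: some View {
        TabView {
            NavigationView { // navigation confined to this tab
                List(items, id: \.self) { item in
                    NavigationLink(destination: Text(item)) {
                        Text(item)
                    }
                }
                .navigationBarTitle(Text("Home"))
            }
            .tabItem {
                Image(systemName: "house.fill")
                Text("Home")
            }

            Text("Settings")
                .tabItem {
                    Image(systemName: "gear")
                    Text("Settings")
                }
        }
    }
}
```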
## View Elements
### Text
```swift
Text("")
```
### Shapes
```swift
Rectangle()
Circle()
```
### Spacing
```swift
Divider()
Spacer()
```
### Image
[System Images](https://developer.apple.com/design/human-interface-guidelines/sf-symbols/overview/)
```swift
Image(systemName: "sfsymbol")
```
### Button
```swift
Button(action: { /* statements */ }) {
Text("Label")
//or
Image(systemName = "sfsymbol")
}
// button with alert popup
Button(action: { /* statements */ }) {
Text("abel")
}.action(isPresented: $boolBinding) {
Alert(title: Text("Alert Popup Title"), message: Text("Alert Message"))
}
```
### Style Options
Common style options:
- `padding()`: adds an internal padding to the object.
- `foregroundColor()`: defines the color of the text or contained object.
- `background()`: defines the background color.
- `font()`: sets font type, size, weight, ...
- `cornerRadius()`: modifies the angles of the containing box.
- `frame()`: sets a fixed size for the object.
- `resizable()`, `scaleToFill()`, `scaleToFit()`: enables the resizing of an object inside another.
- `clipShape()`: overlays a shape over the object
- `overlay()`: overlays an element over the object, more complex than clipShape
- `shadow()`: Sets the object's shadow
- `lineLimit()`: limits the number of visible lines in `TextField`
```swift
View().styleOption()
// or
View {
}.styleOption()
```
## Forms & Input
```swift
Form {
Section (header: Text("Section Title")) {
// form components
}
}
```
### Picker
```swift
// list item picker
Picker(selection: $index, label: Text("Selection Title")) {
ForEach(0..<itemArray.count){
Text(itemArray[$0]).tag($0) // tag adds the index of the selected item to the info of the Text()
}
}
```
### Stepper
```swift
Stepper("\(number)", value: $number, in: start...end)
```
### TextField
```swift
TextField("Placeholder Text", text: $result)
.keyboardType(.<kb_type>)
```
### Slider
```swift
Slider(value: $numVariable)
```
## API Interaction
```swift
@State private var apiItems = [TaskEntry]()
// TaskEntry is a struct that must be Identifiable & Codable
func loadData() {
guard let url = URL(string: "https://jsonplaceholder.typicode.com/todos")
else { print("Invalid URL"); return }
let request = URLRequest(url: url)
URLSession.shared.dataTask(with: request) { data, response, error in
if let data = data {
if let response = try? JSONDecoder().decode([TaskEntry].self, from: data) {
DispatchQueue.main.async {
self.apiItems = response
}
return
}
}
}.resume()
}
```

# Database Access Object
## DB
Connection to the DB.
```java
package dao;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
public class DB {
private Connection conn; // db connection obj
private final String URL = "jdbc:<dbms>:<db_url>/<database>";
private final String USER = "";
private final String PWD = "";
public DB() {
this.conn = null;
}
public void connect() {
try {
this.conn = DriverManager.getConnection(URL, USER, PWD);
} catch (SQLException e) {
e.printStackTrace();
}
}
public void disconnect() {
if(conn != null) {
try {
this.conn.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
}
public Connection getConn() {
return conn;
}
}
```
## `I<Type>DAO`
Interface for CRUD methods on a database.
```java
package dao;
import java.sql.SQLException;
import java.util.List;
public interface I<Type>DAO {
String RAW_QUERY = "SELECT * ...";
public Type Query() throws SQLException;
}
```
## `<Type>DAO`
Class implementing `I<Type>DAO` and handling the communication with the db.
```java
package dao;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
public class <Type>DAO implements I<Type>DAO {
private <Type> results;
private Statement statement; // SQL instruction container
private ResultSet rs; // Query results container
private DB db;
public <Type>DAO() {
db = new DB();
}
@Override
public Type Query() throws SQLException {
// 1. connection
db.connect();
// 2. instruction
statement = db.getConn().createStatement(); // statement creation
// 3. query
rs = statement.executeQuery(RAW_QUERY); // set the query
// 4. results
while(rs.next()) {
// create and valorize Obj
// add to results
}
// 5. disconnection
db.disconnect();
// 6. return results
return results;
}
}
```
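A usage sketch from calling code, following the placeholder names above:
```java
<Type>DAO dao = new <Type>DAO();
try {
    Type results = dao.Query(); // connects, runs the query, disconnects
    // use results ...
} catch (SQLException e) {
    e.printStackTrace();
}
```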

# Java Collection Framework - JCF
All classes that permit the handling of groups of objects constitute the Java Collection Framework.
A Collection is a *container* in which several objects are grouped in a *single entity*.
The **Java Collection Framework** is constituted by:
- **Interfaces** that define the operations of a generic collection. They can be split into two categories:
- **Collection**: used to optimize operations of insertion, modification and deletion of elements in a group of objects.
- **Map**: optimized for look-up operations.
- **Classes** that implement the interfaces using different data structures.
- **Algorithms** consisting in methods to operate over a collection.
![Java Collection Hierarchy](../img/java_java-collection-framework.png "Java Collection Hierarchy")
## java.util.Collections
### Collection Functions
```java
boolean add (Object o) e.g., <x>.add (<y>) // append to collection, false if it fails
void add (int index, Object o) // insertion at given index (List)
boolean addAll (Collection c) // appends a collection to another
void clear() // remove all items from the container
boolean contains (Object o) // true if object is in collection
boolean containsAll (Collection c) // true if all items of a collection are in another
boolean isEmpty () e.g., if (<x>.isEmpty()) ... // true if collection is empty
boolean remove (Object o) // remove object from collection
Object remove (int index) // remove object at given index (List)
boolean removeAll (Collection c) // remove all items of a collection from another
int size () // number of items in collection
Object [] toArray() // transform collection into an array
Iterator iterator() // returns an iterator to iterate over the collection
```
### Collections Methods
```java
Collection<E>.forEach(Consumer<? super T> action);
```
### Iterator
Abstracts the problem of iterating over all the elements of a collection (see the usage sketch after this list):
- `public Iterator (Collection c)` creates the Iterator
- `public boolean hasNext()` checks if there is a successive element
- `public Object next()` extracts the successive element
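A short usage sketch (the collection and its contents are illustrative):
```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

List<String> names = new ArrayList<>();
names.add("Alice");
names.add("Bob");

Iterator<String> iterator = names.iterator();
while (iterator.hasNext()) {
    String name = iterator.next(); // extract the next element
    System.out.println(name);
}
```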
### ArrayList
**Note**: ArrayLists can't contain *primitive* values. *Use wrapper classes* instead.
```java
import java.util.ArrayList;
ArrayList<Type> ArrayListName = new ArrayList<Type>(starting_dim); //resizable array
ArrayList<Type> ArrayListName = new ArrayList<Type>(); //resizable array
ArrayList<Type> ArrayListName = new ArrayList<>(); //resizable array (JAVA 1.8+)
ArrayListName.add(item); //append item to collection
ArrayListName.add(index, item); // add item at position index, shifting the item at index and all successive ones towards the end of the ArrayList
ArrayListName.set(index, item); // substitute EXISTING item
ArrayListName.get(index); //access to collection item
ArrayListName.remove(item) //remove first occurrence of item from collection
ArrayListName.remove(index) //remove item at position index
ArrayListName.clear() //empties the ArrayList
ArrayListName.contains(object); // check if object is in the ArrayList
ArrayListName.IndexOf(object); // returns the index of the object
ArrayListName.isEmpty(); // check whether the list is empty
ArrayListName.size(); //dimension of the ArrayList
ArrayListName.trimToSize(); // reduce the ArrayList size to the minimum needed
// ArrayList size doubles when a resize is needed.
//run through to the collection with functional programming (JAVA 1.8+)
ArrayListName.forEach(item -> function(item));
```
### Collection Sorting
To sort a collection, its items must implement `Comparable<T>`:
```java
class ClassName implements Comparable<ClassName> {
@Override
public int compareTo(ClassName other) {
//compare logic
return <int>;
}
}
List<ClassName> list;
//valorize List
Collections.sort(list); //"natural" sorting uses compareTo()
```
Otherwise a `Comparator()` must be implemented:
```java
class Classname {
//code here
}
// anonymous class implementing the Comparator interface directly
Comparator<ClassName> comparator = new Comparator<ClassName>() {
@Override
public int compare(ClassName o1, ClassName o2) {
//compare logic
return <int>;
}
};
List<ClassName> list;
//valorize List
Collections.sort(list, comparator); // sorting uses the provided Comparator
```
`Comparator<T>` and `Comparable<T>` are functional interfaces

# pom.xml
File specifying project dependencies.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>major.minor.patch</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>major.minor.patch.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>base_package</groupId>
<artifactId>Project_Name</artifactId>
<version>major.minor.patch</version>
<name>Project_Name</name>
<description>...</description>
<properties>
<java.version>11</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
```

# Spring Project
## Libs
- MySql Driver
- Spring Data JPA (data persistance)
- Spring Boot Dev Tools
- Jersey (serialization)
- Spring Web
## application.properties
```ini
spring.datasource.url=DB_url
spring.datasource.username=user
spring.datasource.password=password
spring.jpa.show-sql=true
server.port=server_port
```
## Package `entities`
Model of a table of the DB
```java
package <base_package>.entities;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
@Entity // set as DB Entity (DB record Java implementation)
public class Entity {
@Id // set as Primary Key
@GeneratedValue(strategy = GenerationType.IDENTITY) // id is autoincremented by the DB
private int id;
// no constructor (Spring requirement)
// table columns attributes
// getters & setters
// toString()
}
```
## Package `dal`
Spring Interface for DB connection and CRUD operations.
```java
package <base_package>.dal // or .repository
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
// interface for spring Hibernate JPA
// JpaRepository<Entity, PK_Type>
public interface IEntityDAO extends JpaRepository<Entity, Integer> {
// custom query
@Query("FROM <Entity> WHERE param = :param")
Type query(@Param("param") Type param);
}
```
## Package `services`
Interfaces and method to access the Data Access Layer (DAL).
In `IEntityService.java`:
```java
package <base_package>.services;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import <base_package>.Entity;
import <base_package>.IEntityRepository;
// CRUD method implemented on Entity
public interface IEntityService {
    String FIND_ALL = "SELECT * FROM <table>";
    String FIND_ONE = "SELECT * FROM <table> WHERE id = ?";
List<Entity> findAll();
Entity findOne(int id);
void addEntity(Entity e);
void updateEntity(int id, Entity e);
void deleteEntity(Entity e);
void deleteEntity(int id);
}
```
In `EntityService.java`:
```java
package <base_package>.services;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import <base_package>.entities.Entity;
import <base_package>.dal.IEntityDAO;
// implementation of IEntityService
@Service
public class EntityService implements IEntityService {
@Autowired // connection to db (obj created by spring as needed: Inversion Of Control)
private IEntityDAO repo;
@Override
public List<Entity> findAll() {
return repo.findAll();
}
@Override
public Entity findOne(int id) {
return repo.findById(id).get();
}
@Override
public void addEntity(Entity e) {
repo.save(e);
}
@Override
public void updateEntity(int id, Entity e) {
}
@Override
public void deleteEntity(Entity e) {
}
@Override
public void deleteEntity(int id) {
}
// custom query
Type query(Type param) {
return repo.query(param);
}
}
```
## Package `integration`
REST API routes & endpoints to supply data as JSON.
```java
package <base_package>.integration;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import <base_package>.entities.Entity;
import <base_package>.services.IEntityService;
@RestController // returns data as JSON
@RequestMapping("/api") // controller route
public class EntityRestCtrl {
@Autowired // connection to service (obj created by spring as needed: Inversion Of Control)
private IEntityService service;
@GetMapping("entities") // site route
public List<Entity> findAll(){
return service.findAll();
}
@GetMapping("/entities/{id}")
public Entity findOne(@PathVariable("id") int id) { // use route variable
return service.findOne(id);
}
@PostMapping("/entities")
    public void addEntity(@RequestBody Entity e) { // added entity is in the request body
        service.addEntity(e);
}
// PUT / PATCH -> updateEntity(id, e)
// DELETE -> deleteEntity(id)
}
```

# Servlet
A Java servlet is a Java software component that extends the capabilities of a server.
Although servlets can respond to many types of requests, they most commonly implement web containers for hosting web applications on web servers and thus qualify as a server-side servlet web API.
## Basic Structure
```java
package <package>;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
@WebServlet("/route")
public class <className> extends HttpServlet {
private static final long serialVersionUID = 1L;
/** handle get request */
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
// GET REQUEST: load and display page (forward to JSP)
// OPTIONAL: add data to request (setAttribute())
}
/** handle post request */
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
// POST REQUEST: add stuff to DB, page, ...
doGet(request, response); // return same page with new content added (default case)
}
}
```
## Servlet Instructions
```java
request.getParameter("name"); // read request parameter by name
response.setContentType("text/html"); // to return HTML in the response
response.getWriter().append(""); //append content to the http response
request.setAttribute(attributeName, value); // set http attribute of the request
request.getRequestDispatcher("page.jsp").forward(request, response); // redirect the request to another page
```

# AJAX
**AJAX**: Asynchronous JavaScript and XML
AJAX Interaction:
1. An event occurs in a web page (the page is loaded, a button is clicked)
2. An `XMLHttpRequest` object is created by JavaScript
3. The `XMLHttpRequest` object sends a request to a web server
4. The server processes the request
5. The server sends a response back to the web page
6. The response is read by JavaScript
7. Proper action (like a page update) is performed by JavaScript
## XMLHttpRequest
```js
var request = new XMLHttpRequest();
request.addEventListener(event, function() {...});
request.open("HttpMethod", "path/to/api", true); // third parameter is asynchronicity (true = asynchronous)
request.setRequestHeader(key, value) // HTTP Request Headers
request.send()
```
To check the status use `XMLHttpRequest.status` and `XMLHttpRequest.statusText`.
### XMLHttpRequest Events
**loadstart**: fires when the process of loading data has begun. This event always fires first
**progress**: fires multiple times as data is being loaded, giving access to intermediate data
**error**: fires when loading has failed
**abort**: fires when data loading has been canceled by calling abort()
**load**: fires only when all data has been successfully read
**loadend**: fires when the object has finished transferring data; it always fires, after error, abort, or load
**timeout**: fires when progression is terminated due to preset time expiring
**readystatechange**: fires when the readyState attribute of a document has changed
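A minimal sketch of attaching some of these events with `addEventListener` (the URL is a placeholder):
```js
var request = new XMLHttpRequest();

request.addEventListener("progress", function(event) {
    // event.loaded / event.total report intermediate progress (total is 0 if unknown)
    console.log("loaded " + event.loaded + " of " + event.total + " bytes");
});
request.addEventListener("load", function() {
    console.log("transfer complete: " + request.responseText);
});
request.addEventListener("error", function() {
    console.log("transfer failed");
});
request.addEventListener("abort", function() {
    console.log("transfer canceled");
});

request.open("GET", "path/to/api", true);
request.send();
```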
**Alternative `XMLHttpRequest` using `onload`**:
```js
var request = new XMLHttpRequest();
request.open('GET', 'myservice/username?id=some-unique-id');
request.onload = function(){
if(request.status ===200){
console.log("User's name is "+ request.responseText);
} else {
console.log('Request failed. Returned status of '+ request.status);
}
};
request.send();
```
**Alternative `XMLHttpRequest` using `readyState`**:
```js
var request = new XMLHttpRequest(), method ='GET', url ='https://developer.mozilla.org/';
request.open(method, url, true);
request.onreadystatechange = function(){
if(request.readyState === XMLHttpRequest.DONE && request.status === 200){
console.log(request.responseText);
}
};
request.send();
```
`XMLHttpRequest.readyState` values:
`0` `UNSENT`: Client has been created. `open()` not called yet.
`1` `OPENED`: `open()` has been called.
`2` `HEADERS_RECEIVED`: `send()` has been called, and headers and status are available.
`3` `LOADING`: Downloading; `responseText` holds partial data.
`4` `DONE`: The operation is complete.
### `XMLHttpRequest` Browser compatibility
Old versions of IE don't implement `XMLHttpRequest`. You must use `ActiveXObject` if `XMLHttpRequest` is not available.
```js
var request =window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject('Microsoft.XMLHTTP');
// OR
var request;
if(window.XMLHttpRequest){
// code for modern browsers
request = new XMLHttpRequest();
} else {
// code for old IE browsers
request = new ActiveXObject('Microsoft.XMLHTTP');
}
```
## Status & Error Handling
Always inform the user when something is loading: check the status and give feedback (a loader or a message).
Errors and responses need to be handled. There is no guarantee that HTTP requests will always succeed.
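A possible sketch of this pattern, assuming a hypothetical `#loader` element used as the loading indicator and a placeholder URL:
```js
var loader = document.getElementById("loader"); // hypothetical loading indicator element
var request = new XMLHttpRequest();

loader.style.display = "block"; // show feedback while the request is in flight

request.onload = function() {
    if (request.status === 200) {
        // success: use request.responseText
    } else {
        // the server answered, but with an error status (404, 500, ...)
        console.log("Request failed: " + request.status + " " + request.statusText);
    }
};
request.onerror = function() {
    console.log("Network error"); // no response at all
};
request.onloadend = function() {
    loader.style.display = "none"; // always hide the indicator when the request finishes
};

request.open("GET", "path/to/api", true);
request.send();
```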
### Cross Domain Policy
Cross domain requests have restrictions.
Examples of outcomes for requests originating from `http://store.company.com/dir/page.html`:
| URL | Outcome | Reason |
|-------------------------------------------------|---------|--------------------|
| `http://store.company.com/dir2/other.html`      | success | Same origin        |
| `http://store.company.com/dir/inner/other.html` | success | Same origin        |
| `https://store.company.com/secure.html` | failure | Different protocol |
| `http://store.company.com:81/dir/other.html` | failure | Different port |
| `http://news.company.com/dir/other.html` | failure | Different host |

# Document Object Model (DOM)
The **Document Object Model** is a *map* of the HTML document. Elements in an HTML document can be accessed, changed, deleted, or added using the DOM.
The document object is *globally available* in the browser. It allows to access and manipulate the DOM of the current web page.
## DOM Access
### Selecting Nodes from the DOM
`getElementById()` and `querySelector()` return a single element.
`getElementsByClassName()`, `getElementsByTagName()`, and `querySelectorAll()` return a collection of elements.
```js
// By Id
var node = document.getElementById('id');
// By Tag Name
var nodes = document.getElementsByTagName('tag');
// By Class Name
var nodes = document.getElementsByClassName('class');
// By CSS Query
var node = document.querySelector('css-selector');
var nodes = document.querySelectorAll('css-selector');
```
## Manipulating the DOM
### Manipulating a node's attributes
It's possible to access and change the attributes of a DOM node using the *dot notation*.
```js
// Changing the src of an image:
var image = document.getElementById('id');
var oldImageSource = image.src;
image.src = 'image-url';
//Changing the className of a DOM node:
var node = document.getElementById('id');
node.className = 'new-class';
```
### Manipulating a node's style
It's possible to access and change the styles of a DOM node via the **style** property.
CSS property names with a `-` must be **camelCased** and number properties must have a unit.
```css
body {
color: red;
background-color: pink;
padding-top: 10px;
}
```
```js
var pageNode = document.body;
pageNode.style.color = 'red';
pageNode.style.backgroundColor = 'pink';
pageNode.style.paddingTop = '10px';
```
### Manipulating a node's contents
Each DOM node has an `innerHTML` attribute. It contains the HTML of all its children.
```js
var pageNode = document.body;
console.log(pageNode.innerHTML);
// Set innerHTML to replace the contents of the node:
pageNode.innerHTML = "<h1>Oh, no! Everything is gone!</h1>";
// Or add to innerHTML instead:
pageNode.innerHTML += "P.S. Please do write back.";
```
To change the actual text of a node, `textContent` may be a better choice:
`innerHTML`:
- Works in older browsers
- **More powerful**: can change code
- **Less secure**: allows cross-site scripting (XSS)
`textContent`:
- Doesn't work in IE8 and below
- **Faster**: the browser doesn't have to parse HTML
- **More secure**: won't execute code
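A quick comparison of the two, assuming a node with a generic `id`:
```js
var node = document.getElementById('id');

node.innerHTML = "<strong>bold</strong>";   // parsed as HTML: renders as bold text
node.textContent = "<strong>bold</strong>"; // inserted literally: the tags are displayed as text

var text = node.textContent; // plain text of the node and its descendants, without tags
```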
### Reading Inputs From A Form
In `page.html`:
```html
<input type="" id="identifier" value="">
```
In `script.js`:
```js
var formNode = document.getElementById("identifier");
var value = formNode.value;
```
## Creating & Removing DOM Nodes
The document object also allows to create new nodes from scratch.
```js
// create node
document.createElement('tagName');
document.createTextNode('text');
domNode.appendChild(childToAppend); // append childToAppend as the last child of domNode
// insert a node before an existing child of parentNode
parentNode.insertBefore(childToInsert, existingChild);
domNode.parentNode.insertBefore(childToInsert, domNode); // insert childToInsert right before domNode
// remove a node
domNode.removeChild(childToRemove);
node.parentNode.removeChild(node);
```
Example:
```js
var body = document.body;
var newImg = document.createElement('img');
newImg.src = 'http://placekitten.com/400/300';
newImg.style.border = '1px solid black';
body.appendChild(newImg);
var newParagraph = document.createElement('p');
var newText = document.createTextNode('Squee!');
newParagraph.appendChild(newText);
body.appendChild(newParagraph);
```
### Creating DOM Nodes with Constructor Functions
```js
function Node(params) {
this.node = document.createElement("tag");
this.node.attribute = value;
// operations on the node
return this.node;
}
var node = Node(params);
domElement.appendChild(node);
```

# Events & Animation
## Events
Event Types:
- **Mouse Events**: `mousedown`, `mouseup`, `click`, `dblclick`, `mousemove`, `mouseover`, `mousewheel`, `mouseout`, `contextmenu`, ...
- **Touch Events**: `touchstart`, `touchmove`, `touchend`, `touchcancel`, ...
- **Keyboard Events**: `keydown`, `keypress`, `keyup`, ...
- **Form Events**: `focus`, `blur`, `change`, `submit`, ...
- **Window Events**: `scroll`, `resize`, `hashchange`, `load`, `unload`, ...
### Managing Event Listeners
```js
var domNode = document.getElementById("id");
var onEvent = function(event) { // parameter contains info on the triggered event
event.preventDefault(); // block execution of default action
// logic here
}
domNode.addEventListener(eventType, callback);
domNode.removeEventListener(eventType, callback);
```
### Bubbling & Capturing
Events in Javascript propagate through the DOM tree.
[Bubbling and Capturing](https://javascript.info/bubbling-and-capturing)
[What Is Event Bubbling in JavaScript? Event Propagation Explained](https://www.sitepoint.com/event-bubbling-javascript/)
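A small illustrative sketch, assuming two hypothetical nested elements `#parent` and `#child`:
```js
var parent = document.getElementById('parent');
var child = document.getElementById('child');

// third argument true = listen during the capturing phase (default is the bubbling phase)
parent.addEventListener('click', function() {
    console.log('parent (capturing)');
}, true);

child.addEventListener('click', function(event) {
    console.log('child');
    event.stopPropagation(); // the event will not bubble further up
});

parent.addEventListener('click', function() {
    console.log('parent (bubbling)'); // never reached: propagation was stopped at the child
});
```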
### Dispatching Custom Events
Event Options:
- `bubbles` (bool): whether the event propagates through bubbling
- `cancelable` (bool): if `true` the "default action" may be prevented
```js
let event = new Event(type [,options]); // create the event, type can be custom
let event = new CustomEvent(type, { detail: /* custom data */ }); // create event w/ custom data
domNode.dispatchEvent(event); // launch the event
```
![Event Inheritance](../img/javascript_event-inheritance.png)
## Animation
The window object is the assumed global object on a page.
The standard way to animate in JS is to use `window` methods.
It's possible to animate CSS styles to change size, transparency, position, color, etc.
```js
//Calls a function once after a delay
window.setTimeout(callbackFunction, delayMilliseconds);
//Calls a function repeatedly, with specified interval between each call
window.setInterval(callbackFunction, delayMilliseconds);
//To stop an animation store the timer into a variable and clear it
window.clearTimeout(timer);
window.clearInterval(timer);
// execute a callback at each frame
window.requestAnimationFrame(callbackFunction);
```
### Element Position & dimensions
[StackOverflow](https://stackoverflow.com/a/294273/8319610)
[Wrong dimensions at runtime](https://stackoverflow.com/a/46772849/8319610)
```js
> console.log(document.getElementById('id').getBoundingClientRect());
DOMRect {
bottom: 177,
height: 54.7,
left: 278.5,
right: 909.5,
top: 122.3,
width: 631,
x: 278.5,
y: 122.3,
}
```

# JavaScript
## Basics
### Notable javascript engines
- **Chromium**: `V8` from Google
- **Firefox**: `SpiderMonkey` from Mozilla
- **Safari**: `JavaScriptCore` from Apple
- **Internet Explorer**: `Chakra` from Microsoft
### Comments
```javascript
//single line comment
/*multiline comment*/
```
### File Header
```javascript
/**
* @file filename.js
* @author author's name
* purpose of file
*
* detailed explanation of what the file does on multiple lines
*/
```
### Modern Mode
If located at the top of the script the whole script works the “modern” way (enables post-ES5 functionalities).
```js
"use strict"
// script contents
```
### Pop-Up message
Interrupts script execution until closure, **to be avoided**
```javascript
alert("message");
```
### Print message to console
`console.log(value);`
## Variables
### Declaration & Initialization
[var vs let vs const](https://www.freecodecamp.org/news/var-let-and-const-whats-the-difference/)
Variable names can only contain letters, digits, underscores and `$`. Variable names are camelCase.
`let`: Block-scoped; access to variable restricted to the nearest enclosing block.
`var`: Function-scoped
`let variable1 = value1, variable2 = value2;`
`var variable1 = value1, variable2 = value2;`
### Scope
Variables declared with `let` are local to the code block in which they are declared.
Variables declared with `var` are local only if declared inside a function.
```js
function func(){
variable = value; // implicitly declared as a global variable
var variable = value; // local variable
}
var a = 10; // a is 10
let b = 10; // b is 10
{
var x = 2, a = 2; // a is 2
let y = 2, b = 2; // b is 2
}
// a is 2, b is 10
// x can NOT be used here
// y CAN be used here
```
### Constants
Constants with hard-coded values are named in UPPERCASE snake_case; other constants are camelCase.
`const CONSTANT = value;`
## Data Types
`Number`, `String`, `Boolean`, etc are *built-in global objects*. They are **not** types. **Do not use them for type checking**.
### Numeric data types
Only numeric type is `number`.
```javascript
let number = 10; //integer numbers
number = 15.7; //floating point numbers
number = Infinity; //mathematical infinity
number = - Infinity;
number = 1234567890123456789012345678901234567890n; //BigInt, value > 2^53, "n" at the end
number = "text" / 2; //NaN --> not a number.
```
[Rounding Decimals in JavaScript](https://www.jacklmoore.com/notes/rounding-in-javascript/)
[Decimal.js](https://github.com/MikeMcl/decimal.js)
Mathematical expressions will *never* cause an error. At worst the result will be `NaN`.
### String data type
```javascript
let string = "text";
let string$ = 'text';
let string_ = `text ${expression}`; //string interpolation (needs backticks)
string.length; // length of the string
let char = string.charAt(index); // extraction of a single character by position
string[index]; // char extraction by property access
let index = string.indexOf(substring); // start index of substring in string
```
Property access is unpredictable:
- does not work in IE7 or earlier
- makes strings look like arrays (confusing)
- if no character is found, `[ ]` returns undefined, `charAt()` returns an empty string
- Is read only: `string[index] = "value"` does not work and gives no errors
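A short illustration of these quirks:
```js
var word = "hello";

word.charAt(0);    // "h"
word[0];           // "h"

word.charAt(99);   // "" (empty string)
word[99];          // undefined

word[0] = "H";     // silently ignored outside strict mode (throws in strict mode)
console.log(word); // "hello"
```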
### [Slice vs Substring vs Substr](https://stackoverflow.com/questions/2243824/what-is-the-difference-between-string-slice-and-string-substring)
If the parameters to `slice()` are negative, they reference the string from the end; `substring()` and `substr()` don't.
```js
string.slice(begin [, end]);
string.substring(from [, to]);
string.substr(start [, length]);
```
### Boolean data type
```javascript
let boolean = true;
let boolean_ = false;
```
### Null data type
```javascript
let _ = null;
```
### Undefined
```javascript
let $; //value is "undefined"
$ = undefined;
```
### Typeof()
```javascript
typeof x; //returns the type of the variable x as a string
typeof(x); //returns the type of the variable x as a string
```
The result of typeof null is "object". That's wrong.
It is an officially recognized error in typeof, kept for compatibility. Of course, null is not an object.
It is a special value with a separate type of its own. So, again, this is an error in the language.
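For example:
```js
typeof null;            // "object" (historical quirk)
null instanceof Object; // false: null is not actually an object

// reliable null check: strict equality
let value = null;
if (value === null) {
    // handle the "no value" case
}
```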
### Type Casting
```javascript
String(value); //converts value to string
Number(value); //converts value to a number
Number(undefined); //--> NaN
Number(null); //--> 0
Number(true); //--> 1
Number(false); //--> 0
Number(String); //Whitespace from the start and end is removed. If the remaining string is empty, the result is 0. Otherwise, the number is "read" from the string. An error gives NaN.
Boolean(value); //--> true
Boolean(0); //--> false
Boolean(""); //--> false
Boolean(null); //--> false
Boolean(undefined); //--> false
Boolean(NaN); //--> false
//numeric type checking the moronic way
typeof var_ == "number"; // typeof returns a string with the name of the type
```
### Type Checking
```js
isNaN(value); // converts value to a number and then checks if it is NaN
Number("A") == NaN; // false: NaN is never equal to anything, not even itself (use Number.isNaN())
```
### Dangerous & Stupid Implicit Type Casting
```js
2 + 'text'; //"2text", implicit conversion and concatenation
1 + "1"; //"11", implicit conversion and concatenation
"1" + 1; //"11", implicit conversion and concatenation
+"1"; //1, implicit conversion
+"text"; // NaN
1 == "1"; //true
1 === "1"; //false
1 == true; //true
0 == false; //true
"" == false; //true
```
## Operators
| Operator | Operation |
| ------------ | --------------- |
| `(...)` | grouping |
| a`.`b | member access |
| `new` a(...) | object creation |
| a `in` b | membership |
### Mathematical Operators
| Operator | Operation |
| -------- | -------------- |
| a `+` b | addition |
| a `-` b | subtraction |
| a `*` b | multiplication |
| a `**` b | a^b |
| a `/` b | division |
| a `%` b | modulus |
### Unary Increment Operators
| Operator | Operation |
| ------------ | ----------------- |
| `--`variable | prefix decrement |
| `++`variable | prefix increment |
| variable`--` | postfix decrement |
| variable`++` | postfix increment |
### Logical Operators
| Operator | Operation |
| -------- | --------------- |
| a `&&` b | logical **AND** |
| a `\|\|` b | logical **OR**  |
| `!`a | logical **NOT** |
### Comparison Operators
| Operator | Operation |
| --------- | ------------------- |
| a `<` b | less than |
| a `<=` b | less or equal to |
| a `>` b | greater than |
| a `>=` b | greater or equal to |
| a `==` b | equality |
| a `!=` b | inequality |
| a `===` b | strict equality |
| a `!==` b | strict inequality |
### Bitwise Logical Operators
| Operator | Operation |
| --------- | ---------------------------- |
| a `&` b | bitwise AND |
| a `\|` b  | bitwise OR                   |
| a `^` b | bitwise XOR |
| `~`a | bitwise NOT |
| a `<<` b | bitwise left shift |
| a `>>` b | bitwise right shift |
| a `>>>` b | bitwise unsigned right shift |
### Compound Operators
| Operator | Operation |
| ---------- | ----------- |
| a `+=` b | a = a + b |
| a `-=` b | a = a - b |
| a `*=` b | a = a * b |
| a `**=` b | a = a ** b |
| a `/=` b | a = a / b |
| a `%=` b | a = a % b |
| a `<<=` b | a = a << b |
| a `>>=` b | a = a >> b |
| a `>>>=` b | a = a >>> b |
| a `&=` b | a = a & b |
| a `^=` b | a = a ^ b |
| a `\|=` b  | a = a \| b  |
## Decision Statements
### IF-ELSE
```javascript
if (condition) {
//code here
} else {
//code here
}
```
### IF-ELSE Multi-Branch
```javascript
if (condition) {
//code here
} else if (condition) {
//code here
} else {
//code here
}
```
### Ternary Operator
`condition ? <expr1> : <expr2>;`
### Switch Statement
```javascript
switch (expression) {
case expression:
//code here
break;
default:
//code here
break;
}
```
## Loops
### While Loop
```javascript
while (condition) {
//code here
}
```
### Do-While Loop
```javascript
do {
//code here
} while (condition);
```
### For Loop
```javascript
// basic for
for (begin; condition; step) { }
for (var variable in iterable) { } // for/in statement loops through the properties of an object
for (let variable in iterable) { } // instantiate a new variable at each iteration
// for/of statement loops through the values of an iterable objects
// for/of lets you loop over data structures that are iterable such as Arrays, Strings, Maps, NodeLists, and more.
for (var variable of iterable) { }
for (let variable of iterable) { } // instantiate a new variable at each iteration
// foreach (similar to for..of)
iterable.forEach(() => { /* statements */ });
```
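A concrete example of the difference between `for...in` (keys/indices) and `for...of` (values):
```js
let letters = ["a", "b", "c"];

for (let index in letters) {
    console.log(index); // "0", "1", "2" -> the keys/indices, as strings
}

for (let letter of letters) {
    console.log(letter); // "a", "b", "c" -> the values
}
```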
### Break & Continue statements
`break;` exits the loop.
`continue;` skip to next loop cycle.
```javascript
label: for(begin; condition; step) {
//code here
}
break label; //breaks labelled loop and nested loops inside it
```
## Arrays
```js
let array = []; // empty array
let array = ["text", 3.14, [1.41]]; // array declaration and initialization
array.length; // number of items in the array
array[index]; // access to item by index
array[index] = item; // change or add item by index
array.push(item); //add item to array
array.pop(); // remove and return last item
array.join("separator"); // construct a string from the items of the array, separated by SEPARATOR
array.find(item => condition); // returns the value of the first element in the provided array that satisfies the provided testing function
array.fill(value, start, end); // fills an array with the passed value
// https://stackoverflow.com/a/37601776
array.slice(start, end); // RETURN list of items between indexes start and end-1
array.splice(start, deleteCount, [items_to_add]); // remove and RETURN items from array, can append a list of items. IN PLACE operation
```
### `filter()` & `map()`, `reduce()`
```js
let array = [ items ];
// execute an operation on each item, producing a new array
array.map(function);
array.map(() => operation);
array.filter(item => condition); // keeps only the items for which the condition is true
// execute a reducer function on each element of the array, resulting in single output value
array.reduce((x, y) => ...);
```
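A concrete example of the three methods on a small array:
```js
let numbers = [1, 2, 3, 4];

let doubled = numbers.map(n => n * 2);            // [2, 4, 6, 8]
let evens = numbers.filter(n => n % 2 === 0);     // [2, 4]
let sum = numbers.reduce((acc, n) => acc + n, 0); // 10 (0 is the initial accumulator)
```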
## Spread Operator (...)
```js
// arrays
let array1 = [ 1, 2, 3, 4, 5, 6 ];
let array2 = [ 7, 8, 9, 10 ];
let copy = [ ...array1 ]; // shallow copy
let copyAndAdd = [ 0, ...array1, 7 ]; // insert all values in new array
let merge = [ ...array1, ...array2 ]; // merge the arrays contents in new array
// objects
let obj = { prop1: value1, prop2: value2 };
let clone = { ...obj, prop: value }; // shallow copy, and update copy prop
let cloneAndAdd = { prop0: value0, ...obj, prop3: value3 };
// strings
let alphabet = "abcdefghijklmnopqrstuvwxyz";
let letters = [ ...alphabet ]; // alphabet.split("")
//function arguments
let func = (arg1 = val1, arg2 = val2) => expression;
let args = [ value1, value2 ];
func(arg0, ...args);
```
## Dictionaries
```js
let dict = { FirstName: "Chris", "one": 1, 1: "some value" };
// add new or update property
dict["Age"] = 42;
// direct property by name
// because it's a dynamic language
dict.FirstName = "Chris";
```
### Iterating Key-Value pairs
```js
for(let key in dict) {
let value = dict[key];
// do something with "key" and "value" variables
}
Object.keys(dict).forEach(key => { });
```
## Functions
### JSDOC documentation standard
```javascript
/**
* @param {type} parameter - description
* @returns {type} parameter - description
* */
```
### Function Declaration
```javascript
// ...args will contain extra parameters (rest argument)
function functionName(parameter=default-value, ...args) {
//code here
return <expression>;
}
```
### Default Parameters (old versions)
```javascript
function functionName(parameter) {
if (parameter == undefined) {
parameter = value;
}
//code here
return <expression>;
}
```
### Function Expressions
```javascript
let functionName = function(parameters) {
//code here
return expression;
}
```
### Arrow Functions
```javascript
(input) => { /* statements */ }
(input) => expression;
input => expression; // parenthesis are optional
() => expression; // no parameters syntax
// variants
let func = (input) => {
// code here
};
let func = (input) => expression;
let func = input => expression;
func(); // function call
// return object literal
let func = (value) => ({property: value});
```
## Object Oriented Programming
An object is a collection of related data and/or functionality.
**Note**: It's not possible to transform a variable in an object simply by using the object assignment.
```js
let variable = value;
// object literal
let obj = {
property: value,
variable, // same as variable: variable
    [property]: value, // dynamic prop name
    object: {
        ...
    },
    method: function() {
        // code here
        this.propertyName; // reference to object property inside the object
    },
    arrowMethod: () => {
        obj.propertyName; // "this" does not refer to the object here, use the full object name
    }
};
// access to property (non-existent properties will return undefined)
obj.property; // dot notation
obj["property"]; // array notation
// property modification (will add property if missing)
obj.property = value; // dot notation
obj["property"] = value; // array notation
obj.func(); //method access
delete obj.propertyName; // delete property
Object.keys(obj); // list of all property names
Object.entries(obj); // list contents as key-value pairs
```
### Constructors and object instances
JavaScript uses special functions called **constructor functions** to define and initialize objects and their features.
Notice that it has all the features you'd expect in a function, although it doesn't return anything or explicitly create an object — it basically just defines properties and methods.
```js
// constructor function definition
function Class(params) {
this.property = param;
this.method = function(params) { /* code here */ }
}
let obj = new Class(params); // object instantiation
let obj = new Object(); // creates empty object
let obj = new Object({
// JSON
});
```
### Prototypes
Prototypes are the mechanism by which JavaScript objects *inherit* features from one another.
JavaScript is often described as a **prototype-based language**; to provide inheritance, objects can have a prototype object, which acts as a template object that it inherits methods and properties from.
An object's prototype object may also have a prototype object, which it inherits methods and properties from, and so on.
This is often referred to as a **prototype chain**, and explains why different objects have properties and methods defined on other objects available to them.
If a method is implemented on an object (and not its prototype), then only that object will have that method, not all the objects that share the same prototype.
```js
// constructor function
function Obj(param1, ...) {
this.param1 = param1,
...
}
// method on the object
Obj.prototype.method = function(params) {
// code here (operate w/ this)
}
let obj = new Obj(args); // object instantiation
obj.method(); // call method from prototype
```
### Extending with prototypes
```js
// constructor function
function DerivedObj(param1, param2, ...) {
Obj.call(this, param1); // use prototype constructor
this.param2 = param2;
}
// extend Obj
DerivedObj.prototype = Object.create(Obj.prototype);
// method on object
DerivedObj.prototype.method = function() {
// code here (operate w/ this)
}
let dobj = new DerivedObj(args); // object instantiation
dobj.method(); // call method from prototype
```
### Classes (ES6+)
```js
class Obj {
    constructor(param1, ...) {
        this._param1 = param1;
        ...
    }

    get param1() { // getter (the backing field needs a different name to avoid infinite recursion)
        return this._param1;
    }

    func() {
        // code here (operate w/ this)
    }

    static func() { } // static method
}

// object instantiation
let obj = new Obj(param1, ...);
obj.func(); // call method
```
### Extending with Classes
```js
class DerivedObj extends Obj {
constructor(param1, param2, ...){
super(param1); // use superclass constructor
this.param2 = param2;
}
newFunc() { }
}
let dobj = new DerivedObj();
dobj.newFunc();
```
## Deconstruction
### Object deconstruction
```js
let obj = {
property: value,
...
}
let { var1, var2 } = obj; // extract values from object into variables
let { property: var1, property2 : var2 } = obj; // extract props in variables w/ specified names
let { property: var1, var2 = default_value } = obj; // use default values if object has less then expected props
```
### Array Deconstruction
```js
let array = [ 1, 2, 3, 4, 5, 6 ];
let [first, , third, , fifth, , seventh = "missing"] = array; // extract specific values from array (seventh falls back to its default)
```
## Serialization
```js
let object = {
// object attributes
}
let json = JSON.stringify(object); // serialize object in JSON
let json = { /* JSON */ };
let object = JSON.parse(json); // deserialize to Object
```
## Timing
### Timers
Function runs *once* after an interval of time.
```js
// param1, param2, ... are the arguments passed to the function (IE9+)
let timerId = setTimeout(func [, milliseconds, param1, param2, ... ]); // wait milliseconds before executing the code (params are read at execution time)
// works in IE9
let timerId = setTimeout(function(){
func(param1, param2);
}, milliseconds);
// Anonymous functions with arguments
let timerId = setTimeout(function(arg1, ...){
// code here
}, milliseconds, param1, ...);
clearTimeout(timerId) // cancel execution
// example of multiple consecutive schedules
let list = [1 , 2, 3, 4, 5, 6, 7, 8, 9, 10, "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", -1, -2, -3, -4, -5, -6, -7, -8, -9, -10]
function useTimeout(pos=0) {
setTimeout(function(){
console.log(list[pos]);
pos += 1; // update value for next call
if (pos < list.length) { // recursion exit condition
useTimeout(pos); // schedule next call with new value
}
}, 1_000, pos);
}
useTimeout();
```
### `let` vs `var` with `setTimeout`
```js
// let instantiates a new variable for each iteration
for (let i = 0; i < 3; ++i) {
setTimeout(function() {
console.log(i);
}, i * 100);
}
// output: 0, 1, 2
for (var i = 0; i < 3; ++i) {
setTimeout(function() {
console.log(i);
}, i * 100);
}
// output: 3, 3, 3
```
### Preserving the context
```js
let obj = {
prop: value,
    method1 : function() { /* statement */ },
method2 : function() {
let self = this // memorize context inside method (otherwise callback will not know it)
setTimeout(function() { /* code here (uses self) */ })
}
}
// better
let obj = {
prop: value,
    method1 : function() { /* statement */ },
method2 : function() {
setTimeout(() => { /* code here (uses this) */ }) // arrow func does not create new scope, this context preserved
}
}
```
### Intervals
Function runs regularly with a specified interval. JavaScript is **Single Threaded**.
```js
// param1, param2, ... are the arguments passed to the function (IE9+)
let timerId = setInterval(func, milliseconds [, param1, param2, ... ]); // (params are read at execution time)
// works in IE9
let timerId = setInterval(function(){
func(param1, param2);
}, milliseconds);
// Anonymous functions with arguments
let timerId = setInterval(function(arg1, ...){
// code here
}, milliseconds, param1, ...);
clearInterval(timerId); // cancel execution
```
## DateTime
A date consists of a year, a month, a day, an hour, a minute, a second, and milliseconds.
There are generally 4 types of JavaScript date input formats:
- **ISO Date**: `"2015-03-25"`
- Short Date: `"03/25/2015"`
- Long Date: `"Mar 25 2015"` or `"25 Mar 2015"`
- Full Date: `"Wednesday March 25 2015"`
```js
// constructors
new Date();
new Date(milliseconds);
new Date(dateString);
new Date(year, month, day, hours, minutes, seconds, milliseconds);
// accepts parameters similar to the Date constructor, but treats them as UTC. It returns the number of milliseconds since January 1, 1970, 00:00:00 UTC.
Date.UTC(year, month, day, hours, minutes, seconds, milliseconds);
//static methods
Date.now(); // returns the number of milliseconds elapsed since January 1, 1970 00:00:00 UTC.
// methods
let date = new Date();
date.toString(); // returns a string representing the specified Date object
date.toUTCString();
date.toDateString();
date.toTimeString(); // method returns the time portion of a Date object in human readable form in American English.
// get date
date.getMonth();
date.getMinutes();
date.getFullYear();
// set date
date.setFullYear(2020, 0, 14);
date.setDate(date.getDate() + 50);
// parse valid dates
let msec = Date.parse("March 21, 2012");
let date = new Date(msec);
```
### Comparing Dates
Comparison operators work also on dates
```js
let date1 = new Date();
let date2 = new Date("May 24, 2017 10:50:00");
if(date1 > date2){
console.log('break time');
} else {
console.log('stay in class');
}
```
## [Exports](https://developer.mozilla.org/en-US/docs/web/javascript/reference/statements/export)
[Firefox CORS not HTTP](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSRequestNotHttp)
**NOTE**: Firefox 68 and later define the origin of a page opened using a `file:///` URI as unique. Therefore, other resources in the same directory or its subdirectories no longer satisfy the CORS same-origin rule. This new behavior is enabled by default using the `privacy.file_unique_origin` preference.
```json
"privacy.file_unique_origin": "false"
```
In `page.html`
```html
<script src="scripts/module.js"></script>
<script src="scripts/script.js"></script>
```
In `module.js`:
```js
// exporting individual features
export default function() {} // one per module
export const func = () => expression; // zero or more per module
// Export list
export { name1, name2, …, nameN };
// Renaming exports
export { variable1 as name1, variable2 as name2, …, nameN };
// Exporting destructured assignments with renaming
export const { name1, name2: bar } = o;
// re-export
export { func } from "other_script.js"
```
In `script.js`:
```js
import default_func_alias, { func as alias } from "./module.js"; // import default and set alias
import { default as default_func_alias, func as alias } from "./module.js"; // import default and set alias
// use imported functions
default_func_alias();
alias();
```
```js
import * as module from "./module.js"; // import everything as a namespace object
module.function(); // use imported content with fully qualified name
```

# jQuery Library
## Including jQuery
### Download and link the file
```html
<head>
<script src="jquery-x.x.x.min.js"></script>
</head>
```
### Use a CDN
```html
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/x.x.x/jquery.min.js"></script>
</head>
<!-- OR -->
<head>
<script src="https://ajax.aspnetcdn.com/ajax/jQuery/jquery-x.x.x.min.js"></script>
</head>
```
### What is a CDN
A **content delivery network** or **content distribution network** (CDN) is a large distributed system of servers deployed in multiple data centers across the Internet.
The goal of a CDN is to serve content to end-users with high availability and high performance.
CDNs serve a large fraction of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social networks.
## HTML Manipulation
### [Finding DOM elements](https://api.jquery.com/category/selectors/)
```js
$('tag');
$("#id");
$(".class");
```
### Manipulating DOM elements
```js
$("p").addClass("special");
```
```html
<!-- before -->
<p>Welcome to jQuery</p>
<!-- after -->
<p class="special">Welcome to jQuery</p>
```
### Reading Elements
```html
<a id="yahoo" href="http://www.yahoo.com" style="font-size:20px;">Yahoo!</a>
```
```js
// find it & store it
var link = $('a#yahoo');
// get info about it
link.html(); // 'Yahoo!'
link.attr('href'); // 'http://www.yahoo.com'
link.css('font-size'); // '20px'
```
### Modifying Elements
```js
// jQuery
$('a').html('Yahoo!');
$('a').attr('href', 'http://www.yahoo.com');
$('a').css({'color': 'purple'});
```
```html
<!-- before -->
<a href="http://www.google.com">Google</a>
<!-- after -->
<a href="http://www.yahoo.com" style="color:purple">Yahoo</a>
```
### Create, Store, Manipulate and inject
```js
let paragraph = $('<p class="intro">Welcome</p>'); // create and store element
paragraph.css('property', 'value'); // manipulate element
$("body").append(paragraph); // inject in DOM
```
### Regular DOM Nodes to jQuery Objects
```js
var paragraphs = $('p'); // an array
var aParagraph = paragraphs[0]; // a regular DOM node
var $aParagraph = $(paragraphs[0]); // a jQuery Object
// can also use loops
for(var i = 0; i < paragraphs.length; i++) {
var element = paragraphs[i];
var paragraph = $(element);
paragraph.html(paragraph.html() + ' WOW!');
}
```
## [Events](https://api.jquery.com/category/events/)
```js
var onButtonClick = function() {
console.log('clicked!');
}
// with named callback & .on
$('button').on('click', onButtonClick);
// with anonymous callback & .on
$('button').on('click', function(){
console.log('clicked!');
});
// with .click & named callback
$('button').click(onButtonClick);
```
### Preventing Default Event
```js
$('selector').on('event', function(event) {
event.preventDefault();
// custom logic
});
```
## Plugins
In the HTML, add a `<script>` tag that hotlinks to the CDN or source file:
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-validate/1.17.0/jquery.validate.min.js"><script>
```
In the JavaScript call the jQuery plugin on the DOM:
```js
$("form").validate();
```
**NOTE**: always link to the [minified](https://developers.google.com/speed/docs/insights/MinifyResources) js files.
## More jQuery
### Patters & Anti-Patterns
```js
// Pattern: name variables with $var
$myVar =$('#myNode');
// Pattern: store references to callback functions
var myCallback = function(argument){
// do something cool
};
$(document).on('click', 'p', myCallback);
// Anti-pattern: anonymous functions
$(document).on('click', 'p', function(argument){
// do something anonymous
});
```
### Chaining
```js
banner.css('color', 'red');
banner.html('Welcome!');
banner.show();
// same as:
banner.css('color', 'red').html('Welcome!').show();
// same as:
banner.css('color', 'red')
.html('Welcome!')
.show();
```
### DOM Readiness
DOM manipulation and event binding doesn't work if the `<script>` is in the `<head>`
```js
$(document).ready(function() {
// the DOM is fully loaded
});
$(window).on('load', function(){
// the DOM and all assets (including images) are loaded
});
```
## AJAX (jQuery `1.5`+)
```js
$.ajax({
method: 'POST',
url: 'some.php',
data: { name: 'John', location: 'Boston'}
})
.done(function(msg){alert('Data Saved: '+ msg);})
.fail(function(jqXHR, textStatus){alert('Request failed: '+ textStatus);});
```

# [React Router](https://reactrouter.com)
Popular routing library. Allows to specify a route through React components, declaring which component is to be loaded for a given URL.
Key Components:
- **Router**: wrap the app entry-point, usually `BrowserRouter`
- **Route**: "Load this component for this URL"
- **Link**: react-managed anchors that won't post back to the browser
## Routers
Router Types:
- *HashRouter*: `#route`, adds hashes to the URLs
- *BrowserRouter*: `/route`, uses HTML5 history API to provide clean URLs
- *MemoryRouter*: no URL
```js
// index.js
//other imports ...
import { BrowserRouter as Router } from "react-router-dom";
ReactDOM.render(
<Router>
<App />
</Router>,
document.getElementById("DomID");
)
```
```js
// Component.js
import { Routes, Route } from "react-router-dom";
<div>
{/* match route pattern exactly, all sub-routes will be matched otherwise */}
<Route path="/" exact element={<Component props={props} />} />
<Route path="/route" element={<Component props={props} />} />
...
</div>
// only one child can match, similar to a switch-case
<Routes>
<Route path="/" exact element={<Component props={props} />} />
<Route path="/route" element={<Component props={props} />} />
<Route component={PageNotFound} /> {/* matches all non-existent URLs */}
</Routes>
```
### URL Parameters & Query String
```js
// Given
<Route path="/route/:placeholder" element={<Component props={props} />} />
// URL: app.com/route/sub-route?param=value
function Component(props) {
props.match.params.placeholder; // sub-route
props.location.query; // { param: value }
props.location.pathname; // /route/sub-route?param=value
}
```
### Redirecting
```js
import { Navigate } from "react-router-dom";
// redirects to another URL on render, shouldn't be rendered on component mount but after an action
<Navigate to="/route" />
<Navigate from="/old-route" to="/new-route" />
{ condition && <Navigate to="/route" /> } // redirect if condition is true
// or redirect manipulating the history (always in props)
props.history.push("/new-route");
```
### Prompts
```js
import { Prompt } from "react-router-dom";
// displays a prompt when the condition is true
<Prompt when={condition} message="prompt message" />
```
## Link
Clicks on a link created with React Router will be captured by React and all the routing will happen client side.
```js
import { Link } from "react-router-dom";
// TARGET: <Route path="/route/:itemId" />
<Link to="/route/1">Text</Link>
// add styling attributes to the rendered element when it matches the current URL.
<NavLink to="/route" exact activeClassName="class">Text</NavLink>
<NavLink to="/route" activeStyle={ { cssProp: value } }>Text</NavLink>
```

# Testing React
## [Jest](https://jestjs.io/)
### Jest Configuration
```js
// jest.config.js
module.exports = {
testEnvironment: 'jsdom',
moduleFileExtensions: ['js', 'jsx', 'ts', 'tsx', 'json', 'node'],
setupFilesAfterEnv: ['@testing-library/jest-dom/extend-expect'], // add testing-library methods to expect()
    transform: { '^.+\\.tsx?$': 'ts-jest' } // use ts-jest for ts files
}
```
### Jest Tests
[Expect docs](https://jestjs.io/docs/expect)
```js
// .spec.js or .test.js
it("test description", () => {
// test body
expect(expected).toEqual(actual);
});
// group related tests
describe("test group name", () => {
it(/* ... */);
it(/* ... */);
});
```
### Snapshots
In `Component.Snapshots.js`:
```js
import React from "react";
import renderer from "react-test-renderer";
import Component from "./path/to/Component";
// import mock data if necessary
it("test description", () => {
// renders the DOM tree of the component
const tree = renderer.create(<Component funcProp={jest.fn() /* mock function */} /* component props */ />);
// save a snapshot of the component at this point in time ( in __snapshots__ folder)
// in future test it will be checked to avoid regressions
// can be updated during jest --watch pressing "u"
    expect(tree).toMatchSnapshot();
});
```
---
## [Enzyme](https://enzymejs.github.io/enzyme/)
### Enzyme Configuration
```js
// testSetup.js
import { configure } from "enzyme";
import Adapter from "enzyme-adapter-react-<version>";
configure({ adapter: new Adapter() });
```
### Enzyme Tests
In `Component.test.js`:
```js
import React from "react";
import { shallow, mount } from "enzyme";
// eventual wrapper components (react-router, react-redux's provider, ...) for mount render
// shallow renders single component w/o children, no DOM generated
// mount renders component w/ it's children
import Component from "./path/to/Component";
// factory to setup shallow test easily
function testHelper(args) {
const defaultProps = { /* default value for props in each test */ };
const props = { ...defaultProps, ...args };
return shallow(<Component {...props} />);
}
// shallow rendering test
it("test description", () => {
const dom = testHelper(/* optional args */);
// or
const dom = shallow(<Component /* props */ />);
// check a property of expected component
// selector can be from raw JSX (name of a component)
expect(dom.find("selector").property).toBe(expected);
});
// mount rendering test
if("test description" () => {
const dom = mount(
<WrapperComponent>
<Component /* props *//>
</WrapperComponent>
);
// selector has to be HTML selector since the component is rendered completely
// possible to test child components
expect(dom.find("selector").property).toBe(expected);
});
```
---
## [React Testing Library](https://testing-library.com/docs/react-testing-library/intro/)
Encourages to write test based on what the user sees. So components are always *mounted* and fully rendered.
### React Testing Library Tests
In `Components.test.js`:
```js
import React from "react";
import { cleanup, render } from "@testing-library/react";
import Component from "./path/to/Component";
afterEach(cleanup);
// factory to setup test easily
function testHelper(args) {
const defaultProps = { /* default value for props in each test */ };
const props = { ...defaultProps, ...args };
return render(<Component {...props} />);
}
it("test description", () => {
const { getByText } = testHelper();
// react testing library func
getByText("text"); // check if test is present in the rendered component
});
```

# React
## Components
There are two types of react components:
- Function Components
- Class Components
Both types can be stateful and have side effects or be purely presentational.
```jsx
// functional component
const Component = (props) => {
return (
<domElementOrComponent... />
);
}
// class component
class Component extends React.Component {
    render() {
        return (
            <domElementOrComponent... />
        );
    }
}
```
*NOTE*: a component name *must* start with an uppercase letter.
Every component has two inputs: *props* and *state*. The props input is explicit while the state is implicit.
State is used to determine the changes and when to re-render.
Within the component state can be changed while the props object represent fixed input values.
JSX syntax can represent HTML but gets converted to pure JavaScript before being sent to the browser:
```js
// JSX
const element = (
<h1 className="greeting">Hello, world!</h1>
);
// compiled JS shipped to browser
const element = React.createElement(
'h1', // HTML tag name
{className: 'greeting'}, // attrs as JSON
'Hello, world!' // tag content (can be nested component)
);
```
### App Entry-point
```js
const container = document.getElementById('root')!;
const root = createRoot(container);
const element = <h1s>Hello World</h1>
root.render(element)
```
### Dynamic Expressions
```js
<tag>{expression}</tag> // expression is evaluated an it's result is displayed
<tag onEvent={funcReference}>{expression}</tag>
<tag onEvent={() => func(args)}>{expression}</tag>
```
### Props
```js
<Component propName={value} /> // pass a value the component
<Component propName={funcReference} /> // pass a function to the component
function Component(props) {
// use props with {props.propName}
}
class Component extends React.Component {
    render() {
        // use props with {this.props.propName}
    }
}
```
### Simple Function Component
```js
// Button.js
import { useState } from "react";
function Button() {
const [count, setCount] = useState(0); // hook
    const handleClick = () => setCount(count + 1); // logic
// JSX
return (
        <button onClick={handleClick}>
{count}
</button>
);
}
export default Button;
```
### Simple Class Component
```js
class Button extends React.Component {
state = {count: 0};
//or
constructor(props) {
super(props);
this.state = {count: 0};
}
componentDidMount() {} // called on successful component mount
handleClick = () => {
this.setState({ count: this.state.count + 1 });
}
// or
handleClick = () => {
this.setState((state, props) => ({ count: state.count + props.increment }) );
}
render(){
return (
<button onClick={this.handleClick}>
{this.state.count}
</button>
);
}
}
```
### Nesting Components
```js
import { useState } from "react";
function Button(props) {
return (
<button onClick={props.onClickFunc}>
+1
</button>
);
}
function Display (props) {
return (
<div>{props.message}</div>
);
}
function App() {
    // state must be declared in the outer component so it can be passed down to each child
const [count, setCount] = useState(0);
const incrementCounter = () => setCount(count + 1);
return (
<div className="App">
<Button onClickFunc={incrementCounter}/>
<Display message={count}/>
</div>
);
}
export default App;
```
### User Input (Forms)
```js
function Form() {
const [userName, setUserName] = useState("");
    const handleSubmit = (event) => {
event.preventDefault();
// ...
}
return(
<form onSubmit={handleSubmit}>
<input
type="text"
value={userName} // controlled component
onChange={(event) => setUserName(event.target.value)} // needed to update UI on dom change
required
/>
<button></button>
</form>
);
}
```
### Lists of Components
```js
// ...
<div>
    {array.map(item => <Component key={uniqueID} />)}
</div>
// ...
```
**NOTE**: The `key` attribute of the component is needed to identify a particular item. It's most useful if the list has to be sorted.
## Hooks
### `useState`
Hook used to create a state object.
`useState()` results:
- state object (getter)
- updater function (setter)
```js
const [state, setState] = useState(initialValue);
```
### `useEffect`
Hook used to trigger an action on each render of the component or when one of the watched items changes.
```js
useEffect(() => {
// "side effects" operations
return () => {/* clean up side effect */} // optional
}, [/* list of watched items, empty triggers once */]);
```
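A possible data-fetching sketch (the component name and endpoint are hypothetical):
```js
import { useState, useEffect } from "react";

function UserBadge({ userId }) { // hypothetical component and endpoint
    const [user, setUser] = useState(null);

    useEffect(() => {
        let cancelled = false;

        fetch(`/api/users/${userId}`) // placeholder URL
            .then(res => res.json())
            .then(data => {
                if (!cancelled) setUser(data); // ignore the result after cleanup
            });

        return () => { cancelled = true; }; // cleanup: runs before the next effect and on unmount
    }, [userId]); // re-run the effect only when userId changes

    return <span>{user ? user.name : "loading..."}</span>;
}
```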
### Custom Hooks
```js
// hook definitions
const useCustomHook = () => {
// eventual state definitions
// eventual function definitions
// ...
return { obj1, obj2, ... };
}
function Component() {
// retrieve elements from the hook
const {
obj1,
obj2,
...
} = useCustomHook();
}
```

# Redux Testing
## Tests for Connected Components
Connected components are wrapped in a call to `connect`. Way of solving the problem:
- Wrap component with `<Provider>`. Added benefit: new store dedicated to tests.
- Add named export for unconnected component.
In `Component.js`:
```js
export function Component(props) { /* ... */ } // export unconnected component
export default connect(mapStateToProps, mapDispatchToProps)(Component) // default export of connected component
```
In `Component.test.js`:
```js
import React from "react";
// import enzyme or react testing library
// import mock data
import { Component } from "path/to/Component"; // import unconnected component
// factory to setup test easily
function testHelper(args) {
const defaultProps = {
        /* default values for props in each test and required props */
        history: {} // normally injected by react-router, could also import the router
};
const props = { ...defaultProps, ...args };
return mount(<Component {...props} />); // or render if using react testing library
}
it("test description", () => {
const dom = testHelper();
// simulate page iteration
dom.find("selector").simulate("<event>");
// find changed component
// test expected behaviour of component
});
```
## Tests for Action Creators
```js
import * as actions from "path/to/actionCreators";
// import eventual action types constants
// import mock data
it("test description", () => {
const data = /* mock data */
const expectedAction = { type: TYPE, /* ... */ };
const actualAction = actions.actionCreator(data);
expect(actualAction).toEqual(expectedAction);
});
```
## Tests for Reducers
```js
import reducer from "path/to/reducer";
import * as actions from "path/to/actionCreators";
it("test description", () => {
const initialState = /* state before the action */;
const finalState = /* expected state after the action */
const data = /* data passed to the action creator */;
const action = actions.actionCreator(data);
const newState = reducer(initialState, action);
expect(newState).toEqual(finalState);
});
```
## Tests for the Store
```js
import { createStore } from "redux";
import rootReducer from "path/to/rootReducer";
import initialState from "path/to/initialState";
import * as actions from "path/to/actionCreators";
it("test description", () => {
    const store = createStore(rootReducer, initialState);
const expectedState = /* state after the update */
const data = /* action creator input */;
const action = actions.actionCreator(data);
store.dispatch(action);
const state = store.getState();
expect(state).toEqual(expectedState);
});
```
## Tests for Thunks
Thunk testing requires the mocking of:
- store (using `redux-mock-store`)
- HTTP calls (using `fetch-mock`)
```js
import thunk from "redux-thunk";
import fetchMock from "fetch-mock";
import configureMockStore from "redux-mock-store";
// needed for testing async thunks
const middleware = [thunk]; // mock middlewares
const mockStore = configureMockStore(middleware); // mock the store
import * as actions from "path/to/actionCreators";
// import eventual action types constants
// import mock data
describe("Async Actions", () => {
afterEach(() => {
fetchMock.restore(); // init fetch mock for each test
});
it("test description", () => {
// mimic API call
fetchMock.mock(
"*", // capture any fetch call
{
body: /* body contents */,
headers: { "content-type": "application/json" }
}
);
// expected action fired from the thunk
const expectedActions = [
{ type: TYPE, /* ... */ },
{ type: TYPE, /* ... */ }
];
const store = mockStore({ data: value, ... }); // init mock store
return store.dispatch(actions.actionCreator()) // act
.then(() => {
expect(store.getActions()).toEqual(expectedActions); // assert
});
});
});
```
# [Redux](https://redux.js.org/)
Redux is a pattern and library for managing and updating application state, using events called *actions*. It serves as a centralized store for state that needs to be used across the entire application, with rules ensuring that the state can only be updated in a predictable fashion.
## Actions, Store, Immutability & Reducers
### Actions & Action Creators
An **Action** is a plain JavaScript object that has a `type` field. An action object can have other fields with additional information about what happened.
By convention, that information is stored in a field called `payload`.
**Action Creators** are functions that create and return action objects.
```js
function actionCreator(data)
{
return { type: ACTION_TYPE, payload: data }; // action obj
}
```
### Store
The current Redux application state lives in an object called the **store**.
The store is created by passing in a reducer, and has a method called `getState` that returns the current state value.
The Redux store has a method called `dispatch`. The only way to update the state is to call `store.dispatch()` and pass in an action object.
The store will run its reducer function and save the new state value inside.
**Selectors** are functions that know how to extract specific pieces of information from a store state value.
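A minimal sketch of a selector, assuming an illustrative state shape (none of the names below come from these notes):
```js
// selectors.js (illustrative names and state shape)
export const selectTodos = (state) => state.todos;

export const selectCompletedTodos = (state) =>
  state.todos.filter((todo) => todo.completed);

// usage: const completed = selectCompletedTodos(store.getState());
```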
In `initialState.js`:
```js
export default {
// initial state here
}
```
In `configStore.js`:
```js
// configStore.js
import { createStore, applyMiddleware, compose } from "redux";
import rootReducer from "path/to/rootReducer"; // reducer passed to createStore below
export function configStore(initialState) {
const composeEnhancers =
window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose; // support for redux devtools
return createStore(
rootReducer,
initialState,
composeEnhancers(applyMiddleware(middleware, ...))
);
}
// available functions & methods
replaceReducer(newReducer); // replace an existing reducer, useful for Hot Reload
store.dispatch(action); // trigger a state change based on an action
store.subscribe(listener);
store.getState(); // retrieve current state
```
### Reducers
**Reducers** are functions that receive the current state and an action, decide how to update the state if necessary, and return the new state.
Reducers must **always** follow some specific rules:
- They should only calculate the new state value based on the `state` and `action` arguments
- They are not allowed to modify the existing `state`.
Instead, they must make *immutable updates*, by copying the existing `state` and making changes to the copied values.
- They must not do any asynchronous logic, calculate random values, or cause other "side effects"
```js
import initialState from "path/to/initialState";
function reducer(state = initialState, action) {
switch(action.type){
case "ACTION_TYPE":
return { ...state, prop: value }; // return modified copy of state (using spread operator)
break;
default:
return state; // return unchanged state (NEEDED)
}
}
// combining reducers
import { combineReducers } from "redux";
const rootReducer = combineReducers({
entity: entityReducer,
...
});
```
**NOTE**: multiple reducers can be triggered by the same action since each one operates on a different portion of the state.
## [React-Redux](https://react-redux.js.org/)
### Container vs Presentational Components
Container Components:
- Focus on how things work
- Aware of Redux
- Subscribe to Redux State
- Dispatch Redux actions
Presentational Components:
- Focus on how things look
- Unaware of Redux
- Read data from props
- Invoke callbacks on props
### Provider Component & Connect
Used at the root component and wraps all the application.
```js
// index.js
import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import { configStore } from 'path/to/configStore';
import initialState from "path/to/initialState";
import App from './App';
const store = configStore(initialState);
const rootElement = document.getElementById('root');
ReactDOM.render(
<Provider store={store}>
<App />
</Provider>,
rootElement
);
```
```js
// Component.js
import { connect } from 'react-redux';
import { bindActionCreators } from 'redux';
import { increment, decrement, reset } from './actionCreators';
// const Component = ...
// specifies which state is passed to the component (called on state change)
const mapStateToProps = (state, ownProps /* optional */) => {
// structure of the props passed to the component
return { propName: state.property };
};
// specifies the actions passed to the component as props (the key is the name the prop will have)
const mapDispatchToProps = { actionCreator: actionCreator };
// or
function mapDispatchToProps(dispatch) {
return {
// wrap action creators
actionCreator: (args) => dispatch(actionCreator(args))
};
}
// or
function mapDispatchToProps(dispatch) {
return {
actionCreator: bindActionCreators(actionCreator, dispatch),
actions: bindActionCreators(allActionCreators, dispatch)
};
}
// both args are optional
// if mapDispatch is missing the dispatch function is added to the props
export default connect(mapStateToProps, mapDispatchToProps)(Component);
```
## Async Operations with [Redux-Thunk](https://github.com/reduxjs/redux-thunk)
**Note**: Redux middleware runs *after* an action is dispatched and *before* its reducer.
Redux-Thunk allows action creators to return functions instead of objects.
A "thunk" is a function that wraps an expression to delay its evaluation.
In `configStore.js`:
```js
import { createStore, applyMiddleware, compose } from "redux";
import thunk from "redux-thunk";
import rootReducer from "path/to/rootReducer"; // reducer passed to createStore below
function configStore(initialState) {
const composeEnhancers =
window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose; // support for redux devtools
return createStore(
rootReducer,
initialState,
composeEnhancers(applyMiddleware(thunk, ...)) // add thunks middleware
);
}
```
```js
// usually action on async func success
function actionCreator(arg) {
return { type: TYPE, data: arg };
}
export function thunk() {
return function (dispatch) { // redux-thunk injects dispatch as arg
return asyncFunction().then((data) => { // async function returns a promise
dispatch(actionCreator(data));
})
.catch((error) => {
throw error;
});
};
}
// or using async/await
export function thunk() {
return async function (dispatch) { // redux-thunk injects dispatch as arg
try {
let data = await asyncFunction();
return dispatch(actionCreator(data));
} catch(error) {
throw error;
}
}
}
```
## [Redux-Toolkit](https://redux-toolkit.js.org/)
The Redux Toolkit package is intended to be the standard way to write Redux logic. It was originally created to help address three common concerns about Redux: configuring a store is too complicated, many packages are needed to do anything useful, and Redux requires too much boilerplate code.
Redux Toolkit also includes a powerful data fetching and caching capability dubbed "RTK Query". It's included in the package as a separate set of entry points. It's optional, but can eliminate the need to hand-write data fetching logic yourself.
These tools should be beneficial to all Redux users. Whether you're a brand new Redux user setting up your first project, or an experienced user who wants to simplify an existing application, Redux Toolkit can help you make your Redux code better.
### Installation

Using Create React App: the recommended way to start new apps with React and Redux is by using the official Redux+JS template or Redux+TS template for Create React App, which takes advantage of Redux Toolkit and React Redux's integration with React components.
```sh
# Redux + Plain JS template
npx create-react-app my-app --template redux
# Redux + TypeScript template
npx create-react-app my-app --template redux-typescript
```
Redux Toolkit includes these APIs:
- [`configureStore()`][cfg_store]: wraps `createStore` to provide simplified configuration options and good defaults.
It automatically combines slice reducers, adds whatever Redux middleware is supplied, includes redux-thunk by default, and enables use of the Redux DevTools Extension.
- [`createReducer()`][new_reducer]: lets you supply a lookup table of action types to case reducer functions, rather than writing switch statements.
In addition, it automatically uses the `immer` library to let you write simpler immutable updates with normal mutative code, like `state.todos[3].completed = true`.
- [`createAction()`][new_action]: generates an action creator function for the given action type string.
The function itself has `toString()` defined, so that it can be used in place of the type constant.
- [`createSlice()`][new_slice]: accepts an object of reducer functions, a slice name, and an initial state value, and automatically generates a slice reducer with corresponding action creators and action types.
- [`createAsyncThunk`][new_async_thunk]: accepts an action type string and a function that returns a promise, and generates a thunk that dispatches pending/fulfilled/rejected action types based on that promise
- [`createEntityAdapter`][entity_adapt]: generates a set of reusable reducers and selectors to manage normalized data in the store
- The `createSelector` utility from the Reselect library, re-exported for ease of use (a usage sketch follows the link list below).
[cfg_store]: https://redux-toolkit.js.org/api/configureStore
[new_reducer]: https://redux-toolkit.js.org/api/createReducer
[new_action]: https://redux-toolkit.js.org/api/createAction
[new_slice]: https://redux-toolkit.js.org/api/createSlice
[new_async_thunk]: https://redux-toolkit.js.org/api/createAsyncThunk
[entity_adapt]: https://redux-toolkit.js.org/api/createEntityAdapter
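A hedged usage sketch of the re-exported `createSelector` (state shape and names are illustrative): the result function is memoized and only recomputes when one of the input selectors returns a new value.
```js
import { createSelector } from '@reduxjs/toolkit'

// input selectors (illustrative state shape)
const selectTodos = (state) => state.todos
const selectFilter = (state) => state.filter

// recomputed only when todos or filter actually change
const selectVisibleTodos = createSelector(
  [selectTodos, selectFilter],
  (todos, filter) => todos.filter((todo) => todo.status === filter)
)
```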
### [`configureStore`](https://redux-toolkit.js.org/api/configureStore)
Included Default Middleware:
- Immutability check middleware: deeply compares state values for mutations. It can detect mutations in reducers during a dispatch, and also mutations that occur between dispatches.
When a mutation is detected, it will throw an error and indicate the key path for where the mutated value was detected in the state tree. (Forked from `redux-immutable-state-invariant`.)
- Serializability check middleware: a custom middleware created specifically for use in Redux Toolkit
Similar in concept to `immutable-state-invariant`, but deeply checks the state tree and the actions for non-serializable values such as functions, Promises, Symbols, and other non-plain-JS-data values
When a non-serializable value is detected, a console error will be printed with the key path for where the non-serializable value was detected.
- In addition to these development tool middleware, it also adds `redux-thunk` by default, since thunks are the basic recommended side effects middleware for Redux.
Currently, the return value of `getDefaultMiddleware()` is:
```js
// development
const middleware = [thunk, immutableStateInvariant, serializableStateInvariant]
// production
const middleware = [thunk]
```
```js
import { combineReducers } from 'redux'
import { configureStore } from '@reduxjs/toolkit'
import monitorReducersEnhancer from './enhancers/monitorReducers'
import loggerMiddleware from './middleware/logger'
import usersReducer from './usersReducer'
import postsReducer from './postsReducer'
const rootReducer = combineReducers({
users: usersReducer,
posts: postsReducer,
})
const store = configureStore({
// reducers combined automatically
reducer: rootReducer,
middleware: (getDefaultMiddleware) => getDefaultMiddleware().concat(loggerMiddleware),
enhancers: [monitorReducersEnhancer]
})
export default store
```
### [`createAction`](https://redux-toolkit.js.org/api/createAction)
```js
import { createAction } from '@reduxjs/toolkit';
const increment = createAction<number | undefined>('counter/increment');
const action = increment(); // { type: 'counter/increment' }
const action = increment(3); // { type: 'counter/increment', payload: 3 }
increment.toString(); // 'counter/increment'
```
### [`createReducer`](https://redux-toolkit.js.org/api/createReducer)
```js
import { createAction, createReducer } from '@reduxjs/toolkit'
interface CounterState {
value: number
}
const increment = createAction('counter/increment')
const decrement = createAction('counter/decrement')
const incrementByAmount = createAction<number>('counter/incrementByAmount')
const initialState = { value: 0 } as CounterState
const counterReducer = createReducer(initialState, (builder) => {
builder
.addCase(increment, (state, action) => {
state.value++
})
.addCase(decrement, (state, action) => {
state.value--
})
.addCase(incrementByAmount, (state, action) => {
state.value += action.payload
})
})
```
### [`createSlice`](https://redux-toolkit.js.org/api/createSlice)
A function that accepts an initial state, an object of reducer functions, and a "slice name", and automatically generates action creators and action types that correspond to the reducers and state.
Internally, it uses `createAction` and `createReducer`, so it's possible to use Immer to write "mutating" immutable updates.
**Note**: action types will have the `<slice-name>/<reducer-name>` shape.
```js
import { createSlice, PayloadAction } from '@reduxjs/toolkit'
interface CounterState {
value: number
}
const initialState = { value: 0 } as CounterState
const counterSlice = createSlice({
name: 'counter',
initialState,
reducers: {
increment(state) {
state.value++
},
decrement(state) {
state.value--
},
incrementByAmount(state, action: PayloadAction<number>) {
state.value += action.payload
},
},
})
export const { increment, decrement, incrementByAmount } = counterSlice.actions
export default counterSlice.reducer
```
### [`createAsyncThunk`](https://redux-toolkit.js.org/api/createAsyncThunk)
The function `createAsyncThunk` returns a standard Redux thunk action creator.
The thunk action creator function will have plain action creators for the pending, fulfilled, and rejected cases attached as nested fields.
The `payloadCreator` function will be called with two arguments:
- `arg`: a single value, containing the first parameter that was passed to the thunk action creator when it was dispatched.
- `thunkAPI`: an object containing all of the parameters that are normally passed to a Redux thunk function, as well as additional options:
- `dispatch`: the Redux store dispatch method
- `getState`: the Redux store getState method
- `extra`: the "extra argument" given to the thunk middleware on setup, if available
- `requestId`: a unique string ID value that was automatically generated to identify this request sequence
- `signal`: an `AbortController.signal` object that may be used to see if another part of the app logic has marked this request as needing cancellation.
- [...]
The logic in the `payloadCreator` function may use any of these values as needed to calculate the result.
```js
import { createAsyncThunk, createSlice } from '@reduxjs/toolkit'
const payloadCreator = async (arg, ThunkAPI): Promise<T> => { /* ... */ };
const thunk = createAsyncThunk("<action-type>", payloadCreator);
thunk.pending; // action creator that dispatches an '<action-type>/pending'
thunk.fulfilled; // action creator that dispatches an '<action-type>/fulfilled'
thunk.rejected; // action creator that dispatches an '<action-type>/rejected'
const slice = createSlice({
name: '<action-name>',
initialState,
reducers: { /* standard reducer logic, with auto-generated action types per reducer */ },
extraReducers: (builder) => {
// Add reducers for additional action types here, and handle loading state as needed
builder.addCase(thunk.fulfilled, (state, action) => { /* body of the reducer */ })
},
})
```
## RTK Query
RTK Query is provided as an optional addon within the `@reduxjs/toolkit` package.
It is purpose-built to solve the use case of data fetching and caching, supplying a compact but powerful toolset to define an API interface layer for the app.
It is intended to simplify common cases for loading data in a web application, eliminating the need to hand-write data fetching & caching logic yourself.
RTK Query is included within the installation of the core Redux Toolkit package. It is available via either of the two entry points below:
```js
import { createApi } from '@reduxjs/toolkit/query'
/* React-specific entry point that automatically generates hooks corresponding to the defined endpoints */
import { createApi } from '@reduxjs/toolkit/query/react'
```
RTK Query includes these APIs:
- [`createApi()`][new_api]: The core of RTK Query's functionality. It allows you to define a set of endpoints that describe how to retrieve data from a series of endpoints,
  including configuration of how to fetch and transform that data (a minimal sketch follows the link list below).
- [`fetchBaseQuery()`][fetch_query]: A small wrapper around fetch that aims to simplify requests. Intended as the recommended baseQuery to be used in createApi for the majority of users.
- [`<ApiProvider />`][api_provider]: Can be used as a Provider if you do not already have a Redux store.
- [`setupListeners()`][setup_listener]: A utility used to enable refetchOnMount and refetchOnReconnect behaviors.
[new_api]: https://redux-toolkit.js.org/rtk-query/api/createApi
[fetch_query]: https://redux-toolkit.js.org/rtk-query/api/fetchBaseQuery
[api_provider]: https://redux-toolkit.js.org/rtk-query/api/ApiProvider
[setup_listener]: https://redux-toolkit.js.org/rtk-query/api/setupListeners
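A minimal sketch of an API slice defined with `createApi` (the endpoint name, `reducerPath` and base URL are illustrative, not part of these notes); the React entry point also auto-generates one hook per endpoint.
```js
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react'

export const pokemonApi = createApi({
  reducerPath: 'pokemonApi',
  baseQuery: fetchBaseQuery({ baseUrl: 'https://pokeapi.co/api/v2/' }),
  endpoints: (builder) => ({
    getPokemonByName: builder.query({
      query: (name) => `pokemon/${name}`,
    }),
  }),
})

// auto-generated React hook, usable inside a component:
// const { data, error, isLoading } = useGetPokemonByNameQuery('bulbasaur')
export const { useGetPokemonByNameQuery } = pokemonApi
```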
# [Svelte](https://svelte.dev/docs)
```sh
npx degit sveltejs/template <project name>
# set project to use typescript
node scripts/setupTypeScript.js
# or using vite
npm init vite@latest
```
## App Entry-point
```js
import App from "./App.svelte"; // import the component
const app = new App({
target: document.body,
props: {
// props passed to the App component
},
});
export default app;
```
## Components (`.svelte`)
### Basic Structure
```html
<!-- code for the component -->
<script lang="ts">
import { Component } from "Component.svelte";
export let prop; // make a variable a prop
</script>
<!-- CSS for the component -->
<style>
/* CSS rules */
/* target elements outside of the current component */
:global(selector) {
}
</style>
<!-- html of the component -->
<!-- dynamic expressions -->
<div>{variable}</div>
<!-- nested components -->
<Component prop="{value}" />
```
### If-Else
```js
{#if <condition>}
// markup here
{:else if <condition>}
// markup here
{:else}
// markup here
{/if}
```
### Loops
```js
{#each array as item, index} // index is optional
// markup here
{/each}
{#each array as item (key)} // use key to determine changes
// markup here
{/each}
```
### Await Blocks
```js
{#await promise}
<p>...waiting</p>
{:then number}
<p>The number is {number}</p>
{:catch error}
<p style="color: red">{error.message}</p>
{/await}
```
### Event Handling
The full list of modifiers:
- `preventDefault` — calls `event.preventDefault()` before running the handler. Useful for client-side form handling, for example.
- `stopPropagation` — calls `event.stopPropagation()`, preventing the event reaching the next element
- `passive` — improves scrolling performance on touch/wheel events (Svelte will add it automatically where it's safe to do so)
- `nonpassive` — explicitly set `passive: false`
- `capture` — fires the handler during the capture phase instead of the bubbling phase
- `once` — remove the handler after the first time it runs
- `self` — only trigger handler if `event.target` is the element itself
```js
<script>
const eventHandler = () => {};
</script>
<button on:event={eventHandler}>
// or
<button on:event={() => eventHandler(args)}>
<button on:event|modifier={eventHandler}>
```
**NOTE**: It's possible to chain modifiers together, e.g. `on:click|once|capture={...}`.
## Binding
```html
<script>
let name = "Foo";
</script>
<!-- modify value in real time -->
<input bind:value={name} />
<input type="checkbox" bind:checked={boolean} />
<!-- ... -->
```
### Reactive declarations & Reactive Statements
Svelte automatically updates the DOM when the component's state changes.
Often, some parts of a component's state need to be computed from other parts and recomputed whenever they change.
For these, Svelte has reactive declarations. They look like this:
```js
let count = 0;
$: double = count * 2; // recalculated when count changes
// or
$: { }
$: <expression>
```
## Routing
[Svelte Routing](https://github.com/EmilTholin/svelte-routing)
```js
<!-- App.svelte -->
<script>
import { Router, Link, Route } from "svelte-routing";
import Home from "./routes/Home.svelte";
import About from "./routes/About.svelte";
import Blog from "./routes/Blog.svelte";
import BlogPost from "./routes/BlogPost.svelte";
export let url = "";
</script>
<Router url="{url}">
<nav>
<Link to="/">Home</Link>
<Link to="about">About</Link>
<Link to="blog">Blog</Link>
</nav>
<div>
<Route path="blog/:id" component="{BlogPost}" />
<Route path="blog" component="{Blog}" />
<Route path="about" component="{About}" />
<Route path="/"><Home /></Route>
</div>
</Router>
```
## Data Stores
```js
// stores.js
import { writable } from "svelte/store";
export const count = writable(0);
```
```html
<script>
import { onDestroy } from "svelte";
import { count } from "./path/to/stores.js";
const unsubscriber = count.subscribe((value) => {
// do stuff on load or value change
});
count.update((n) => n + 1);
count.set(1);
// or
$count = 1;
onDestroy(unsubscriber);
</script>
<!-- use $ to reference a store value -->
<p>{$count}</p>
```
# Kotlin
## Package & Imports
```kotlin
package com.app.uniqueID
import <package>
```
## Variable & Constants
```kotlin
var variable: Type //variable declaration
var variable = value //type can be omitted if it can be deduced by initialization
val CONSTANT_NAME: Type = value //constant declaration
```
### Nullable Variables
For a variable to hold a null value, it must be of a nullable type.
Nullable types are specified suffixing `?` to the variable type.
```kotlin
var nullableVariable: Type? = null
nullableVariable?.method() //correct way to use
//if var is null don't execute method() and return null
nullableVariable!!.method() //unsafe way
//!! -> ignore that var can be null
```
## Decision Statements
### `If` - `Else If` - `Else`
```kotlin
if (condition) {
//code here
} else if (condition) {
//code here
} else {
//code here
}
```
### Conditional Expressions
```kotlin
var variable: Type = if (condition) {
//value to be assigned here
} else if (condition) {
//value to be assigned here
} else {
//value to be assigned here
}
```
### `When` Expression
Each branch in a `when` expression is represented by a condition, an arrow (`->`), and a result.
If the condition on the left-hand side of the arrow evaluates to true, then the result of the expression on the right-hand side is returned.
Note that execution does not fall through from one branch to the next.
```kotlin
when (variable){
condition -> value
condition -> value
else -> value
}
//Smart casting
when (variable){
is Type -> value
is Type -> value
}
//instead of chain of if-else
when {
condition -> value
condition -> value
else -> value
}
```
## Loops
### `For` Loop
```kotlin
for (item in iterable){
//code here
}
//loop in a numerical range
for(i in start..end) {
//code here
}
```
## Functions
```kotlin
fun functionName(parameter: Type): Type {
//code here
return <expression>
}
```
### Simplifying Function Declarations
```kotlin
fun functionName(parameter: Type): Type {
return if (condition) {
//returned value
} else {
//returned value
}
}
fun functionName(parameter: Type): Type = if (condition) {
    //returned value
} else {
    //returned value
}
```
### Anonymous Functions
```kotlin
val anonymousFunction: (Type) -> Type = { input ->
//code acting on input here
}
val variableName: Type = anonymousFunction(input)
```
### Higher-order Functions
A function can take another function as an argument. Functions that use other functions as arguments are called *higher-order* functions.
This pattern is useful for communicating between components in the same way that you might use a callback interface in Java.
```kotlin
fun functionName(parameter: Type, function: (Type) -> Type): Type {
//invoke function
return function(parameter)
}
```
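A hedged usage sketch (the concrete function and values are illustrative): the function argument can be supplied as a trailing lambda.
```kotlin
fun transform(value: Int, operation: (Int) -> Int): Int {
    return operation(value)
}

fun main() {
    // trailing lambda: the last function argument is moved outside the parentheses
    val doubled = transform(21) { it * 2 }
    println(doubled) // 42
}
```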
## Object Oriented Programming
### Class
```kotlin
//primary constructor
class ClassName(private var attribute: Type) {
}
class ClassName {
private var var1: Type
//secondary constructor
constructor(parameter: Type) {
this.var1 = parameter
}
}
```
### Companion Object
[Companion Object Docs](https://kotlinlang.org/docs/tutorials/kotlin-for-py/objects-and-companion-objects.html)
```kotlin
class ClassName {
// in java: static
companion object {
// static components of the class
}
}
```
## Collections
### ArrayList
```kotlin
val array = ArrayList<Type>() // list init
array.add(item) // add an item to the list
```
# Markdown Notes
## Headings
```markdown
Heading 1
=========
Heading 2
---------
# Heading 1
## Heading 2
### Heading 3
```
## Text Formatting
```markdown
*Italic* _Italic_
**Bold** __Bold__
~~GitHub's strike-through~~
```
## Links & Images
```markdown
[link text](http://b.org "title")
[link text][anchor]
[anchor]: http://b.org "title"
![alt attribute](http://url/b.jpg "title")
![alt attribute][anchor]
[anchor]: http://url/b.jpg "title"
```
## Blockquotes & Lists
```markdown
> Blockquote
* unordered list - unordered list
* unordered list - unordered list
* unordered list - unordered list
1) ordered list 1. ordered list
2) ordered list 2. ordered list
3) ordered list 3. ordered list
- [ ] empty checkbox
- [x] checked checkbox
```
### Horizontal rule
```markdown
--- ***
```
## Code
````markdown
`inline code`

```lang
multi-line
code block
```
````
## Table
```markdown
| column label | column label | column label |
|:-------------|:------------:|--------------:|
| left-aligned | centered | right-aligned |
| row contents | row contents | row contents |
```
# Composer & Autoloading
## Autoloading
The function [spl_autoload_register()](https://www.php.net/manual/en/function.spl-autoload-register.php) allows registering a function that will be invoked when PHP needs to load a class/interface defined by the user.
In `autoload.php`:
```php
# custom function
function autoloader($class) {
# __DIR__ -> path of calling file
# $class -> className (hopefully same as file)
# if class is in namespace $class -> Namespace\className (hopefully folders mirror Namespace)
$class = str_replace("\\", DIRECTORY_SEPARATOR, $class); # avoid linux path separator issues
$fileName = sprintf("%s\\path\\%s.php", __DIR__, $class);
# or
$fileName = sprintf("%s\\%s.php", __DIR__, $class); # if class is in namespace
if (file_exists($fileName)) {
include $fileName;
}
}
spl_autoload_register('autoloader'); // register function
```
In `file.php`:
```php
require "autoload.php";
# other code
```
**NOTE**: this simple approach breaks if the classes use namespaces and the folder structure does not mirror them.
### Multiple Autoloading
It's possible to register multiple autoloading functions by calling `spl_autoload_register()` multiple times.
```php
# prepend adds the function at the start of the queue
# throws selects if errors in loading throw exceptions
spl_autoload_register(callable $func, $throw=TRUE, $prepend=FALSE);
spl_autoload_functions() # return a list of registered functions.
```
## [Composer](https://getcomposer.org/)
Open Source project for dependency management and autoloading of PHP libraries and classes.
Composer uses `composer.json` to define the project's dependencies on third-party libraries.
Libraries are downloaded through [Packagist](https://packagist.org/) and [GitHub](https://github.com/).
In `composer.json`:
```json
{
"require": {
"php": ">=7.0",
"monolog/monolog": "1.0.*"
}
}
```
### Installing Dependencies
In the same folder of `composer.json` execute `composer install`.
Composer will create:
- `vendor`: folder containing all requested libraries
- `vendor\autoload.php`: file for class autoloading
- `composer.lock`
Alternatively, `composer require <lib>` will add the library to the project and create a `composer.json` if missing.
**NOTE**: to ignore the php version use `composer <command> --ignore-platform-reqs`
### Updating Dependencies
To update dependencies use `composer update`. To regenerate only the autoloader (without updating dependencies) use `composer dump-autoload`.
### [Autoloading Project Classes](https://getcomposer.org/doc/04-schema.md#autoload)
[PSR-4 Spec](https://www.php-fig.org/psr/psr-4/)
Composer can also autoload classes belonging to the current project. It's sufficient to add the `autoload` keyword in the JSON and specify the path and autoload mode.
```json
{
"autoload": {
"psr-4": {
"RootNamespace\\": "src/",
"Namespace\\": "src/Namespace/"
},
"files": [
"path/to/file.php",
...
]
}
}
```
# Databases in PHP
## PHP Data Objects ([PDO][pdo])
[pdo]: https://www.php.net/manual/en/book.pdo.php
PDO is the PHP extension for database access through a single API. It supports various databases: MySQL, SQLite, PostgreSQL, Oracle, SQL Server, etc.
### Database Connection
```php
$dsn = "mysql:dbname=<dbname>;host=<ip>";
$user="<db_user>";
$password="<db_password>";
try {
$pdo = new PDO($dsn, $user, $password); # connect, can throw PDOException
} catch (PDOException $e) {
printf("Connection failed: %s\n", $e->getMessage()); # notify error
exit(1);
}
```
### Queries
To execute a query it's necessary to "prepare it" with *parameters*.
```php
# literal string with markers
$sql = 'SELECT fields
FROM tables
WHERE field <operator> :marker';
$stmt = $pdo->prepare($sql, $options_array); # returns PDOStatement, used to execute the query
$stmt->execute([ ':marker' => value ]); # substitute marker with actual value
# fetchAll returns all matches
$result = $stmt->fetchAll(); # result as associative array AND numeric array (PDO::FETCH_BOTH)
$result = $stmt->fetchAll(PDO::FETCH_ASSOC); # result as associative array
$result = $stmt->fetchAll(PDO::FETCH_NUM); # result as array
$result = $stmt->fetchAll(PDO::FETCH_OBJ); # result as object of stdClass
$result = $stmt->fetchAll(PDO::FETCH_CLASS, ClassName::class); # result as object of a specific class
```
### Parameter Binding
```php
# bindValue
$stmt = $pdo->prepare($sql);
$stmt->bindValue(':marker', value, PDO::PARAM_<type>);
$stmt->execute();
# bindParam
$stmt = $pdo->prepare($sql);
$variable = value;
$stmt->bindParam(':marker', $variable); # type optional
$stmt->execute();
```
### PDO & Data Types
By default PDO converts all results into strings since it is a generic driver for multiple databases.
It's possible to disable this behaviour by setting `PDO::ATTR_STRINGIFY_FETCHES` and `PDO::ATTR_EMULATE_PREPARES` to `false`.
**NOTE**: `FETCH_OBJ` and `FETCH_CLASS` return objects and don't have this behaviour.
```php
# signature: $pdo->setAttribute($attribute, $value)
$pdo->setAttribute(PDO::ATTR_STRINGIFY_FETCHES, false);
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
$stmt = $pdo->prepare($sql);
$stmt->execute([':marker' => value]);
$result = $stmt->fetchAll(PDO::FETCH_ASSOC);
```
### PDO Debug
```php
$stmt = $pdo->prepare($sql);
$stmt->execute([':marker' => value]);
$result = $stmt->fetchAll(PDO::FETCH_ASSOC);
$stmt->debugDumpParams(); # print the SQL query that has been sent to the database
```
## [SQLite3](https://www.php.net/manual/en/book.sqlite3.php)
```php
$db = new SQLite3("db_file.sqlite3"); // connection
$stmt = $db->prepare("SELECT fields FROM tables WHERE field <operator> :marker"); // prepare query
$stmt->bindParam(":marker", $variable); // param binding
$result = $stmt->execute(); // retrieve records
$records = $result->fetchArray(SQLITE3_ASSOC); // extract a record as an array (false if no results)
$result->finalize(); // close the result set, recommended before calling execute() again
```
**NOTE**: Result set objects retrieved by calling `SQLite3Stmt::execute()` on the same statement object are not independent, but rather share the same underlying structure. Therefore it is recommended to call `SQLite3Result::finalize()`, before calling `SQLite3Stmt::execute()` on the same statement object again.
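A minimal sketch of the recommended pattern (table and column names are illustrative):
```php
$stmt = $db->prepare("SELECT name FROM users WHERE id = :id");

$stmt->bindValue(":id", 1, SQLITE3_INTEGER);
$result = $stmt->execute();
$first = $result->fetchArray(SQLITE3_ASSOC);
$result->finalize(); // release the result set before reusing the statement

$stmt->bindValue(":id", 2, SQLITE3_INTEGER);
$result = $stmt->execute(); // safe to execute the same statement again
$second = $result->fetchArray(SQLITE3_ASSOC);
```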
# Dependency Injection
Explicit definition of a class's dependencies, injected through the constructor or through *getters*/*setters*.
```php
class Foo
{
private PDO $pdo;

public function __construct(PDO $pdo) // depends on PDO
{
$this->pdo = $pdo;
}
}
```
## Dependency Injection Container
The **Dependency Injection Container** (DIC) allows collecting all the dependencies in a single `Container` class. Some containers offer automatic resolution of the dependencies.
## [PHP-DI](https://php-di.org/)
The dependency injection container for humans. Installation: `composer require php-di/php-di`
- **Autowire** functionality: the ability of the container to create and inject the dependency automatically.
- Use of [Reflection](https://www.php.net/manual/en/intro.reflection.php)
- Configuration of the container through annotations & PHP code.
```php
class Foo
{
private $bar;
public function __construct(Bar $bar) // depends on Bar
{
$this->bar = $bar;
}
}
class Bar{}
$container = new DI\Container(); // DI Container
$foo = $container->get('Foo'); // get instance of Foo (automatic DI of Bar)
```
### DIC Configuration
```php
// Foo.php
class Foo
{
public function __construct(PDO $pdo) // depends on PDO
{
$this->pdo = $pdo;
}
}
```
```php
// config.php
use Psr\Container\ContainerInterface;
// config "primitive" dependencies (dependency => construct & return)
return [
'dsn' => 'sqlite:db.sq3',
PDO::class => function(ContainerInterface $c) {
return new PDO($c->get('dsn'));
},
...
];
```
```php
$builder = new \DI\ContainerBuilder();
$builder->addDefinitions("config.php"); // load config
$container = $builder->build(); // construct container
$cart = $container->get(Foo::class); // Instantiate & Inject
```
**NOTE**: `get("className")` requires the explicit definition of `className` in the config file. `get(ClassName::class)` does not.
# PHP
[PHP Docs](https://www.php.net/docs.php)
```php
declare(strict_types=1); # activates variable type checking on function arguments
# single line comment
//single line comment
/* multi line comment */
```
## Include, Require
```php
include "path\\file.php"; # import an external php file, E_WARNING if fails
include_once "path\\file.php"; # imports only if not already loaded
require "path\\file.php"; # import an external php file, E_COMPILE_ERROR if fails
require_once "path\\file.php"; # imports only if not already loaded
```
### Import configs from a file with `include`
In `config.php`:
```php
//config.php
//store configuration options in associative array
return [
setting => value,
setting => value,
];
```
```php
$config = include "config.php"; // retrieve config and store into variable
```
## Namespace
[PSR-4 Spec](https://www.php-fig.org/psr/psr-4/)
```php
namespace Foo\Bar\Baz; # set namespace for all file contents, \ for nested namespaces
use <PHP_Class>; # inside a namespace unqualified names resolve locally, so global PHP classes must be imported (or prefixed with \)
# namespace for only a block of code
namespace Foo\Bar\Baz {
function func() {
# coded here
}
};
Foo\Bar\Baz\func(); # use function from Foo\Bar\Baz without USE instruction
use Foo\Bar\Baz\func; # import namespace
func(); # use function from Foo\Bar\Baz
use Foo\Bar\Baz\func as f; # use function with an alias
f(); # use function from Foo\Bar\Baz
use Foo\Bar\Baz as fbb; # use namespace with alias
fbb\func(); # use function from Foo\Bar\Baz
```
## Basics
```php
declare(strict_types=1); # activates type checking
# single line comment
//single line comment
/* multi line comment */
```
### Screen Output
```php
echo "string"; # string output
echo 'string\n'; # raw string output
printf("format", $variables); # formatted output of strings and variables
sprintf("format", $variables); # return formatted string
```
### User Input
```php
$var = readline("prompt");
# if readline is not installed
if (!function_exists('readline')) {
function readline($prompt)
{
$fh = fopen('php://stdin', 'r');
echo $prompt;
$userInput = trim(fgets($fh));
fclose($fh);
return $userInput;
}
}
```
## Variables
```php
$variableName = value; # weakly typed
echo gettype($variable); # output type of variable
var_dump($var); # prints info of variable (bit dimension, type & value)
```
### Integers
```php
$max = PHP_INT_MAX; # max value for int type -> 9223372036854775807
$min = PHP_INT_MIN; # min value for int type -> -9223372036854775808
$bytes = PHP_INT_SIZE; # bytes for int type -> 8
$num = 255; # decimal
$num = 0b11111111; # binary
$num = 0377; # octal
$num = 0xff; # hexadecimal
```
### Double
```php
$a = 1.234; // 1.234
$b = 1.2e3; // 1200
$c = 7E-10; // 0.0000000007
```
### Mathematical Operators
| Operator | Operation |
| -------- | -------------- |
| `+`      | Addition       |
| `-` | Subtraction |
| `*` | Multiplication |
| `/` | Division |
| `%` | Modulo |
| `**` | Exponentiation |
| `var++` | Post Increment |
| `++var` | Pre Increment |
| `var--` | Post Decrement |
| `--var` | Pre Decrement |
### Mathematical Functions
- `sqrt($x)`
- `sin($x)`
- `cos($x)`
- `log($x)`
- `round($x)`
- `floor($x)`
- `ceil($x)`
## Strings
A string is a sequence of ASCII characters. In PHP a string is an array of characters.
### String Concatenation
```php
$string1 . $string2; # method 1
$string1 .= $string2; # method 2
```
### String Functions
```php
strlen($string); # returns the string length
strpos($string, 'substring'); # position of substring in string
substr($string, start, len); # extract substring of len from position start
strtoupper($string); # transform to uppercase
strtolower($string); # transform to lowercase
explode(delimiter, string); # return array of substrings
$var = sprintf("format", $variables) # construct and return a formatted string
```
## Constants
```php
define ('CONSTANT_NAME', 'value')
```
### Magic Constants `__NAME__`
- `__FILE__`: script path + script filename
- `__DIR__`: file directory
- `__LINE__`: current line number
- `__FUNCTION__`: the function name, or {closure} for anonymous functions.
- `__CLASS__`: name of the class
- `__NAMESPACE__`: the name of the current namespace.
## Array
Heterogeneous sequence of values.
```php
$array = array(sequence_of_items); # array declaration and initialization
$array = [sequence_of_items]; # array declaration and initialization
# note: unlike Python, a negative index is just another key, it does NOT count from the end
$array[index]; # access to the items
$array[index] = value; # array assignment (can add items)
$array[] = value; # value appending
```
### Multi Dimensional Array (Matrix)
Array of arrays. It can have an arbitrary number of nested arrays.
```php
$matrix = [
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
];
```
### Array Printing
A single instruction to print a whole array is `print_r()`.
```php
$array = [1, 2, 3];
print_r($array); # print all the array values
```
### Array Functions
```php
count($array); # returns number of items in the array
array_sum($array) # sum of the array value
sort($array); # quick sort
in_array($item, $array); // check if item is in the array
array_push($array, $item); // append item to the array
unset($array[index]); # item (or variable) deletion
# array navigation
current();
key();
next();
prev();
reset();
end();
# sorting
sort($array, $sort_flags="SORT_REGULAR");
array_values($array); # regenerates the array fixing the indexes
list($array1 [, $array2, ...]) = $data; # Python-like tuple unpacking
```
### Associative Arrays
Associative arrays use arbitrary keys (e.g. strings) instead of numeric indexes. Alternative names are _hash tables_ or _dictionaries_.
```php
$italianDay = [
'Mon' => 'Lunedì',
'Tue' => 'Martedì',
'Wed' => 'Mercoledì',
'Thu' => 'Giovedì',
'Fri' => 'Venerdì',
'Sat' => 'Sabato',
'Sun' => 'Domenica'
];
$italianDay["Mon"]; # evaluates to Lunedì
```
## Conditional Instructions
### Conditional Operators
| Operator | Operation |
| ----------- | ------------------------ |
| $a `==` $b | value equality |
| $a `===` $b | value & type equality |
| $a `!=` $b | value inequality |
| $a `<>` $b | value inequality |
| $a `!==` $b | value or type inequality |
| $a `<` $b | less than |
| $a `>` $b | greater than |
| $a `<=` $b | less or equal to |
| $a `>=` $b | greater or equal to |
| $a `<=>` $b | spaceship operator |
With `==`, before PHP 8 a non-numeric string compared against a number was converted to `0` (so `"abc" == 0` was `true`); PHP 8 compares them as strings instead.
### Logical Operators
| Operator | Example | Result |
| -------- | ----------- | ---------------------------------------------------- |
| `and` | `$a and $b` | TRUE if both `$a` and `$b` are TRUE. |
| `or` | `$a or $b` | TRUE if either `$a` or `$b` is TRUE. |
| `xor` | `$a xor $b` | TRUE if either `$a` or `$b` is TRUE, but _not both_. |
| `not` | `!$a` | TRUE if `$a` is _not_ TRUE. |
| `and` | `$a && $b` | TRUE if both `$a` and `$b` are TRUE. |
| `or`     | `$a \|\| $b` | TRUE if either `$a` or `$b` is TRUE.                 |
### Ternary Operator
```php
condition ? result_if_true : result_if_false;
condition ?: result_if_false;
```
### NULL Coalesce
```php
$var1 = $var2 ?? value; # if variable == NULL assign value, otherwise return value of $var2
# equivalent to
$var1 = isset($var2) ? $var2 : value
```
### Spaceship Operator
```php
$a <=> $b;
# equivalent to
if $a > $b
return 1;
if $a == $b
return 0;
if $a < $b
return -1;
```
### `If` - `Elseif` - `Else`
```php
if (condition) {
# code here
} elseif (condition) {
# code here
} else {
# code here
}
if (condition) :
# code here
elseif (condition):
# code here
else:
# code here
endif;
```
### Switch Case
```php
# weak comparison
switch ($var) {
case value:
# code here
break;
default:
# code here
}
# strong comparison
switch (true) {
case $var === value:
# code here
break;
default:
# code here
}
```
### Match Expression (PHP 8)
`match` can return values, doesn't require break statements, can combine conditions, uses strict type comparisons and doesn't do any type coercion.
```php
$result = match($input) {
0 => "hello",
'1', '2', '3' => "world",
};
```
## Loops
### For, Foreach
```php
for (init; condition; increment) {
# code here
}
for (init; condition; increment):
# code here
endfor;
foreach($sequence as $item) {
# code here
}
foreach($sequence as $item):
# code here
endforeach;
# foreach on dicts
foreach($sequence as $key => $value) {
# code here
}
```
### While, Do-While
```php
while (condition) {
# code here
}
while (condition):
# code here
endwhile;
do {
# code here
} while (condition);
```
### Break, Continue, exit()
`break` stops the iteration.
`continue` skips one cycle of the iteration.
`exit()` terminates the execution of any PHP code.
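A minimal sketch of the difference:
```php
foreach ([1, 2, 3, 4, 5] as $n) {
    if ($n === 2) {
        continue; # skip only this iteration
    }
    if ($n === 4) {
        break; # stop the whole loop
    }
    echo $n; # prints 1 and 3
}
```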
## Functions
[Function Docstring](https://make.wordpress.org/core/handbook/best-practices/inline-documentation-standards/php/)
Parameters with default values are optional in the function call and must be the last ones in the function declaration. Return type is optional if type checking is disabled.
```php
declare(strict_types=1); # activates type checking
/**
* Summary.
*
* Description.
*
* @since x.x.x
*
* @see Function/method/class relied on
* @link URL
* @global type $varname Description.
* @global type $varname Description.
*
* @param type $var Description.
* @param type $var Optional. Description. Default.
* @return type Description.
*/
function functionName (type $parameter, $optionalParameter = default_value): Type
{
# code here
return <expression>;
}
```
### Void function
```php
function functionName (type $parameter, $optionalParameter = default_value): Void
{
# code here
}
```
### Passing a parameter by reference (`&$`)
```php
function functionName (type &$parameter): Type
{
# code here
return <expression>;
}
```
### Variable number of parameters, variadic operator (`...`)
```php
function functionName (type $parameter, ...$args): Type
function functionName (type $parameter, type ...$args): Type
{
# code here
return <expression>;
}
```
### Nullable parameters
```php
function functionName (?type $parameter): ?Type
{
# code here
return <expression>;
}
```
## Anonymous Functions (Closure)
```php
# declaration and assignment to variable
$var = function (type $parameter) {
# code here
};
$var($arg);
```
### Use Operator
```php
# use imports a variable into the closure
$foo = function (type $parameter) use ($average) {
# code here
}
```
### Union Types (PHP 8)
**Union types** are a collection of two or more types which indicate that _either_ one of those _can be used_.
```php
public function foo(Foo|Bar $input): int|float;
```
### Named Arguments (PHP 8)
Named arguments allow to pass in values to a function, by specifying the value name, to avoid taking their order into consideration.
It's also possible to skip optional parameters.
```php
function foo(string $a, string $b, ?string $c = null, ?string $d = null) { /* … */ }
foo(
b: 'value b',
a: 'value a',
d: 'value d',
);
```
## Object Oriented Programming
### Scope & Visibility
`public` methods and attributes are visible to anyone (_default_).
`private` methods and attributes are visible only inside the class in which are declared.
`protected` methods and attributes are visible only to child classes.
`final` classes cannot be extended.
### Class Declaration & Instantiation
```php
# case insensitive
class ClassName
{
const CONSTANT = value; # public by default
public $attribute; # null by default if not assigned
public Type $attribute; # specifying the type is optional, it will be enforced if present
# class constructor
public function __construct($value)
{
$this->attribute = $value;
}
public function getAttribute(): Type
{
return $this->attribute;
}
public function func(): Type
{
# code here
}
}
$object = new ClassName; # case insensitive (CLASSNAME, ClassName, classname)
$object->attribute = value;
$object->func();
$object::CONSTANT;
$var = $object; # copy by reference
$var = clone $object; # copy by value
$object instanceof ClassName; // check the type of the object
```
### Static classes, attributes & methods
Inside static methods it's impossible to use `$this`.
A static variable is unique for the class and all instances.
```php
class ClassName {
public static $var;
public static function func(){
//code here
}
public static function other_func(){
//code here
self::func();
}
}
ClassName::func(); // use static function
$obj = new ClassName();
$obj::$var; // access to the static variable
```
### [Dependency Injection](https://en.wikipedia.org/wiki/Dependency_injection)
Parameters of the dependency can be modified before passing the required class to the constructor.
```php
class ClassName
{
private $dependency;
public function __construct(ClassName $requiredClass)
{
$this->dependency = $requiredClass; # necessary class is passed to the constructor
}
}
```
### Inheritance
If a class is defined `final` it can't be extended.
If a function is declared `final` it can't be overridden.
```php
class Child extends Parent
{
public function __construct() {
parent::__construct(); # call parent's method
}
}
```
### Abstract Class
Abstract classes cannot be instantiated.
```php
abstract class ClassName
{
# code here
}
```
### Interface
An interface is a "contract" that defines what methods the implementing classes **must** have and implement.
A class can implement multiple interfaces, but there must be no methods in common between the interfaces, to avoid ambiguity.
```php
interface InterfaceName {
// it is possible to define __construct
// function has no body; must be implements in the class that uses the interface
public function functionName (parameters) : Type; // MUST be public
}
class ClassName implements InterfaceName {
public function functionName(parameters) : Type {
//implementation here
}
}
```
### Traits
**Traits** allow the reuse of code across different classes without an inheritance relationship.
They can be used to mitigate the absence of _multiple inheritance_ in PHP.
In case of functions name conflict it's possible to use `insteadof` to specify which function to use. It's also possible to use an _alias_ to resolve the conflicts.
```php
trait TraitName {
// code here
}
class ClassName {
# multiple traits can be used; name conflicts are resolved in the adaptation block
use TraitName, OtherTrait {
    TraitName::func insteadof OtherTrait;
    OtherTrait::func as alias;
}
# code here
}
```
### Anonymous Classes
```php
$obj = new ClassName;
$obj->method(new class implements Interface {
public function InterfaceFunc() {
// code here
}
});
```
## Serialization & JSON
```php
$serialized = serialize($obj); # serialization
$obj = unserialize($serialized); # de-serialization
$var = json_decode(string $json, bool $associative); # Takes a JSON encoded string and converts it into a PHP variable.
$json = json_encode($value); # Returns a string containing the JSON representation of the supplied value.
```
## Files
### Read/Write on Files
```php
file(filename); // return file lines in an array
// problematic with large files (allocates memory to read all file, can fill RAM)
file_put_contents(filename, data); // write whole file
file_get_contents(filename); // read whole file
```
## Regular Expressions
```php
preg_match('/PATTERN/', string $subject, array $matches); # returns 1 if the pattern matches given subject, 0 if it does not, or FALSE if an error occurred
# $matches[0] = whole matched string
# $matches[i] = i-th group of the regex
```
## Hashing
Supported hashing algorithms:
- `md2`, `md4`, `md5`
- `sha1`, `sha224`, `sha256`, `sha384`, `sha512/224`, `sha512/256`, `sha512`
- `sha3-224`, `sha3-256`, `sha3-384`, `sha3-512`
- `ripemd128`, `ripemd160`, `ripemd256`, `ripemd320`
- `whirlpool`
- `tiger128,3`, `tiger160,3`, `tiger192,3`, `tiger128,4`, `tiger160,4`, `tiger192,4`
- `snefru`, `snefru256`
- `gost`, `gost-crypto`
- `adler32`
- `crc32`, `crc32b`, `crc32c`
- `fnv132`, `fnv1a32`, `fnv164`, `fnv1a64`
- `joaat`
- `haval128,3`, `haval160,3`, `haval192,3`, `haval224,3`, `haval256,3`, `haval128,4`, `haval160,4`, `haval192,4`, `haval224,4`, `haval256,4`, `haval128,5`, `haval160,5`, `haval192,5`, `haval224,5`, `haval256,5`
```php
hash($algorithm, $data);
```
### Password Hashes
`password_hash()` is compatible with `crypt()`. Therefore, password hashes created by `crypt()` can be used with `password_hash()`.
Algorithms currently supported:
- **PASSWORD_DEFAULT** - Use the _bcrypt_ algorithm (default as of PHP 5.5.0). Note that this constant is designed to change over time as new and stronger algorithms are added to PHP.
- **PASSWORD_BCRYPT** - Use the **CRYPT_BLOWFISH** algorithm to create the hash. This will produce a standard `crypt()` compatible hash using the "$2y$" identifier. The result will always be a 60 character string, or FALSE on failure.
- **PASSWORD_ARGON2I** - Use the **Argon2i** hashing algorithm to create the hash. This algorithm is only available if PHP has been compiled with Argon2 support.
- **PASSWORD_ARGON2ID** - Use the **Argon2id** hashing algorithm to create the hash. This algorithm is only available if PHP has been compiled with Argon2 support.
**Supported options for PASSWORD_BCRYPT**:
- **salt** (string) - to manually provide a salt to use when hashing the password. Note that this will override and prevent a salt from being automatically generated.
If omitted, a random salt will be generated by password_hash() for each password hashed. This is the intended mode of operation.
**Warning**: The salt option has been deprecated as of PHP 7.0.0. It is now preferred to simply use the salt that is generated by default.
- **cost** (integer) - which denotes the algorithmic cost that should be used. Examples of these values can be found on the crypt() page.
If omitted, a default value of 10 will be used. This is a good baseline cost, but you may want to consider increasing it depending on your hardware.
**Supported options for PASSWORD_ARGON2I and PASSWORD_ARGON2ID**:
- **memory_cost** (integer) - Maximum memory (in kibibytes) that may be used to compute the Argon2 hash. Defaults to PASSWORD_ARGON2_DEFAULT_MEMORY_COST.
- **time_cost** (integer) - Maximum amount of time it may take to compute the Argon2 hash. Defaults to PASSWORD_ARGON2_DEFAULT_TIME_COST.
- **threads** (integer) - Number of threads to use for computing the Argon2 hash. Defaults to PASSWORD_ARGON2_DEFAULT_THREADS.
```php
password_hash($password, $algorithm); # create a new password hash using a strong one-way hashing algorithm.
password_verify($password, $hash); # Verifies that a password matches a hash
```
## Errors
Types of PHP errors:
- **Fatal Error**: stop the execution of the program.
- **Warning**: generated at runtime, does not stop the execution (non-blocking).
- **Notice**: informative errors or messages, non-blocking.
```php
$a = new StdClass();
$a->foo(); // PHP Fatal Error: foo() does not exist
```
```php
$a = 0;
echo 1/$a; // PHP Warning: Division by zero
```
```php
echo $a; // PHP Notice: $a undefined
```
### Error Reporting
[PHP Error Constants](https://www.php.net/manual/en/errorfunc.constants.php)
It's possible to configure PHP to report only some types of errors. Errors below a certain level are ignored.
```php
error_reporting(E_<type>); // set error report threshold (for log file)
// does not disable PARSER ERROR
ini_set("display_errors", 0); // don't display any errors on stderr
ini_set("error_log", "path\\error.log"); // set log file
```
### Triggering Errors
```php
// generate E_USER_ errors
trigger_error("message"); // default type: E_USER_NOTICE
trigger_error("message", E_USER_<Type>);
trigger_error("Deprecated Function", E_USER_DEPRECATED);
```
### [Writing in the Log File](https://www.php.net/manual/en/function.error-log.php)
It's possible to use log files unrelated to the php log file.
```php
error_log("message", 3, "path\\log.log"); // write log message to a specified file
//example
error_log(sprintf("[%s] Error: _", date("Y-m-d h:i:s")), 3, "path\\log.log")
```
## Exception Handling
PHP offers the possibility to handle errors with the _exception model_.
```php
try {
// dangerous code
} catch(ExceptionType1 | ExceptionType2 $e) {
printf("Error: %s", $e->getMessage());
} catch(Exception $e) {
// handle or report exception
}
throw new ExceptionType("message"); // throw an exception
```
All exceptions in PHP implement the interface `Throwable`.
```php
Interface Throwable {
abstract public string getMessage ( void )
abstract public int getCode ( void )
abstract public string getFile ( void )
abstract public int getLine ( void )
abstract public array getTrace ( void )
abstract public string getTraceAsString ( void )
abstract public Throwable getPrevious ( void )
abstract public string __toString ( void )
}
```
### Custom Exceptions
```php
/**
* Define a custom exception class
*/
class CustomException extends Exception
{
// Redefine the exception so message isn't optional
public function __construct($message, $code = 0, Exception $previous = null) {
// some code
// make sure everything is assigned properly
parent::__construct($message, $code, $previous);
}
// custom string representation of object
public function __toString() {
return __CLASS__ . ": [{$this->code}]: {$this->message}\n";
}
public function customFunction() {
echo "A custom function for this type of exception\n";
}
}
```
# Templates with Plates
## Template
To separate HTML code and PHP code it's possible to use **templates** with markers for variable substitution.
Variables are created in the PHP code and are passed to the template in the **rendering** phase.
The server transmits only the `index.php` file to the user. The php file renders the templates as needed.
```html
<html>
<head>
<title><?= $this->e($title)?></title>
</head>
<body>
<?= $this->section('content')?>
</body>
</html>
```
## [Plates](https://platesphp.com/)
Plates is a template engine for PHP. A template engine permits to separate the PHP code (business logic) from the HTML pages.
Installation through composer: `composer require league/plates`.
```php
# index.php
require "vendor/autoload.php";
use League\Plates\Engine;
$templates = new Engine("path\\to\\templates");
echo $templates->render("template_name", [
"key_1" => "value_1",
"key_2" => "value_2"
]);
```
```php
# template.php
<html>
<head>
<title><?= $key_1?></title>
</head>
<body>
<h1>Hello <?= $key_2 ?></h1>
</body>
</html>
```
Variables in the template are created through an associative array `key => value`. The key (`"key"`) becomes a variable (`$key`) in the template.
### Layout
It's possible to create a layout (a model) for a group of pages so that they are identical save for the contents.
In a layout it's possible to create a section called **content** that identifies content that can be specified at runtime.
**NOTE**: Since only the template receives the passed data, any loops over it have to be done there.
```php
# index.php
require 'vendor/autoload.php';
use League\Plates\Engine;
$template = new Engine('/path/to/templates');
echo $template->render('template_name', [ "marker" => value, ... ]);
```
```php
# template.php
# set the layout used for this template
<?php $this->layout("layout_name", ["marker" => value, ...]) ?> # pass values to the layout
# section contents
<p> <?= $this->e($marker) ?> </p>
```
```php
# layout.php
<html>
<head>
<title><?= $marker ?></title>
</head>
<body>
<?= $this->section('content')?> # insert the section
</body>
</html>
```
### Escape
It's necessary to escape the output so that it is rendered in the expected format.
Plates offers `$this->escape()` or `$this->e()` to escape the output.
In general, escaping the output helps prevent [Cross-Site Scripting][owasp-xss] (XSS).
[owasp-xss]: https://owasp.org/www-community/attacks/xss/
### Folders
```php
# index.php
$templates->addFolder("alias", "path/to/template/folder"); # add a template folder
echo $templates->render("folder::template"); # use a template in a specific folder
```
### Insert
It's possible to inject templates in a layout or template. It is done by using the `insert()` function.
```php
# layout.php
<html>
<head>
<title><?=$this->e($title)?></title>
</head>
<body>
<?php $this->insert('template::header') ?> # insert template
<?= $this->section('content')?> # page contents
<?php $this->insert('template::footer') ?> # insert template
</body>
</html>
```
### Sections
It's possible to insert page content from another template with the `section()` function.
The content to be inserted must be surrounded by the `start()` and `stop()` functions.
```php
# template.php
<?php $this->start("section_name") ?> # start section
# section contents (HTML)
<?php $this->stop() ?> # stop section
# append to the section if it exists, create it otherwise
<?php $this->push("section_name") ?>
# section contents (HTML)
<?php $this->end() ?>
```

docs/php/psr-7.md Normal file
# PSR-7
## [PSR-7](https://www.php-fig.org/psr/psr-7/)
Standard of the PHP Framework Interop Group that defines common interfaces for handling HTTP messages.
- `Psr\Http\Message\MessageInterface`
- `Psr\Http\Message\RequestInterface`
- `Psr\Http\Message\ResponseInterface`
- `Psr\Http\Message\ServerRequestInterface`
- `Psr\Http\Message\StreamInterface`
- `Psr\Http\Message\UploadedFileInterface`
- `Psr\Http\Message\UriInterface`
Example:
```php
// empty array if not found
$header = $request->getHeader('Accept');
// empty string if not found
$header = $request->getHeaderLine('Accept');
// check the presence of a header
if (! $request->hasHeader('Accept')) {}
// returns the parameters in a query string
$query = $request->getQueryParams();
```
### Immutability
PSR-7 requests are *immutable* objects; a change in the data will return a new instance of the object.
The stream objects of PSR-7 are *not immutable*.
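For example, changing a header returns a modified copy and leaves the original request untouched (a minimal sketch):

```php
$newRequest = $request->withHeader('Accept', 'application/json');

$request->getHeaderLine('Accept');    // original value
$newRequest->getHeaderLine('Accept'); // 'application/json'
```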

# REST API with Simple-MVC
## Routing (Example)
```php
// config/route.php
return [
[ 'GET', '/api/user[/{id}]', Controller\User::class ],
[ 'POST', '/api/user', Controller\User::class ],
[ 'PATCH', '/api/user/{id}', Controller\User::class ],
[ 'DELETE', '/api/user/{id}', Controller\User::class ]
];
```
## Controller (Example)
```php
class UserController implements ControllerInterface
{
    private UserModel $userModel;

    public function __construct(UserModel $user)
    {
        $this->userModel = $user;
// Set the Content-type for all the HTTP methods
header('Content-type: application/json');
}
// method dispatcher
public function execute(ServerRequestInterface $request)
{
$method = strtolower($request->getMethod());
if (!method_exists($this, $method)) {
            http_response_code(405); // method does not exist
return;
}
$this->$method($request);
}
public function get(ServerRequestInterface $request)
{
$id = $request->getAttribute('id');
try {
$result = empty($id)
? $this->userModel->getAllUsers()
: $this->userModel->getUser($id);
} catch (UserNotFoundException $e) {
http_response_code(404); // user not found
$result = ['error' => $e->getMessage()];
}
echo json_encode($result);
}
public function post(ServerRequestInterface $request)
{
$data = json_decode($request->getBody()->getContents(), true);
try {
$result = $this->userModel->addUser($data);
} catch (InvalidAttributeException $e) {
http_response_code(400); // bad request
$result = ['error' => $e->getMessage()];
} catch (UserAlreadyExistsException $e) {
            http_response_code(409); // conflict, the user already exists
$result = ['error' => $e->getMessage()];
}
echo json_encode($result);
}
public function patch(ServerRequestInterface $request)
{
$id = $request->getAttribute('id');
$data = json_decode($request->getBody()->getContents(), true);
try {
$result = $this->userModel->updateUser($data, $id);
} catch (InvalidAttributeException $e) {
http_response_code(400); // bad request
$result = ['error' => $e->getMessage()];
} catch (UserNotFoundException $e) {
http_response_code(404); // user not found
$result = ['error' => $e->getMessage()];
}
echo json_encode($result);
}
public function delete(ServerRequestInterface $request)
{
$id = $request->getAttribute('id');
try {
$this->userModel->deleteUser($id);
$result = ['result' => 'success'];
} catch (UserNotFoundException $e) {
http_response_code(404); // user not found
$result = ['error' => $e->getMessage()];
}
echo json_encode($result);
}
}
```
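The domain exceptions caught above are not PHP built-ins; a minimal sketch of how the model might define them (names taken from the controller example):

```php
// hypothetical domain exceptions thrown by the UserModel
class UserNotFoundException extends Exception {}
class UserAlreadyExistsException extends Exception {}
class InvalidAttributeException extends Exception {}
```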

# [SimpleMVC](https://github.com/ezimuel/simplemvc) Mini-Framework
SimpleMVC is a micro MVC framework for PHP using [FastRoute][fastroute], [PHP-DI][php-di], [Plates][plates] and the [PSR-7][psr7] standard for HTTP messages.
This framework is mainly used as a tutorial for introducing the Model-View-Controller architecture in modern PHP applications.
[php-di]: https://php-di.org/
[fastroute]: https://github.com/nikic/FastRoute
[psr7]:https://github.com/Nyholm/psr7
[plates]: https://platesphp.com/
## Installation
```ps1
composer create-project ezimuel/simple-mvc
```
## Structure
```txt
|- config
| |- container.php --> DI Container Config (PHP-DI)
| |- route.php --> routing
|- public
| |- img
| |- index.php --> app entry-point
|- src
| |- Model
| |- View --> Plates views
| |- Controller --> ControllerInterface.php
|- test
| |- Model
| |- Controller
```
### `index.php`
```php
<?php
declare(strict_types=1);
chdir(dirname(__DIR__));
require 'vendor/autoload.php';
use DI\ContainerBuilder;
use FastRoute\Dispatcher;
use FastRoute\RouteCollector;
use Nyholm\Psr7\Factory\Psr17Factory;
use Nyholm\Psr7Server\ServerRequestCreator;
use SimpleMVC\Controller\Error404;
use SimpleMVC\Controller\Error405;
$builder = new ContainerBuilder();
$builder->addDefinitions('config/container.php');
$container = $builder->build();
// Routing
$dispatcher = FastRoute\simpleDispatcher(function(RouteCollector $r) {
$routes = require 'config/route.php';
foreach ($routes as $route) {
$r->addRoute($route[0], $route[1], $route[2]);
}
});
// Build the PSR-7 server request
$psr17Factory = new Psr17Factory();
$creator = new ServerRequestCreator(
$psr17Factory, // ServerRequestFactory
$psr17Factory, // UriFactory
$psr17Factory, // UploadedFileFactory
$psr17Factory // StreamFactory
);
$request = $creator->fromGlobals();
// Dispatch
$routeInfo = $dispatcher->dispatch(
$request->getMethod(),
$request->getUri()->getPath()
);
switch ($routeInfo[0]) {
case Dispatcher::NOT_FOUND:
$controllerName = Error404::class;
break;
case Dispatcher::METHOD_NOT_ALLOWED:
$controllerName = Error405::class;
break;
case Dispatcher::FOUND:
$controllerName = $routeInfo[1];
if (isset($routeInfo[2])) {
foreach ($routeInfo[2] as $name => $value) {
$request = $request->withAttribute($name, $value);
}
}
break;
}
$controller = $container->get($controllerName);
$controller->execute($request);
```
### `route.php`
```php
<?php
use SimpleMVC\Controller;
return [
[ 'GET', '/', Controller\Home::class ],
[ 'GET', '/hello[/{name}]', Controller\Hello::class ],
[ "HTTP Verb", "/route[/optional]", Controller\EndpointController::class ]
];
```
### `container.php`
```php
<?php
use League\Plates\Engine;
use Psr\Container\ContainerInterface;
return [
'view_path' => 'src/View',
Engine::class => function(ContainerInterface $c) {
return new Engine($c->get('view_path'));
}
// PHP-DI configs
];
```
### `ControllerInterface.php`
Each controller *must* implement this interface.
```php
<?php
declare(strict_types=1);
namespace SimpleMVC\Controller;
use Psr\Http\Message\ServerRequestInterface;
interface ControllerInterface
{
public function execute(ServerRequestInterface $request);
}
```
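A possible controller built on this interface (a sketch, assuming PHP 8 constructor promotion and a `src/View/hello.php` Plates template; the class is illustrative, not part of the framework skeleton):

```php
<?php

declare(strict_types=1);

namespace SimpleMVC\Controller;

use League\Plates\Engine;
use Psr\Http\Message\ServerRequestInterface;

class Hello implements ControllerInterface
{
    // the Plates Engine is injected by PHP-DI (see container.php)
    public function __construct(private Engine $plates)
    {
    }

    public function execute(ServerRequestInterface $request)
    {
        // "name" comes from the route /hello[/{name}]
        echo $this->plates->render('hello', [
            'name' => $request->getAttribute('name', 'World')
        ]);
    }
}
```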

docs/php/unit-tests.md Normal file
# PHP Unit Test
## Installation & Configuration
### Dev-Only Installation
```ps1
composer require --dev phpunit/phpunit
```
```json
"require-dev": {
"phpunit/phpunit": "<version>"
}
```
### Config
PHPUnit can be configured in an XML file called `phpunit.xml`:
```xml
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="vendor/phpunit/phpunit/phpunit.xsd"
bootstrap="vendor/autoload.php"
colors="true">
<testsuites>
        <testsuite name="App\\Tests">
            <directory>./test</directory>
        </testsuite>
</testsuites>
<filter>
<whitelist processUncoveredFilesFromWhitelist="true">
<directory suffix=".php">./src</directory>
</whitelist>
</filter>
</phpunit>
```
## Testing
### Test Structure
**PHPUnit** tests are grouped in classes suffixed with `Test`. Each class *extends* `PHPUnit\Framework\TestCase`.
A test is a method of a *test class* prefixed with `test`.
PHPUnit is executed from the command line with `vendor/bin/phpunit --colors`.
```php
namespace App;
class Filter
{
public function isEmail(string $email): bool
{
// @todo implement
}
}
```
```php
namespace App\Test;
use PHPUnit\Framework\TestCase;
use App\Filter;
class FilterTest extends TestCase
{
public function testValidMail()
{
$filter = new Filter();
$this->assertTrue($filter->isEmail("foo@bar.com"));
}
public function testInvalidEmail()
{
$filter = new Filter();
        $this->assertFalse($filter->isEmail("foo"));
}
}
```
### [PHPUnit Assertions](https://phpunit.readthedocs.io/en/9.3/assertions.html)
- `assertTrue()`: verifies that the element is true
- `assertFalse()`: verifies that the element is false
- `assertEmpty()`: verifies that the element is empty
- `assertEquals()`: verifies that the two elements are equal
- `assertGreaterThan()`: verifies that the element is greater than ...
- `assertContains()`: verifies that the element is contained in an array
- `assertInstanceOf()`: verifies that the element is an instance of a specific class
- `assertArrayHasKey(mixed $key, array $array)`: verifies that a specific key is in the array
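A brief sketch exercising a few of the assertions above inside a `TestCase` (the tested values are illustrative):

```php
public function testAssortedAssertions()
{
    $data = ["a" => 1, "b" => 2];

    $this->assertTrue(isset($data["a"]));
    $this->assertFalse(empty($data));
    $this->assertEquals(2, $data["b"]);
    $this->assertGreaterThan(1, $data["b"]);
    $this->assertContains(1, $data);
    $this->assertArrayHasKey("b", $data);
    $this->assertInstanceOf(TestCase::class, $this);
}
```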
### [PHPUnit Testing Exceptions](https://phpunit.readthedocs.io/en/9.3/writing-tests-for-phpunit.html#testing-exceptions)
```php
public function testAggiungiEsameException(string $esame)
{
$this->expectException(Exception::class);
$this->expectExceptionMessage("exception_message");
// execute code that should throw an exception
}
// https://github.com/sebastianbergmann/phpunit/issues/2484#issuecomment-648822531
public function testExceptionNotThrown()
{
$exceptionWasThrown = false;
try
{
// code that should succeed
}
catch (EsameException $e)
{
$exceptionWasThrown = true;
}
$this->assertFalse($exceptionWasThrown);
}
// same as
/**
* @doesNotPerformAssertions
*/
public function testNoExceptions(string $esame)
{
// code that should succeed (exceptions will make the test fail)
}
```
### Test Setup & Teardown (Example)
```php
class ClassTest extends TestCase
{
// initialize the test
public function setUp(): void
{
        file_put_contents("/tmp/foo", "Test");
}
// reset the test
public function tearDown(): void
{
        unlink("/tmp/foo");
}
public function testFoo()
{
// use temp file
}
}
```
**NOTE**: `setUp()` and `tearDown()` are called *before* and *after* each test method.
### Data Provider
```php
class DataTest extends TestCase
{
/**
* @dataProvider provider
*/
public function testAdd($a, $b, $expected)
{
$this->assertEquals($expected, $a + $b);
}
// test receives array contents as input
public function provider()
{
// must return array of arrays
return [
[0, 0, 0],
[0, 1, 1]
];
}
// test receives array of arrays as input
public function provideArrayOfArrays()
{
return [
[
[
[0, 0, 0],
[0, 1, 1]
]
]
];
}
}
```
### Mock Objects
```php
class UnitTest extends TestCase
{
public function setUp()
{
// names of mock are independent from tested class variables
$this->mock = $this->createMock(ClassName::class); // create a mock object of a class
$this->returned = $this->createMock(ClassName::class); // mock of returned object
$this->mock->method("methodName") // simulate method on mock
->with($this->equalTo(param), ...) // specify input params (one param per equalTo)
->willReturn($this->returned); // specify return value
}
public function testMethod()
{
$this->mock
->method("methodName")
->with($this->equalTo($arg)) // arg passed to the method
            ->willReturn($value); // actual return value for THIS case
        // or, to have the mocked method throw:
        // ->will($this->throwException(new Exception()));
// assertions
}
}
```
### Code Coverage (needs [XDebug](https://xdebug.org/))
```ps1
vendor/bin/phpunit --coverage-text # code coverage analysis in the terminal
```

docs/php/web.md Normal file
# PHP for the Web
## PHP Internal Web Server
Command-line web server built into PHP, useful in the testing phase. Limited, since it handles only one request at a time. **Do not use in production**.
```ps1
php -S <ip:port> # start the web server
php -S <ip:port> -t /path/to/folder # serve the specified folder at the specified address
php -S <ip:port> file.php # redirect all requests to a single file
```
## HTTP Methods
Handling of HTTP requests happens using the following global variables:
- `$_SERVER`: info on request headers, version, URL path and method (dict)
- `$_GET`: parameters of get request (dict)
- `$_POST`: parameters of post request (dict)
- `$_COOKIE`: cookies sent by the client (dict)
- `$_FILES`: files uploaded to the web app (dict)
### `$_FILES`
```html
<!-- method MUST BE post -->
<!-- must have enctype="multipart/form-data" attribute -->
<form name="<name>" action="file.php" method="POST" enctype="multipart/form-data">
<input type="file" name="photo" />
<input type="submit" name="Send" />
</form>
```
Files in `$_FILES` are stored in a system temp folder. They can be moved with `move_uploaded_file()`.
```php
if (! isset($_FILES['photo']['error'])) {
    http_response_code(400); # send a response code
    echo '<h1>No file has been sent</h1>';
    exit();
}
if ($_FILES['photo']['error'] != UPLOAD_ERR_OK) {
    http_response_code(400);
    echo '<h1>The sent file is invalid</h1>';
    exit();
}
$path = '/path/to/' . $_FILES['photo']['name'];
if (! move_uploaded_file($_FILES['photo']['tmp_name'], $path)) {
    http_response_code(400);
    echo '<h1>Error while writing the file</h1>';
    exit();
}
echo '<h1>File successfully sent</h1>';
```
### `$_SERVER`
Request Header Access:
```php
$_SERVER["REQUEST_METHOD"];
$_SERVER["REQUEST_URI"];
$_SERVER["SERVER_PROTOCOL"]; // HTTP Versions
$_SERVER["HTTP_ACCEPT"];
$_SERVER["HTTP_ACCEPT_ENCODING"];
$_SERVER["HTTP_CONNECTION"];
$_SERVER["HTTP_HOST"];
$_SERVER["HTTP_USER_AGENT"];
// others
```
### `$_COOKIE`
[Cookie Laws](https://www.iubenda.com/it/cookie-solution)
[Garante Privacy 8/5/2014](http://www.privacy.it/archivio/garanteprovv201405081.html)
All sites **must** have a page asking for the user's consent to the use of cookies.
**Cookies** are HTTP headers used to store key-value info *on the client*. They are sent from the server to the client to keep track of info about the user visiting the website.
When a client receives an HTTP response that contains `Set-Cookie` headers, it has to store that info and reuse it in future requests.
```http
Set-Cookie: <cookie-name>=<cookie-value>
Set-Cookie: <cookie-name>=<cookie-value>; Expires=<date>
Set-Cookie: <cookie-name>=<cookie-value>; Max-Age=<seconds>
Set-Cookie: <cookie-name>=<cookie-value>; Domain=<domain-value>
Set-Cookie: <cookie-name>=<cookie-value>; Path=<path-value>
Set-Cookie: <cookie-name>=<cookie-value>; Secure
Set-Cookie: <cookie-name>=<cookie-value>; HttpOnly
```
Anyone can modify the contents of a cookie; for this reason cookies **must not contain** *personal or sensitive info*.
Once a client has stored a cookie, it is sent in subsequent HTTP requests through the `Cookie` header.
```http
Cookie: <cookie-name>=<cookie-value>
```
[PHP setcookie docs](https://www.php.net/manual/en/function.setcookie.php)
```php
setcookie (
string $name,
[ string $value = "" ],
[ int $expire = 0 ], // in seconds (time() + seconds)
[ string $path = "" ],
[ string $domain = "" ],
[ bool $secure = false ], // use https
[ bool $httponly = false ] // accessible only through http (no js, ...)
)
// example: memorize user-id 112 with 24h expiry for site example.com
setcookie ("User-id", "112", time() + 3600*24, "/", "example.com");
// check if a cookie exists
if(isset($_COOKIE["cookie_name"])) {}
```
### [$_SESSION](https://www.php.net/manual/en/ref.session.php)
**Sessions** are info stored *on the server* and associated with the client that makes an HTTP request.
PHP generates a cookie named `PHPSESSID` containing a *session identifier* and a *hash* generated from `IP + timestamp + pseudo-random number`.
To use sessions it's necessary to call `session_start()` at the beginning of every PHP script that deals with sessions.
After starting the session, information can be saved in the `$_SESSION` array.
```php
$_SESSION["key"] = value; // save data in session file (serialized data)
unset($_SESSION["key"]); // delete data from the session
session_unset(); # remove all session data
session_destroy(); # destroy all of the data associated with the current session.
# It does not unset any of the global variables associated with the session, or unset the session cookie.
```
Session data is stored in a file by *serializing* `$_SESSION`. The files are named `sess_PHPSESSID` and saved in a folder (`/var/lib/php/sessions` on Linux).
It's possible to change how PHP stores the serialized session data by:
- modifying `session.save_handler` in `php.ini`
- writing a custom handler with the function `session_set_save_handler()` and/or the class `SessionHandler`
## PHP Web Instructions
`http_response_code()` is used to return an HTTP response code. If no code is specified `200 OK` is returned.
`header("Location: /route")` is used to redirect to another URL.
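For example (a minimal sketch, the routes are illustrative):

```php
session_start();

if (!isset($_SESSION["user"])) {
    header("Location: /login"); // redirect unauthenticated users
    exit();
}

http_response_code(200); // explicit 200 OK
```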

# Powershell Commands
```ps1
Get-Location # Gets information about the current working location or a location stack
Set-Location -Path <path> # change current working directory to specified path (DEFAULTs to ~)
Get-ChildItem -Path <path> # Gets the items and child items in one or more specified locations.
Get-Content -Path <file> # Gets the content of the item at the specified location
Write-Output # Send specified objects to the next command in the pipeline. If the command is the last in the pipeline, the objects are displayed in the console
Write-Host # Writes customized output to a host.
Clear-Host # clear shell output
New-Item -ItemType File -Path filename.ext # create empty file
New-Item -Path folder_name -ItemType Directory # create a folder
New-Item -ItemType SymbolicLink -Path .\link -Target .\Notice.txt # create a symlink
Move-Item -Path <source> -Destination <target> # move and/or rename files and folders
Copy-Item -Path <source> -Destination <target> # copy (and rename) files and folders
Test-Path "path" -PathType Container # check if the path exists and is a folder
Test-Path "path" -PathType Leaf # check if the path exists and is a file
# start, list , kill processes
Start-Process -FilePath <file> # open a file with the default process/program
Get-Process # Gets the processes that are running on the local computer
Stop-Process [-Id] <System.Int32[]> [-Force] [-Confirm] # Stops one or more running processes
# network
Get-NetIPConfiguration # Gets IP network configuration
Test-NetConnection <ip> # Sends ICMP echo request packets, or pings, to one or more computers
# compressing into archive
Compress-Archive -LiteralPath <PathToFiles> -DestinationPath <PathToDestination> # destination can be a folder or a .zip file
Compress-Archive -Path <PathToFiles> -Update -DestinationPath <PathToDestination> # update existing archive
# extraction from archive
Expand-Archive -LiteralPath <PathToZipFile> -DestinationPath <PathToDestination>
Expand-Archive -LiteralPath <PathToZipFile> # extract archive in folder named after the archive in the root location
# filtering stdout/stder
Select-String -Path <source> -Pattern <pattern> # Finds text in strings and files
```

# PowerShell Scripting
Cmdlets are formed by a verb-noun pair and are case-insensitive.
**It's all .NET**: a PowerShell string is in fact a .NET `System.String`, so all .NET methods and properties are available.
Note that .NET methods MUST be called with parentheses, while PS functions CANNOT be called with parentheses: calling a cmdlet/PS function with parentheses is the same as passing a single parameter list.
## Screen Output
```ps1
Write-Host "message"
```
## User Input
```ps1
# Reading a value from input:
$variable = Read-Host "prompt"
```
## Variables
```ps1
# Declaration
[type]$var = value
$var = value -as [type]
[int]$a = 5
$b = 6 -as [double] # convert 6 to a double with the -as operator
# Here-string (multiline string)
@"
Here-string
$a + $b = ($a + $b)
"@
@'
Literal Here-string
'@
# Swapping
$a, $b = $b, $a
# Interpolation
Write-Host "text $variable" # single quotes will not interpolate
Write-Host (<expression>)
```
### Built-in Variables
```ps1
$True, $False # boolean
$null # empty value
$? # last program return value
$LastExitCode # Exit code of last run Windows-based program
$$ # The last token in the last line received by the session
$^ # The first token
$PID # Script's PID
$PSScriptRoot # Full path of current script directory
$MyInvocation.MyCommand.Path # Full path of current script
$Pwd # Full path of current directory
$PSBoundParameters # Bound arguments in a function, script or code block
$Args # Unbound arguments
. .\otherScriptName.ps1 # Inline another file (dot operator)
```
### Lists & Dictionaries
```ps1
$List = @(5, "ice", 3.14, $True) # Explicit syntax
$List = 2, "ice", 3.14, $True # Implicit syntax
$List = (1..10) # Inclusive range
$List = @() # Empty List
$String = $List -join 'separator'
$List = $String -split 'separator'
# List comprehensions
$List = sequence | Where-Object {$_ command} # $_ is current object
$Dict = @{"a" = "apple"; "b" = "ball"} # Dict definition
$Dict["a"] = "acorn" # Item update
# Loop through keys
foreach ($k in $Dict.keys) {
# Code here
}
```
## Flow Control
```ps1
if (condition) {
# Code here
} elseif (condition) {
# Code here
} else {
# Code here
}
```
### Switch
`Switch` has the following parameters:
- **Wildcard**: Indicates that the condition is a wildcard string. If the match clause is not a string, the parameter is ignored. The comparison is case-insensitive.
- **Exact**: Indicates that the match clause, if it is a string, must match exactly. If the match clause is not a string, this parameter is ignored. The comparison is case-insensitive.
- **CaseSensitive**: Performs a case-sensitive match. If the match clause is not a string, this parameter is ignored.
- **File**: Takes input from a file rather than a value statement. If multiple File parameters are included, only the last one is used. Each line of the file is read and evaluated by the Switch statement. The comparison is case-insensitive.
- **Regex**: Performs regular expression matching of the value to the condition. If the match clause is not a string, this parameter is ignored. The comparison is case-insensitive. The `$matches` automatic variable is available for use within the matching statement block.
```ps1
switch(variable) {
20 { "Exactly 20"; break }
{ $_ -eq 42 } { "The answer equals 42"; break }
{ $_ -like 's*' } { "Case insensitive"; break }
{ $_ -clike 's*'} { "clike, ceq, cne for case sensitive"; break }
{ $_ -notmatch '^.*$'} { "Regex matching. cnotmatch, cnotlike, ..."; break }
{ $list -contains 'x'} { "if a list contains an item"; break }
default { "Others" }
}
# syntax
switch [-regex|-wildcard|-exact][-casesensitive] (<value>)
{
"string"|number|variable|{ expression } { statement_list }
default { statement_list }
}
# or
switch [-regex|-wildcard|-exact][-casesensitive] -file filename
{
"string"|number|variable|{ expression } { statement_list }
default { statement_list }
}
```
### Loops
```ps1
# The classic for
for(setup; condition; iterator) {
# Code here
}
range | % { command } # % is an alias for ForEach-Object
foreach (item in iterable) {
# Code Here
}
while (condition) {
# Code here
}
do {
# Code here
} until (condition)
do {
# Code here
} while (condition)
```
### Operators
```ps1
# Conditionals
$a -eq $b # is equal to
$a -ne $b # in not equal to
$a -gt $b # greater than
$a -ge $b # greater than or equal to
$a -lt $b # less than
$a -le $b # less than or equal to
# Logical
$true -And $False
$True -Or $False
-Not $True
```
### Exception Handling
```ps1
try {} catch {} finally {}
try {} catch [System.NullReferenceException] {
echo $_.Exception | Format-List -Force
}
```
## [Functions](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_functions?view=powershell-7)
```ps1
function func() {}
# function with named parameters
function func ([type]$param=default_value, ...) { }
function func {
param([type]$param=default_value, ...)
# statements
}
# function call
func argument
func -param value
# switch parameters
function func ([switch]$param, ...) { }
func # param is $false
func -param # param is $true
```
If the function defines a `Begin`, `Process` or `End` block, all the code **must reside inside** those blocks. No code will be recognized outside the blocks if any of the blocks are defined.
If the function has a `Process` keyword, each object in `$input` is removed from `$input` and assigned to `$_`.
```ps1
function [<scope:>]<name> [([type]$parameter1[,[type]$parameter2])]
{
param([type]$parameter1 [,[type]$parameter2]) # other way to specify named parameters
dynamicparam {<statement list>}
# processing pipelines
    begin {<statement list>} # run once, at the start of the pipeline
    process {<statement list>} # run once for each item in the pipeline
    end {<statement list>} # run once, at the end of the pipeline
}
```
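A minimal sketch of a pipeline-aware function (the function name is illustrative):

```ps1
function Get-Square {
    begin { $count = 0 }            # runs once, before the first pipeline item
    process {
        $count += 1
        $_ * $_                     # emit the square of the current pipeline object
    }
    end { Write-Host "Processed $count items" }  # runs once, after the last item
}

1..4 | Get-Square   # 1 4 9 16, then "Processed 4 items"
```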
Optionally, it's possible to provide a brief help string that describes the default value of the parameter, by adding the `PSDefaultValue` attribute to the description of the parameter, and specifying the `Help` property of `PSDefaultValue`.
```ps1
function Func {
param (
        [PSDefaultValue(Help = "100")]
$Arg = 100
)
}
```
## Script Arguments
### Parsing Script Arguments
```ps1
$args # array of passed arguments
$args[$index] # access to the arguments
$args.count # number of arguments
```
### Script Named Arguments
In `scripts.ps1`:
```ps1
param($param1, $param2, ...) # basic usage
param($param1, $param2=defvalue, ...) # with default values
param([Type] $param1, $param2, ...) # specify a type
param([Parameter(Mandatory)]$param1, $param2, ...) # setting a parameter as necessary
param([switch]$flag=$false, ...) # custom flags
```
In PowerShell:
```ps1
.\script.ps1 arg1 arg2 # order of arguments will determine which data goes in which parameter
.\script.ps1 -param2 arg2 -param1 arg1 # custom order
```
### Filters
A filter is a type of function that runs on each object in the pipeline. A filter resembles a function with all its statements in a `Process` block.
```ps1
filter [<scope:>]<name> {<statement list>}
```
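For example (a minimal sketch, the filter name is illustrative):

```ps1
filter Get-Even { if ($_ % 2 -eq 0) { $_ } }   # runs on each pipeline object ($_)

1..10 | Get-Even   # 2 4 6 8 10
```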
## PowerShell Comment-based Help
The syntax for comment-based help is as follows:
```ps1
# .<help keyword>
# <help content>
```
or
```ps1
<#
.<help keyword>
<help content>
#>
```
Comment-based help is written as a series of comments. You can type a comment symbol `#` before each line of comments, or you can use the `<#` and `#>` symbols to create a comment block. All the lines within the comment block are interpreted as comments.
All of the lines in a comment-based help topic must be contiguous. If a comment-based help topic follows a comment that is not part of the help topic, there must be at least one blank line between the last non-help comment line and the beginning of the comment-based help.
Keywords define each section of comment-based help. Each comment-based help keyword is preceded by a dot `.`. The keywords can appear in any order. The keyword names are not case-sensitive.
### .SYNOPSIS
A brief description of the function or script. This keyword can be used only once in each topic.
### .DESCRIPTION
A detailed description of the function or script. This keyword can be used only once in each topic.
### .PARAMETER
The description of a parameter. Add a `.PARAMETER` keyword for each parameter in the function or script syntax.
Type the parameter name on the same line as the `.PARAMETER` keyword. Type the parameter description on the lines following the `.PARAMETER` keyword. Windows PowerShell interprets all text between the `.PARAMETER` line and the next keyword or the end of the comment block as part of the parameter description. The description can include paragraph breaks.
```ps1
.PARAMETER <Parameter-Name>
```
The Parameter keywords can appear in any order in the comment block, but the function or script syntax determines the order in which the parameters (and their descriptions) appear in help topic. To change the order, change the syntax.
You can also specify a parameter description by placing a comment in the function or script syntax immediately before the parameter variable name. For this to work, you must also have a comment block with at least one keyword.
If you use both a syntax comment and a `.PARAMETER` keyword, the description associated with the `.PARAMETER` keyword is used, and the syntax comment is ignored.
```ps1
<#
.SYNOPSIS
Short description here
#>
function Verb-Noun {
[CmdletBinding()]
param (
# This is the same as .Parameter
[string]$Computername
)
# Verb the Noun on the computer
}
```
### .EXAMPLE
A sample command that uses the function or script, optionally followed by sample output and a description. Repeat this keyword for each example.
### .INPUTS
The .NET types of objects that can be piped to the function or script. You can also include a description of the input objects.
### .OUTPUTS
The .NET type of the objects that the cmdlet returns. You can also include a description of the returned objects.
### .NOTES
Additional information about the function or script.
### .LINK
The name of a related topic. The value appears on the line below the `.LINK` keyword and must be preceded by a comment symbol `#` or included in the comment block.
Repeat the `.LINK` keyword for each related topic.
This content appears in the Related Links section of the help topic.
The `.Link` keyword content can also include a Uniform Resource Identifier (URI) to an online version of the same help topic. The online version opens when you use the **Online** parameter of `Get-Help`. The URI must begin with "http" or "https".
### .COMPONENT
The name of the technology or feature that the function or script uses, or to which it is related. The **Component** parameter of `Get-Help` uses this value to filter the search results returned by `Get-Help`.
### .ROLE
The name of the user role for the help topic. The **Role** parameter of `Get-Help` uses this value to filter the search results returned by `Get-Help`.
### .FUNCTIONALITY
The keywords that describe the intended use of the function. The **Functionality** parameter of `Get-Help` uses this value to filter the search results returned by `Get-Help`.
### .FORWARDHELPTARGETNAME
Redirects to the help topic for the specified command. You can redirect users to any help topic, including help topics for a function, script, cmdlet, or provider.
```ps1
# .FORWARDHELPTARGETNAME <Command-Name>
```
### .FORWARDHELPCATEGORY
Specifies the help category of the item in `.ForwardHelpTargetName`. Valid values are `Alias`, `Cmdlet`, `HelpFile`, `Function`, `Provider`, `General`, `FAQ`, `Glossary`, `ScriptCommand`, `ExternalScript`, `Filter`, or `All`. Use this keyword to avoid conflicts when there are commands with the same name.
```ps1
# .FORWARDHELPCATEGORY <Category>
```
### .REMOTEHELPRUNSPACE
Specifies a session that contains the help topic. Enter a variable that contains a **PSSession** object. This keyword is used by the [Export-PSSession](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/export-pssession?view=powershell-7)
cmdlet to find the help topics for the exported commands.
```ps1
# .REMOTEHELPRUNSPACE <PSSession-variable>
```
### .EXTERNALHELP
Specifies an XML-based help file for the script or function.
```ps1
# .EXTERNALHELP <XML Help File>
```
The `.ExternalHelp` keyword is required when a function or script is documented in XML files. Without this keyword, `Get-Help` cannot find the XML-based help file for the function or script.
The `.ExternalHelp` keyword takes precedence over other comment-based help keywords. If `.ExternalHelp` is present, `Get-Help` does not display comment-based help, even if it cannot find a help topic that matches the value of the `.ExternalHelp` keyword.
If the function is exported by a module, set the value of the `.ExternalHelp` keyword to a filename without a path. `Get-Help` looks for the specified file name in a language-specific subdirectory of the module directory. There are no requirements for the name of the XML-based help file for a function, but a best practice is to use the following format:
```ps1
<ScriptModule.psm1>-help.xml
```
If the function is not included in a module, include a path to the XML-based help file. If the value includes a path and the path contains UI-culture-specific subdirectories, `Get-Help` searches the subdirectories
recursively for an XML file with the name of the script or function in accordance with the language fallback standards established for Windows, just as it does in a module directory.
For more information about the cmdlet help XML-based help file format, see [How to Write Cmdlet Help](https://go.microsoft.com/fwlink/?LinkID=123415) in the MSDN library.
***
## Object Oriented Programming
### Classes
```ps1
[class]::func() # use function from a static class
[class]::attribute # access to static class attribute
```
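A brief sketch of a user-defined class plus the static-member syntax above, assuming PowerShell 5+ (the `Point` class is illustrative):

```ps1
class Point {
    [int]$X
    [int]$Y

    Point([int]$x, [int]$y) {
        $this.X = $x
        $this.Y = $y
    }

    [double] DistanceFromOrigin() {
        return [math]::Sqrt($this.X * $this.X + $this.Y * $this.Y)
    }
}

$p = [Point]::new(3, 4)
$p.DistanceFromOrigin()   # 5

[math]::Sqrt(2)   # static method call
[math]::PI        # static attribute access
```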

docs/python/argparse.md Normal file
# Argparse Module
## Creating a parser
```py
import argparse
parser = argparse.ArgumentParser(description="description", allow_abbrev=True)
```
**Note**: All parameters should be passed as keyword arguments.
- `prog`: The name of the program (default: `sys.argv[0]`)
- `usage`: The string describing the program usage (default: generated from arguments added to parser)
- `description`: Text to display before the argument help (default: none)
- `epilog`: Text to display after the argument help (default: none)
- `parents`: A list of ArgumentParser objects whose arguments should also be included
- `formatter_class`: A class for customizing the help output
- `prefix_chars`: The set of characters that prefix optional arguments (default: -)
- `fromfile_prefix_chars`: The set of characters that prefix files from which additional arguments should be read (default: None)
- `argument_default`: The global default value for arguments (default: None)
- `conflict_handler`: The strategy for resolving conflicting optionals (usually unnecessary)
- `add_help`: Add a -h/--help option to the parser (default: True)
- `allow_abbrev`: Allows long options to be abbreviated if the abbreviation is unambiguous. (default: True)
## [Adding Arguments](https://docs.python.org/3/library/argparse.html#the-add-argument-method)
```py
ArgumentParser.add_argument("name_or_flags", nargs="...", action="...")
```
**Note**: All parameters should be passed as keyword arguments.
- `name or flags`: Either a name or a list of option strings, e.g. `foo` or `-f`, `--foo`.
- `action`: The basic type of action to be taken when this argument is encountered at the command line.
- `nargs`: The number of command-line arguments that should be consumed.
- `const`: A constant value required by some action and nargs selections.
- `default`: The value produced if the argument is absent from the command line.
- `type`: The type to which the command-line argument should be converted to.
- `choices`: A container of the allowable values for the argument.
- `required`: Whether or not the command-line option may be omitted (optionals only).
- `help`: A brief description of what the argument does.
- `metavar`: A name for the argument in usage messages.
- `dest`: The name of the attribute to be added to the object returned by `parse_args()`.
### Actions
`store`: This just stores the argument's value. This is the default action.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo')
>>> parser.parse_args('--foo 1'.split())
Namespace(foo='1')
```
`store_const`: This stores the value specified by the const keyword argument. The `store_const` action is most commonly used with optional arguments that specify some sort of flag.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', action='store_const', const=42)
>>> parser.parse_args(['--foo'])
Namespace(foo=42)
```
`store_true` and `store_false`: These are special cases of `store_const` used for storing the values True and False respectively. In addition, they create default values of False and True respectively.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', action='store_true')
>>> parser.add_argument('--bar', action='store_false')
>>> parser.add_argument('--baz', action='store_false')
>>> parser.parse_args('--foo --bar'.split())
Namespace(foo=True, bar=False, baz=True)
```
`append`: This stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times. Example usage:
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', action='append')
>>> parser.parse_args('--foo 1 --foo 2'.split())
Namespace(foo=['1', '2'])
```
`append_const`: This stores a list, and appends the value specified by the const keyword argument to the list. (Note that the const keyword argument defaults to None.) The `append_const` action is typically useful when multiple arguments need to store constants to the same list. For example:
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--str', dest='types', action='append_const', const=str)
>>> parser.add_argument('--int', dest='types', action='append_const', const=int)
>>> parser.parse_args('--str --int'.split())
Namespace(types=[<class 'str'>, <class 'int'>])
```
`count`: This counts the number of times a keyword argument occurs. For example, this is useful for increasing verbosity levels:
**Note**: the default will be None unless explicitly set to 0.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--verbose', '-v', action='count', default=0)
>>> parser.parse_args(['-vvv'])
Namespace(verbose=3)
```
`help`: This prints a complete help message for all the options in the current parser and then exits. By default a help action is automatically added to the parser.
`version`: This expects a version= keyword argument in the add_argument() call, and prints version information and exits when invoked:
```py
>>> import argparse
>>> parser = argparse.ArgumentParser(prog='PROG')
>>> parser.add_argument('--version', action='version', version='%(prog)s 2.0')
>>> parser.parse_args(['--version'])
PROG 2.0
```
`extend`: This stores a list, and extends each argument value to the list. Example usage:
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument("--foo", action="extend", nargs="+", type=str)
>>> parser.parse_args(["--foo", "f1", "--foo", "f2", "f3", "f4"])
Namespace(foo=['f1', 'f2', 'f3', 'f4'])
```
### Nargs
ArgumentParser objects usually associate a single command-line argument with a single action to be taken.
The `nargs` keyword argument associates a different number of command-line arguments with a single action.
**Note**: If the nargs keyword argument is not provided, the number of arguments consumed is determined by the action.
`N` (an integer): N arguments from the command line will be gathered together into a list.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', nargs=2)
>>> parser.add_argument('bar', nargs=1)
>>> parser.parse_args('c --foo a b'.split())
Namespace(bar=['c'], foo=['a', 'b'])
```
**Note**: `nargs=1` produces a list of one item. This is different from the default, in which the item is produced by itself.
`?`: One argument will be consumed from the command line if possible, and produced as a single item. If no command-line argument is present, the value from default will be produced.
For optional arguments, there is an additional case: the option string is present but not followed by a command-line argument. In this case the value from const will be produced.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', nargs='?', const='c', default='d')
>>> parser.add_argument('bar', nargs='?', default='d')
>>> parser.parse_args(['XX', '--foo', 'YY'])
Namespace(bar='XX', foo='YY')
>>> parser.parse_args(['XX', '--foo'])
Namespace(bar='XX', foo='c')
>>> parser.parse_args([])
Namespace(bar='d', foo='d')
```
`*`: All command-line arguments present are gathered into a list. Note that it generally doesn't make much sense to have more than one positional argument with `nargs='*'`, but multiple optional arguments with `nargs='*'` is possible.
```py
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', nargs='*')
>>> parser.add_argument('--bar', nargs='*')
>>> parser.add_argument('baz', nargs='*')
>>> parser.parse_args('a b --foo x y --bar 1 2'.split())
Namespace(bar=['1', '2'], baz=['a', 'b'], foo=['x', 'y'])
```
`+`: All command-line args present are gathered into a list. Additionally, an error message will be generated if there wasn't at least one command-line argument present.
```py
>>> parser = argparse.ArgumentParser(prog='PROG')
>>> parser.add_argument('foo', nargs='+')
>>> parser.parse_args(['a', 'b'])
Namespace(foo=['a', 'b'])
>>> parser.parse_args([])
usage: PROG [-h] foo [foo ...]
PROG: error: the following arguments are required: foo
```
`argparse.REMAINDER`: All the remaining command-line arguments are gathered into a list. This is commonly useful for command line utilities that dispatch to other command line utilities.
```py
>>> parser = argparse.ArgumentParser(prog='PROG')
>>> parser.add_argument('--foo')
>>> parser.add_argument('command')
>>> parser.add_argument('args', nargs=argparse.REMAINDER)
>>> print(parser.parse_args('--foo B cmd --arg1 XX ZZ'.split()))
Namespace(args=['--arg1', 'XX', 'ZZ'], command='cmd', foo='B')
```
## Parsing Arguments
```py
# Convert argument strings to objects and assign them as attributes of the namespace. Return the populated namespace.
ArgumentParser.parse_args(args=None, namespace=None)
# assign attributes to an already existing object, rather than a new Namespace object
class C:
pass
c = C()
parser = argparse.ArgumentParser()
parser.add_argument('--foo')
parser.parse_args(args=['--foo', 'BAR'], namespace=c)
c.foo # BAR
# return a dict instead of a Namespace
args = parser.parse_args(['--foo', 'BAR'])
vars(args) # {'foo': 'BAR'}
```
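A minimal end-to-end sketch (the program and argument names are illustrative):

```py
import argparse

parser = argparse.ArgumentParser(prog="greet", description="Greet one or more people")
parser.add_argument("names", nargs="+", help="people to greet")
parser.add_argument("-v", "--verbose", action="store_true", help="shout the greeting")

args = parser.parse_args(["Alice", "Bob", "--verbose"])

for name in args.names:
    greeting = f"Hello, {name}!"
    print(greeting.upper() if args.verbose else greeting)
```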

# Collections Module
``` py
# COUNTER()
# dict subclass for counting hashable objects
from collections import Counter

Counter(sequence)  # -> Counter object: {item: number of occurrences in sequence, ...}

var = Counter(sequence)
var.most_common(n)  # list of the n most common elements and their counts
sum(var.values())  # total of all counts
var.clear()  # reset all counts
list(var)  # list unique elements
set(var)  # convert to a set
dict(var)  # convert to a regular dictionary
var.items()  # (element, count) pairs
Counter(dict(list_of_pairs))  # convert from a list of (element, count) pairs
var.most_common()[:-n-1:-1]  # n least common elements
var += Counter()  # remove zero and negative counts

# DEFAULTDICT()
# dict-like object that takes a default type (default_factory) as its first argument
# defaultdict will never raise a KeyError:
# non-existent keys return a default value built with default_factory
from collections import defaultdict

var = defaultdict(default_factory)

# ORDEREDDICT()
# dict subclass that "remembers" the order in which the contents are inserted
# (plain dicts preserve insertion order only since Python 3.7)
name_dict = OrderedDict()
# OrderedDicts with the same elements but a different order are considered different

var.popitem(last=True)  # remove and return the last item
var.popitem(last=False)  # remove and return the first item

# USERDICT()
# pure-Python implementation of a map that works like a regular dictionary.
# Designed to be subclassed
UserDict.data  # attribute holding the UserDict contents

# NAMEDTUPLE()
# each namedtuple is represented by its own class
from collections import namedtuple

ClassName = namedtuple("ClassName", "space separated field names")
var = ClassName(parameters)

var.attribute  # access attributes by name
var[index]  # access attributes by index
var._fields  # list of the field names
var = ClassName._make(iterable)  # build a namedtuple from an iterable
var._asdict()  # return a dict (OrderedDict before 3.8) built from the namedtuple

# DEQUE()
# double-ended queue (pronounced "deck")
# list modifiable on both ends
from collections import deque

var = deque(iterable, maxlen=num)  # -> deque object
var.append(item)  # add item to the right end
var.appendleft(item)  # add item to the left end
var.clear()  # remove all elements
var.extend(iterable)  # add iterable to the right end
var.extendleft(iterable)  # add iterable to the left end (in reverse order)
var.insert(index, item)  # insert item at position index
var.index(item, start, stop)  # return the position of item
var.count(item)
var.pop()
var.popleft()
var.remove(value)
var.reverse()  # reverse element order
var.rotate(n)  # rotate the elements n steps (to the right if n > 0, to the left if n < 0)
```

docs/python/csv.md Normal file
# CSV Module
``` python
# returns a reader object that iterates over the lines of csvfile
csv.reader(csvfile, dialect, **fmtparams)  # -> reader object

# READER METHODS
reader.__next__()  # return the next row of the iterable as a list (reader) or a dict (DictReader)

# READER ATTRIBUTES
reader.dialect  # read-only description of the dialect in use
reader.line_num  # number of lines read from the source iterator
reader.fieldnames  # field names (DictReader only)

# convert data to delimited strings
# csvfile must support .write()
# None is converted to the empty string (simplifies dumping SQL NULL)
csv.writer(csvfile, dialect, **fmtparams)  # -> writer object

# WRITER METHODS
# row must be an iterable of strings or numbers, or a dictionary
writer.writerow(row)  # write row formatted according to the current dialect
writer.writerows(rows)  # write all the elements in rows (an iterable of row) formatted according to the current dialect

# MODULE FUNCTIONS
csv.register_dialect(name, dialect, **fmtparams)  # associate dialect with name (name must be a string)
csv.unregister_dialect(name)  # delete the dialect associated with name
csv.get_dialect(name)  # return the dialect associated with name
csv.list_dialects()  # list all registered dialects
csv.field_size_limit(new_limit)  # return the current max field size; set it if new_limit is given

'''
csvfile -- iterable object returning a string on each __next__() call;
           if csvfile is a file it must be opened with newline='' (universal newline)
dialect -- specify the dialect of the csv (excel, ...) (OPTIONAL)
fmtparams -- override formatting parameters (OPTIONAL) https://docs.python.org/3/library/csv.html#csv-fmt-params
'''

# acts like a reader but maps the info of each row into a dict whose keys are given by fieldnames
class csv.DictReader(f, fieldnames=None, restkey=None, restval=None, dialect, *args, **kwargs)
'''
f -- file to read
fieldnames -- sequence defining the names of the csv fields; if omitted the first row of f is used
restkey, restval -- if len(row) > len(fieldnames) the excess data is stored in restkey/restval
additional parameters are passed to the underlying reader instance
'''

class csv.DictWriter(f, fieldnames, restval='', extrasaction, dialect, *args, **kwargs)
'''
f -- file to write
fieldnames -- sequence defining the names of the csv fields (REQUIRED)
restval -- value written for keys missing from the row dict
extrasaction -- action taken if the dict passed to writerow() contains keys not in fieldnames
                ('raise' raises a ValueError, 'ignore' ignores the extra keys)
additional parameters are passed to the underlying writer instance
'''

# DICTWRITER METHODS
writer.writeheader()  # write a header row with the field names specified by fieldnames

# class used to infer the format of the CSV
class csv.Sniffer
Sniffer.sniff(sample, delimiters=None)  # parse the sample and return a Dialect class; delimiters is a sequence of possible field delimiters
Sniffer.has_header(sample)  # -> bool, True if the first row is a series of column headings

# CONSTANTS
csv.QUOTE_ALL  # instructs the writer to quote ("") all fields
csv.QUOTE_MINIMAL  # instructs the writer to quote only fields containing special characters such as the delimiter or quotechar
csv.QUOTE_NONNUMERIC  # instructs the writer to quote all non-numeric fields
csv.QUOTE_NONE  # instructs the writer to never quote fields
```
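A minimal usage sketch (`people.csv`, `out.csv` and their fields are illustrative):

```python
import csv

# read rows as dicts, field names taken from the first row
with open("people.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["name"], row["age"])

# write rows from dicts
with open("out.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "age"])
    writer.writeheader()
    writer.writerow({"name": "Alice", "age": 30})
```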

docs/python/ftplib.md Normal file
# Ftplib Module
## FTP CLASSES
```py
ftplib.FTP(host="", user="", password="", acct="")
# if HOST => connect(host)
# if USER => login(user, password, acct)
ftplib.FTP_TLS(host="", user="", password="", acct="")
```
## EXCEPTIONS
```py
ftplib.error_reply # unexpected error from server
ftplib.error_temp # temporary error (response codes 400-499)
ftplib.error_perm # permanent error (response codes 500-599)
ftplib.error_proto # error not in ftp specs
ftplib.all_errors # tuple of all exceptions
```
## FTP OBJECTS
```py
# methods for text files have the -lines suffix
# methods for binary files have the -binary suffix
# CONNECTION
FTP.connect(host="", port=0) # used once per instance
# DON'T CALL if host was supplied at instance creation
FTP.getwelcome() # return welcome message
FTP.login(user='anonymous', password='', acct='')
# called once per instance after connection is established
# DEFAULT PASSWORD: anonymous@
# DON'T CALL if host was supplied at instance creation
FTP.sendcmd(cmd) # send command string and return response
FTP.voidcmd(cmd) # send command string and return nothing if successful
# FILE TRANSFER
FTP.abort() # abort in progress file transfer (can fail)
FTP.transfercmd(cmd, rest=None) # returns socket for connection
# CMD active mode: send EPRT or PORT command and CMD and accept connection
# CMD passive mode: send EPSV or PASV and start transfer command
FTP.retrbinary(cmd, callback, blocksize=8192, rest=None) # retrieve file in binary mode
# CMD: appropriate RETR command ('RETR filename')
# CALLBACK: func called on every block of data received
FTP.retrlines(cmd, callback=None)
# retrieve file or dir list in ASCII transfer mode
# CMD: appropriate RETR, LIST (list and info of files), NLST (list of file names)
# DEFAULT CALLBACK: sys.stdout
FTP.set_pasv(value) # set passive mode if value is true, otherwise disable it
# passive mode on by default
FTP.storbinary(cmd, fp, blocksize=8192, callback=None, rest=None) # store file in binary mode
# CMD: appropriate STOR command ('STOR filename')
# FP: {file object in binary mode} read until EOF in blocks of blocksize
# CALLBACK: func called on each block after sending
FTP.storlines(cmd, fp, callback=None) # store file in ASCII transfer mode
# CMD: appropriate STOR command ('STOR filename')
# FP: {file object} read until EOF
# CALLBACK: func called on each block after sending
```
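A minimal download sketch (host, credentials and file names are illustrative):

```py
from ftplib import FTP

with FTP("ftp.example.com", "user", "password") as ftp:  # connect + login
    print(ftp.getwelcome())
    with open("local.bin", "wb") as fp:
        ftp.retrbinary("RETR remote.bin", fp.write)  # fp.write called on every received block
```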

docs/python/itertools.md Normal file
# Itertools Module
``` py
# accumulate([1,2,3,4,5]) -> 1, 3 (1+2), 6 (1+2+3), 10 (1+2+3+4), 15 (1+2+3+4+5)
# accumulate(iter, func) -> iter[0], func(iter[0], iter[1]), func(prev, iter[2]), ...
accumulate(iterable, func)

# iterator that returns elements from the first iterable,
# then proceeds to the next one until the end of the iterables
chain(*iterables)

# concatenates the elements of a single iterable even if it contains sequences
chain.from_iterable(iterable)

# returns subsequences of length r of the iterable
# elements are treated as unique based on their position, not their value
combinations(iterable, r)

# returns subsequences of length r of the iterable, allowing repetition of the elements
combinations_with_replacement(iterable, r)

# filters data, returning only the elements that have
# a corresponding element in selectors that is true
compress(data, selectors)

# iterator returning evenly spaced values in an infinite sequence
count(start, step)

# repeats the elements of the iterable indefinitely
cycle(iterable)

# discards elements of the iterable as long as the predicate is true
dropwhile(predicate, iterable)

# returns the values for which the predicate is false
filterfalse(predicate, iterable)

# iterator returning tuples (key, group)
# key is the grouping criterion
# group is a generator yielding the group members
groupby(iterable, key=None)

# returns slices of the iterable
islice(iterable, stop)
islice(iterable, start, stop, step)

# returns all permutations of length r of the iterable
permutations(iterable, r=None)

# Cartesian product of the iterables
# loops over the iterables in input order
# [product('ABCD', 'xy') -> Ax Ay Bx By Cx Cy Dx Dy]
# [product('ABCD', repeat=2) -> AA AB AC AD BA BB BC BD CA CB CC CD DA DB DC DD]
product(*iterables, repeat=1)

# returns the object over and over, indefinitely unless times is specified
repeat(object, times)

# computes func(*item) for each item in the iterable
# used if the iterable is a pre-zipped sequence (sequence of tuples grouping the arguments)
starmap(func, iterable)

# returns values from the iterable as long as the predicate is true
takewhile(predicate, iterable)

# returns n independent iterators from a single iterable
tee(iterable, n=2)

# produces an iterator that aggregates elements from each iterable
# if the iterables have different lengths the missing values are filled with fillvalue
zip_longest(*iterables, fillvalue=None)
```

docs/python/json.md Normal file
# JSON Module
## JSON Format
JSON (JavaScript Object Notation) is a lightweight data-interchange format.
It is easy for humans to read and write.
It is easy for machines to parse and generate.
JSON is built on two structures:
- A collection of name/value pairs.
- An ordered list of values.
An OBJECT is an unordered set of name/value pairs.
An object begins with `{` (left brace) and ends with `}` (right brace).
Each name is followed by `:` (colon) and the name/value pairs are separated by `,` (comma).
An ARRAY is an ordered collection of values.
An array begins with `[` (left bracket) and ends with `]` (right bracket).
Values are separated by `,` (comma).
A VALUE can be a string in double quotes, or a number,
or true or false or null, or an object or an array.
These structures can be nested.
A STRING is a sequence of zero or more Unicode characters,
wrapped in double quotes, using backslash escapes.
A CHARACTER is represented as a single character string.
A STRING is very much like a C or Java string.
A NUMBER is very much like a C or Java number,
except that the octal and hexadecimal formats are not used.
WHITESPACE can be inserted between any pair of tokens.
## Usage
```python
# serialize obj as JSON formatted stream to fp
json.dump(obj, fp, cls=None, indent=None, separators=None, sort_keys=False)
# CLS: {custom JSONEncoder} -- specifies custom encoder to be used
# INDENT: {int > 0, string} -- array elements, object members pretty-printed with indent level
# SEPARATORS: {tuple} -- (item_separator, key_separator)
# [default: (', ', ': ') if indent=None, (',', ':') otherwise],
# specify (',', ':') to eliminate whitespace
# SORT_KEYS: {bool} -- if True dict sorted by key
# serialize obj as JSON formatted string
json.dumps(obj, cls=None, indent=None, separators=None, sort_keys=False)
# CLS: {custom JSONEncoder} -- specifies custom encoder to be used
# INDENT: {int > 0, string} -- array elements, object members pretty-printed with indent level
# SEPARATORS: {tuple} -- (item_separator, key_separator)
# [default: (', ', ': ') if indent=None, (',', ':') otherwise],
# specify (',', ':') to eliminate whitespace
# SORT_KEYS: {bool} -- if True dict sorted by key
# deserialize fp to python object
json.load(fp, cls=None)
# CLS: {custom JSONDecoder} -- specifies custom decoder to be used
# deserialize s (string, bytes or bytearray containing JSON doc) to python object
json.loads(s, cls=None)
# CLS: {custom JSONDecoder} -- specifies custom decoder to be used
```
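A minimal round-trip sketch of the functions above (the dict is illustrative):

```python
import json

record = {"name": "Ada", "tags": ["a", "b"], "active": True}

text = json.dumps(record, indent=2, sort_keys=True)  # python object -> JSON formatted string
back = json.loads(text)                              # JSON string -> python object
assert back == record
```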
## Default Decoder (`json.JSONDecoder()`)
Conversions (JSON -> Python):
- object -> dict
- array -> list
- string -> str
- number (int) -> int
- number (real) -> float
- true -> True
- false -> False
- null -> None
## Default Encoder (`json.JSONEncoder()`)
Conversions (Python -> JSON):
- dict -> object
- list, tuple -> array
- str -> string
- int, float, Enums -> number
- True -> true
- False -> false
- None -> null
## Extending JSONEncoder (Example)
```python
import json
class ComplexEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, complex):
            return [obj.real, obj.imag]
        # Let the base class default method raise the TypeError
        return json.JSONEncoder.default(self, obj)
```
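A possible usage of the encoder above, passed through the `cls` argument:

```python
print(json.dumps(2 + 1j, cls=ComplexEncoder))  # [2.0, 1.0]
```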
## Retrieving Data from json dict
```python
data = json.loads(json_string)  # parse the JSON document into a dict
data["key"] # retrieve the value associated with the key
data["outer key"]["nested key"] # nested key value retrieval
```
@ -0,0 +1,167 @@
# [Beautiful Soup Library](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
## Making the Soup
```py
from bs4 import BeautifulSoup
import requests
import lxml # better html parser than built-in
response = requests.get("url") # retrieve a web page
soup = BeautifulSoup(response.text, "html.parser") # parse HTML from response w/ python default HTML parser
soup = BeautifulSoup(response.text, "lxml") # parse HTML from response w/ lxml parser
soup.prettify() # prettify parsed HTML for display
```
## Kinds of Objects
Beautiful Soup transforms a complex HTML document into a complex tree of Python objects.
### Tag
A Tag object corresponds to an XML or HTML tag in the original document
```py
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser') # parse HTML/XML
tag = soup.b
type(tag) # <class 'bs4.element.Tag'>
print(tag) # <b class="boldest">Extremely bold</b>
tag.name # tag name
tag["attribute"] # access to tag attribute values
tag.attrs # dict of attribute-value pairs
```
### Navigable String
A string corresponds to a bit of text within a tag. Beautiful Soup uses the `NavigableString` class to contain these bits of text.
## Navigating the Tree
### Going Down
```py
soup.<tag>.<child_tag> # navigate using tag names
<tag>.contents # direct children as a list
<tag>.children # direct children as a generator for iteration
<tag>.descendants # iterator over all children, recursive
<tag>.string # tag contents, does not have further children
# If a tag's only child is another tag, and that tag has a .string, then the parent tag is considered to have the same .string as its child
# If a tag contains more than one thing, then it's not clear what .string should refer to, so .string is defined to be None
<tag>.strings # generator to iterate over all children's strings (will list white space)
<tag>.stripped_strings # generator to iterate over all children's strings (will NOT list white space)
```
### Going Up
```py
<tag>.parent # tag's direct parent (BeautifulSoup has parent None, html has parent BeautifulSoup)
<tag>.parents # iterable over all parents
```
### Going Sideways
```py
<tag>.previous_sibling
<tag>.next_sibling
<tag>.previous_siblings
<tag>.next_siblings
```
### Going Back and Forth
```py
<tag>.previous_element # whatever was parsed immediately before
<tag>.next_element # whatever was parsed immediately afterwards
<tag>.previous_elements # iterable over whatever was parsed before, in order
<tag>.next_elements # iterable over whatever was parsed afterwards, in order
```
## Searching the Tree
## Filter Types
```py
soup.find_all("tag") # by name
soup.find_all(["tag1", "tag2"]) # multiple tags in a list
soup.find_all(function) # based on a bool function
soup.find_all(True) # Match everything
```
## Methods
Methods arguments:
- `name` (string): tag to search for
- `attrs` (dict): attribute-value pairs to search for
- `string` (string): search by string contents rather than by tag
- `limit` (int): limit the number of results
- `**kwargs`: turned into a filter on one of a tag's attributes
```py
find_all(name, attrs, recursive, string, limit, **kwargs) # several results
find(name, attrs, recursive, string, **kwargs) # one result
find_parents(name, attrs, string, limit, **kwargs) # several results
find_parent(name, attrs, string, **kwargs) # one result
find_next_siblings(name, attrs, string, limit, **kwargs) # several results
find_next_sibling(name, attrs, string, **kwargs) # one result
find_previous_siblings(name, attrs, string, limit, **kwargs) # several results
find_previous_sibling(name, attrs, string, **kwargs) # one result
find_all_next(name, attrs, string, limit, **kwargs) # several results
find_next(name, attrs, string, **kwargs) # one result
find_all_previous(name, attrs, string, limit, **kwargs) # several results
find_previous(name, attrs, string, **kwargs) # one result
soup("html_tag") # same as soup.find_all("html_tag")
soup.find("html_tag").text # text of the found tag
soup.select("css_selector") # search for CSS selectors of HTML tags
```
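A small sketch of these search methods on an illustrative snippet of markup (the HTML and class names are made up):

```py
from bs4 import BeautifulSoup

html = '<div><a class="link" href="/a">first</a><a class="link" href="/b">second</a></div>'
soup = BeautifulSoup(html, "html.parser")

links = soup.find_all("a", attrs={"class": "link"})  # all <a class="link"> tags
print([a["href"] for a in links])                    # ['/a', '/b']
print(soup.find("a").text)                           # 'first'
print(soup.select("div a.link")[1].text)             # 'second'
```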
## Modifying the Tree
### Changing Tag Names an Attributes
```py
<tag>.name = "new_html_tag" # modify the tag type
<tag>["attribute"] = "value" # modify the attribute value
del <tag>["attribute"] # remove the attribute
soup.new_tag("name", <attribute> = "value") # create a new tag with specified name and attributes
<tag>.string = "new content" # modify tag text content
<tag>.append(item) # append to Tag content
<tag>.extend([item1, item2]) # add every element of the list in order
<tag>.insert(position: int, item) # like .insert in Python list
<tag>.insert_before(new_tag) # insert tags or strings immediately before something else in the parse tree
<tag>.insert_after(new_tag) # insert tags or strings immediately after something else in the parse tree
<tag>.clear() # remove all tag's contents
<tag>.extract() # extract and return the tag from the tree (operates on self)
<tag>.string.extract() # extract and return the string from the tree (operates on self)
<tag>.decompose() # remove a tag from the tree, then completely destroy it and its contents
<tag>.decomposed # check if tag has been decomposed
<tag>.replace_with(item) # remove a tag or string from the tree, and replaces it with the tag or string of choice
<tag>.wrap(other_tag) # wrap an element in the tag you specify, return the new wrapper
<tag>.unwrap() # replace a tag with whatever's inside, good for stripping out markup
<tag>.smooth() # clean up the parse tree by consolidating adjacent strings
```
328
docs/python/libs/numpy.md Normal file
@ -0,0 +1,328 @@
# NumPy Lib
## MOST IMPORTANT ATTRIBUTES
```py
array.ndim # number of axes (dimensions) of the array
array.shape # dimensions of the array, tuple of integers
array.size # total number of elements in the array
array.itemsize # size in bytes of each element
array.data # buffer containing the array elements
```
## ARRAY CREATION
Unless explicitly specified `np.array` tries to infer a good data type for the array that it creates.
The data type is stored in a special dtype object.
```py
var = np.array(sequence) # creates array
var = np.asarray(sequence) # convert input to array
var = np.ndarray(*sequence) # creates multidimensional array
var = np.asanyarray(*sequence) # convert the input to an ndarray
# nested sequences will be converted to multidimensional array
var = np.zeros(ndarray.shape) # array with all zeros
var = np.ones(ndarray.shape) # array with all ones
var = np.empty(ndarray.shape) # uninitialized array (contents are arbitrary)
var = np.identity(n) # identity array (n x n)
var = np.arange(start, stop, step) # creates an array with parameters specified
var = np.linspace(start, stop, num_of_elements) # step of elements calculated based on parameters
```
## DATA TYPES FOR NDARRAYS
```py
var = array.astype(np.dtype) # copy of the array, cast to a specified type
# return TypeError if casting fails
```
The numerical `dtypes` are named the same way: a type name followed by a number indicating the number of bits per element.
| TYPE | TYPE CODE | DESCRIPTION |
|-----------------------------------|--------------|--------------------------------------------------------------------------------------------|
| int8, uint8 | i1, u1 | Signed and unsigned 8-bit (1 byte) integer types |
| int16, uint16 | i2, u2 | Signed and unsigned 16-bit integer types |
| int32, uint32 | i4, u4 | Signed and unsigned 32-bit integer types |
| int64, uint64 | i8, u8 | Signed and unsigned 64-bit integer types |
| float16 | f2 | Half-precision floating point |
| float32 | f4 or f | Standard single-precision floating point. Compatible with C float |
| float64 | f8 or d | Standard double-precision floating point. Compatible with C double and Python float object |
| float128 | f16 or g | Extended-precision floating point |
| complex64, complex128, complex256 | c8, c16, c32 | Complex numbers represented by two 32-, 64-, or 128-bit floats, respectively |
| bool | ? | Boolean type storing True and False values |
| object | O | Python object type |
| string_ | `S<num>` | Fixed-length string type (1 byte per character), `<num>` is string length |
| unicode_ | `U<num>` | Fixed-length unicode type, `<num>` is length |
## OPERATIONS BETWEEN ARRAYS AND SCALARS
Any arithmetic operation between equal-size arrays applies the operation element-wise; operations with a scalar broadcast the scalar to every element (see the sketch below).
array `+` scalar --> element-wise addition (`[1, 2, 3] + 2 = [3, 4, 5]`)
array `-` scalar --> element-wise subtraction (`[1, 2, 3] - 2 = [-1, 0, 1]`)
array `*` scalar --> element-wise multiplication (`[1, 2, 3] * 3 = [3, 6, 9]`)
array `/` scalar --> element-wise division (`[1, 2, 3] / 2 = [0.5, 1, 1.5]`)
array_1 `+` array_2 --> element-wise addition (`[1, 2, 3] + [1, 2, 3] = [2, 4, 6]`)
array_1 `-` array_2 --> element-wise subtraction (`[1, 2, 4] - [3, 2, 1] = [-2, 0, 3]`)
array_1 `*` array_2 --> element-wise multiplication (`[1, 2, 3] * [3, 2, 1] = [3, 4, 3]`)
array_1 `/` array_2 --> element-wise division (`[1, 2, 3] / [3, 2, 1] = [0.33, 1, 3]`)
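The sketch below shows the scalar broadcasting and element-wise behaviour listed above:

```py
import numpy as np

a = np.array([1, 2, 3])
b = np.array([3, 2, 1])

print(a + 2)   # [3 4 5] -- scalar is broadcast to every element
print(a - b)   # [-2  0  2]
print(a * b)   # [3 4 3]
print(a / b)   # approximately [0.33 1. 3.]
```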
## SHAPE MANIPULATION
```py
np.reshape(array, new_shape) # changes the shape of the array
np.ravel(array) # returns the array flattened
array.resize(shape) # modifies the array itself
array.T # returns the array transposed
np.transpose(array) # returns the array transposed
np.swapaxes(array, first_axis, second_axis) # interchange two axes of an array
# if array is an ndarray, then a view of it is returned; otherwise a new array is created
```
## JOINING ARRAYS
```py
np.vstack((array1, array2)) # takes tuple, vertical stack of arrays (column wise)
np.hstack((array1, array2)) # takes a tuple, horizontal stack of arrays (row wise)
np.dstack((array1, array2)) # takes a tuple, depth wise stack of arrays (3rd dimension)
np.stack((array1, array2, ...), axis) # joins a sequence of arrays along a new axis (axis is an int)
np.concatenate((array1, array2, ...), axis) # joins a sequence of arrays along an existing axis (axis is an int)
```
## SPLITTING ARRAYS
```py
np.split(array, indices) # splits an array into equally long sub-arrays (indices is int), if not possible raises error
np.vsplit(array, indices) # splits an array equally into sub-arrays vertically (row wise) if not possible raises error
np.hsplit(array, indices) # splits an array equally into sub-arrays horizontally (column wise) if not possible raises error
np.dsplit(array, indices) # splits an array into equal sub-arrays along the 3rd axis (depth) if not possible raises error
np.array_split(array, indices) # splits an array into sub-arrays, arrays can be of different lengths
```
## VIEW()
```py
var = array.view() # creates a new array that looks at the same data
# slicing returns a view
# view shapes are separated but assignment changes all arrays
```
## COPY()
```py
var = array.copy() # creates a deep copy of the array
```
## INDEXING, SLICING, ITERATING
1-dimensional --> sliced, iterated and indexed as standard
n-dimensional --> one index per axis, index given in tuple separated by commas `[i, j] (i, j)`
dots (`...`) represent as many colons as needed to produce complete indexing tuple
- `x[1, 2, ...] == [1, 2, :, :, :]`
- `x[..., 3] == [:, :, :, :, 3]`
- `x[4, ..., 5, :] == [4, :, :, 5, :]`
iteration is over the first index; use the `.flat` attribute to iterate over each element (see the sketch below)
- `x[*bool]` returns row with corresponding True index
- `x[condition]` return only elements that satisfy condition
- x`[[*index]]` return rows ordered by indexes
- `x[[*i], [*j]]` return elements selected by tuple (i, j)
- `x[ np.ix_( [*i], [*j] ) ]` return rectangular region
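A short sketch of the indexing forms above on a small 2D array:

```py
import numpy as np

x = np.arange(12).reshape(3, 4)   # rows: [0 1 2 3], [4 5 6 7], [8 9 10 11]

print(x[1, 2])                    # 6 -- one index per axis
print(x[x > 8])                   # [ 9 10 11] -- boolean condition
print(x[[2, 0]])                  # rows 2 and 0, in that order
print(x[[0, 1], [1, 3]])          # [1 7] -- elements (0, 1) and (1, 3)
print(x[np.ix_([0, 2], [1, 3])])  # rectangular region: rows 0, 2 x columns 1, 3
```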
## UNIVERSAL FUNCTIONS (ufunc)
Functions that performs element-wise operations (vectorization).
```py
np.abs(array) # vectorized abs(), return element absolute value
np.fabs(array) # faster abs() for non-complex values
np.sqrt(array) # vectorized square root (x^0.5)
np.square(array) # vectorized square (x^2)
np.exp(array) # vectorized natural exponentiation (e^x)
np.log(array) # vectorized natural log(x)
np.log10(array) # vectorized log10(x)
np.log2(array) # vectorized log2(x)
np.log1p(array) # vectorized log(1 + x)
np.sign(array) # vectorized sign (1, 0, -1)
np.ceil(array) # vectorized ceil()
np.floor(array) # vectorized floor()
np.rint(array) # vectorized round() to nearest int
np.modf(array) # vectorized divmod(), returns the fractional and integral parts of element
np.isnan(array) # vectorized x == NaN, return boolean array
np.isinf(array) # vectorized test for positive or negative infinity, return boolean array
np.isfinite(array) # vectorized test for finiteness, returns boolean array
np.cos(array) # vectorized cos(x)
np.sin(array) # vectorized sin(x)
np.tan(array) # vectorized tan(x)
np.cosh(array) # vectorized cosh(x)
np.sinh(array) # vectorized sinh(x)
np.tanh(array) # vectorized tanh(x)
np.arccos(array) # vectorized arccos(x)
np.arcsin(array) # vectorized arcsin(x)
np.arctan(array) # vectorized arctan(x)
np.arccosh(array) # vectorized arccosh(x)
np.arcsinh(array) # vectorized arcsinh(x)
np.arctanh(array) # vectorized arctanh(x)
np.logical_not(array) # vectorized not(x), equivalent to ~array for boolean arrays
np.add(x_array, y_array) # vectorized addition
np.subtract(x_array, y_array) # vectorized subtraction
np.multiply(x_array, y_array) # vectorized multiplication
np.divide(x_array, y_array) # vectorized division
np.floor_divide(x_array, y_array) # vectorized floor division
np.power(x_array, y_array) # vectorized power
np.maximum(x_array, y_array) # vectorized maximum
np.minimum(x_array, y_array) # vectorized minimum
np.fmax(x_array, y_array) # vectorized maximum, ignores NaN
np.fmin(x_array, y_array) # vectorized minimum, ignores NaN
np.mod(x_array, y_array) # vectorized modulus
np.copysign(x_array, y_array) # vectorized copy sign from y_array to x_array
np.greater(x_array, y_array) # vectorized x > y
np.less(x_array, y_array) # vectorized x < y
np.greater_equal(x_array, y_array) # vectorized x >= y
np.less_equal(x_array, y_array) # vectorized x <= y
np.equal(x_array, y_array) # vectorized x == y
np.not_equal(x_array, y_array) # vectorized x != y
np.logical_and(x_array, y_array) # vectorized x & y
np.logical_or(x_array, y_array) # vectorized x | y
np.logical_xor(x_array, y_array) # vectorized x ^ y
```
## CONDITIONAL LOGIC AS ARRAY OPERATIONS
```py
np.where(condition, x, y) # return x if condition == True, y otherwise
```
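For example, a minimal sketch clipping negative values to zero:

```py
import numpy as np

arr = np.array([-2, -1, 0, 1, 2])
print(np.where(arr > 0, arr, 0))  # [0 0 0 1 2]
```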
## MATHEMATICAL AND STATISTICAL METHODS
`np.method(array, args)` or `array.method(args)`.
Boolean values are coerced to 1 (`True`) and 0 (`False`).
```py
np.sum(array, axis=None) # sum of array elements over a given axis
np.median(array, axis=None) # median along the specified axis
np.mean(array, axis=None) # arithmetic mean along the specified axis
np.average(array, axis=None) # weighted average along the specified axis
np.std(array, axis=None) # standard deviation along the specified axis
np.var(array, axis=None) # variance along the specified axis
np.min(array, axis=None) # minimum value along the specified axis
np.max(array, axis=None) # maximum value along the specified axis
np.argmin(array, axis=None) # indices of the minimum values along an axis
np.argmax(array, axis=None) # indices of the maximum values
np.cumsum(array, axis=None) # cumulative sum of the elements along a given axis
np.cumprod(array, axis=None) # cumulative product of the elements along a given axis
```
## METHODS FOR BOOLEAN ARRAYS
```py
np.all(array, axis=None) # test whether all array elements along a given axis evaluate to True
np.any(array, axis=None) # test whether any array element along a given axis evaluates to True
```
## SORTING
```py
array.sort(axis=-1) # sort an array in-place (axis = None applies on flattened array)
np.sort(array, axis=-1) # return a sorted copy of an array (axis = None applies on flattened array)
```
## SET LOGIC
```py
np.unique(array) # sorted unique elements of an array
np.intersect1d(x, y) # sorted common elements in x and y
np.union1d(x, y) # sorted union of elements
np.in1d(x, y) # boolean array indicating whether each element of x is contained in y
np.setdiff1d(x, y) # Set difference, elements in x that are not in y
np.setxor1d() # Set symmetric differences; elements that are in either of the arrays, but not both
```
## FILE I/O WITH ARRAYS
```py
np.save(file, array) # save array to binary file in .npy format
np.savez(file, *array) # save several arrays into a single file in uncompressed .npz format
np.savez_compressed(file, *args, **kwargs) # save several arrays into a single file in compressed .npz format
# *ARGS: arrays to save to the file. arrays will be saved with names "arr_0", "arr_1", and so on
# **KWARGS: arrays to save to the file. arrays will be saved in the file with the keyword names
np.savetxt(file, X, fmt="%.18e", delimiter=" ") # save array to text file
# X: 1D or 2D
# FMT: Python Format Specification Mini-Language
# DELIMITER: {str} -- string used to separate values
np.load(file, allow_pickle=False) # load arrays or pickled objects from .npy, .npz or pickled files
np.loadtxt(file, dtype=float, comments="#", delimiter=None)
# DTYPE: {data type} -- data-type of the resulting array
# COMMENTS: {str} -- characters used to indicate the start of a comment. None implies no comments
# DELIMITER: {str} -- string used to separate values
```
## LINEAR ALGEBRA
```py
np.diag(array, k=0) # extract a diagonal or construct a diagonal array
# K: {int} -- k>0 diagonals above main diagonal, k<0 diagonals below main diagonal (main diagonal k = 0)
np.dot(x ,y) # matrix dot product
np.trace(array, offset=0, dtype=None, out=None) # return the sum along diagonals of the array
# OFFSET: {int} -- offset of the diagonal from the main diagonal
# dtype: {dtype} -- determines the data-type of the returned array
# OUT: {ndarray} -- array into which the output is placed
np.linalg.det(A) # compute the determinant of an array
np.linalg.eig(A) # compute the eigenvalues and right eigenvectors of a square array
np.linalg.inv(A) # compute the (multiplicative) inverse of a matrix
# A_inv satisfies dot(A, A_inv) = dot(A_inv, A) = eye(A.shape[0])
np.linalg.pinv(A) # compute the (Moore-Penrose) pseudo-inverse of a matrix
np.linalg.qr() # factor the matrix a as qr, where q is orthonormal and r is upper-triangular
np.linalg.svd(A) # Singular Value Decomposition
np.linalg.solve(A, B) # solve a linear matrix equation, or system of linear scalar equations AX = B
np.linalg.lstsq(A, B) # return the least-squares solution to a linear matrix equation AX = B
```
## RANDOM NUMBER GENERATION
```py
np.random.seed() # seed the random number generator
np.random.rand() # random floats from a uniform distribution over [0, 1)
np.random.randn() # samples from the standard normal distribution
np.random.randint() # random integers from low (inclusive) to high (exclusive)
np.random.Generator.permutation(x) # randomly permute a sequence, or return a permuted range
np.random.Generator.shuffle(x) # Modify a sequence in-place by shuffling its contents
np.random.Generator.beta(a, b, size=None) # draw samples from a Beta distribution
# A: {float, array floats} -- Alpha, > 0
# B: {int, tuple ints} -- Beta, > 0
np.random.Generator.binomial(n, p, size=None) # draw samples from a binomial distribution
# N: {int, array ints} -- parameter of the distribution, >= 0
# P: {float, array floats} -- Parameter of the distribution, >= 0 and <= 1
np.random.Generator.chisquare(df, size=None)
# DF: {float, array floats} -- degrees of freedom, > 0
np.random.Generator.gamma(shape, scale=1.0, size=None) # draw samples from a Gamma distribution
# SHAPE: {float, array floats} -- shape of the gamma distribution, != 0
np.random.Generator.normal(loc=0.0, scale=1.0, size=None) # draw random samples from a normal (Gaussian) distribution
# LOC: {float, array floats} -- mean ("centre") of distribution
# SCALE: {float, array floats} -- standard deviation of distribution, != 0
np.random.Generator.poisson(lam=1.0, size=None) # draw samples from a Poisson distribution
# LAM: {float, array floats} -- expectation of interval, >= 0
np.random.Generator.uniform(low=0.0, high=1.0, size=None) # draw samples from a uniform distribution
# LOW: {float, array floats} -- lower boundary of the output interval
# HIGH: {float, array floats} -- upper boundary of the output interval
np.random.Generator.zipf(a, size=None) # draw samples from a Zipf distribution
# A: {float, array floats} -- distribution parameter, > 1
```
646
docs/python/libs/pandas.md Normal file
@ -0,0 +1,646 @@
# Pandas
## Basic Pandas Imports
```py
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
```
## SERIES
1-dimensional labelled array, whose axis labels are referred to as the INDEX.
Index can contain repetitions.
```py
s = Series(data, index=index, name='name')
# DATA: {python dict, ndarray, scalar value}
# NAME: {string}
s = Series(dict) # Series created from python dict, dict keys become index values
```
### INDEXING / SELECTION / SLICING
```py
s['index'] # selection by index label
s[condition] # return slice selected by condition
s[ : ] # slice endpoint included
s[ : ] = *value # modify value of entire slice
s[condition] = *value # modify slice by condition
```
## MISSING DATA
Missing data appears as NaN (Not a Number).
```py
pd.isnull(array) # return a Series index-bool indicating which indexes don't have data
pd.notnull(array) # return a Series index-bool indicating which indexes have data
array.isnull()
array.notnull()
```
### SERIES ATTRIBUTES
```py
s.values # NumPy representation of Series
s.index # index object of Series
s.name = "Series name" # renames Series object
s.index.name = "index name" # renames index
```
### SERIES METHODS
```py
pd.Series.isin(self, values) # boolean Series showing whether each element in the Series matches elements in values exactly
# Conform Series to new index, new object produced unless the new index is equivalent to current one and copy=False
pd.Series.reindex(self, index=None, **kwargs)
# INDEX: {array} -- new labels / index
# METHOD: {none (don't fill gaps), pad (fill or carry values forward), backfill (fill or carry values backward)}-- hole filling method
# COPY: {bool} -- return new object even if index is same -- DEFAULT True
# FILLVALUE: {scalar} --value to use for missing values. DEFAULT NaN
pd.Series.drop(self, index=None, **kwargs) # return Series with specified index labels removed
# INPLACE: {bool} -- if true do operation in place and return None -- DEFAULT False
# ERRORS: {ignore, raise} -- If "ignore", suppress error and existing labels are dropped
# KeyError raised if not all of the labels are found in the selected axis
pd.Series.value_counts(self, normalize=False, sort=True, ascending=False, bins=None, dropna=True)
# NORMALIZE: {bool} -- if True then object returned will contain relative frequencies of unique values
# SORT: {bool} -- sort by frequency -- DEFAULT True
# ASCENDING: {bool} -- sort in ascending order -- DEFAULT False
# BINS: {int} -- group values into half-open bins, only works with numeric data
# DROPNA: {bool} -- don't include counts of NaN
```
## DATAFRAME
2-dimensional labeled data structure with columns of potentially different types.
Index and columns can contain repetitions.
```py
df = DataFrame(data, index=row_labels, columns=column_labels)
# DATA: {list, dict (of lists), nested dicts, series, dict of 1D ndarray, 2D ndarray, DataFrame}
# INDEX: {list of row_labels}
# COLUMNS: {list of column_labels}
# outer dict keys interpreted as index labels, inner dict keys interpreted as column labels
# INDEXING / SELECTION / SLICING
df[col] # column selection
df.at[row, col] # access a single value for a row/column label pair
df.iat[row, col] # access a single value for a row/column pair by integer position
df.column_label # column selection
df.loc[label] # row selection by label
df.iloc[loc] # row selection by integer location
df[ : ] # slice rows
df[bool_vec] # slice rows by boolean vector
df[condition] # slice rows by condition
df.loc[:, ["column_1", "column_2"]] # slice columns by names
df.loc[:, [bool_vector]] # slice columns by names
df[col] = *value # modify column contents, if the column is missing it will be created
df[ : ] = *value # modify rows contents
df[condition] = *value # modify contents
del df[col] # delete column
```
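A minimal sketch of DataFrame construction and the selection forms above (column names and values are illustrative):

```py
import pandas as pd

df = pd.DataFrame({"city": ["Rome", "Milan"], "pop": [2.8, 1.4]}, index=["a", "b"])

print(df["city"])         # column selection
print(df.loc["a"])        # row selection by label
print(df.iloc[1])         # row selection by integer position
print(df[df["pop"] > 2])  # rows matching a condition
```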
### DATAFRAME ATTRIBUTES
```py
df.index # row labels
df.columns # column labels
df.values # NumPy representation of DataFrame
df.index.name = "index name"
df.columns.name = "columns name"
df.T # transpose
```
### DATAFRAME METHODS
```py
pd.DataFrame.isin(self, values) # boolean DataFrame showing whether each element in the DataFrame matches elements in values exactly
# Conform DataFrame to new index, new object produced unless the new index is equivalent to current one and copy=False
pd.DataFrame.reindex(self, index=None, columns=None, **kwargs)
# INDEX: {array} -- new labels / index
# COLUMNS: {array} -- new labels / columns
# METHOD: {none (don't fill gaps), pad (fill or carry values forward), backfill (fill or carry values backward)}-- hole filling method
# COPY: {bool} -- return new object even if index is same -- DEFAULT True
# FILLVALUE: {scalar} --value to use for missing values. DEFAULT NaN
pd.DataFrame.drop(self, index=None, columns=None, **kwargs) # Remove rows or columns by specifying label names
# INPLACE: {bool} -- if true do operation in place and return None -- DEFAULT False
# ERRORS: {ignore, raise} -- If "ignore", suppress error and existing labels are dropped
# KeyError raised if not all of the labels are found in the selected axis
```
## INDEX OBJECTS
Holds axis labels and metadata, immutable.
### INDEX TYPES
```py
pd.Index # immutable ordered ndarray, sliceable. stores axis labels
pd.Int64Index # special case of Index with purely integer labels
pd.MultiIndex # multi-level (hierarchical) index object for pandas objects
pd.PeriodIndex # immutable ndarray holding ordinal values indicating regular periods in time
pd.DatetimeIndex # nanosecond timestamps (uses Numpy datetime64)
```
### INDEX ATTRIBUTES
```py
pd.Index.is_monotonic_increasing # Return True if the index is monotonic increasing (only equal or increasing) values
pd.Index.is_monotonic_decreasing # Return True if the index is monotonic decreasing (only equal or decreasing) values
pd.Index.is_unique # Return True if the index has unique values.
pd.Index.hasnans # Return True if the index has NaNs
```
### INDEX METHODS
```py
pd.Index.append(self, other) # append a collection of Index options together
pd.Index.difference(self, other, sort=None) # set difference of two Index objects
# SORT: {None (attempt sorting), False (don't sort)}
pd.Index.intersection(self, other, sort=None) # set intersection of two Index objects
# SORT: {None (attempt sorting), False (don't sort)}
pd.Index.union(self, other, sort=None) # set union of two Index objects
# SORT: {None (attempt sorting), False (don't sort)}
pd.Index.isin(self, values, level=None) # boolean array indicating where the index values are in values
pd.Index.insert(self, loc, item) # make new Index inserting new item at location
pd.Index.delete(self, loc) # make new Index with passed location(-s) deleted
pd.Index.drop(self, labels, errors='raise') # Make new Index with passed list of labels deleted
# ERRORS: {ignore, raise} -- If 'ignore', suppress error and existing labels are dropped
# KeyError raised if not all of the labels are found in the selected axis
pd.Index.reindex(self, target, **kwargs) # create index with target's values (move/add/delete values as necessary)
# METHOD: {none (don't fill gaps), pad (fill or carry values forward), backfill (fill or carry values backward)}-- hole filling method
```
## ARITHMETIC OPERATIONS
NumPy arrays operations preserve labels-value link.
Arithmetic operations automatically align differently indexed data.
Missing values propagate in arithmetic computations (NaN `<operator>` value = NaN)
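A small sketch of index alignment and NaN propagation (labels and values are illustrative):

```py
import pandas as pd

s1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
s2 = pd.Series([10, 20], index=["b", "c"])

print(s1 + s2)                   # a: NaN, b: 12.0, c: 23.0 -- unaligned label propagates NaN
print(s1.add(s2, fill_value=0))  # a: 1.0, b: 12.0, c: 23.0 -- missing side treated as 0
```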
### ADDITION
```py
self + other
pd.Series.add(self, other, fill_value=None) # add(), supports substitution of NaNs
pd.Series.radd(self, other, fill_value=None) # radd(), supports substitution of NaNs
pd.DataFrame.add(self, other, axis=columns, fill_value=None) # add(), supports substitution of NaNs
pd.DataFrame.radd(self, other, axis=columns, fill_value=None) # radd(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### SUBTRACTION
```py
self - other
pd.Series.sub(self, other, fill_value=None) # sub(), supports substitution of NaNs
pd.Series.rsub(self, other, fill_value=None) # rsub(), supports substitution of NaNs
pd.DataFrame.sub(self, other, axis=columns, fill_value=None) # sub(), supports substitution of NaNs
pd.DataFrame.rsub(self, other, axis=columns, fill_value=None) # rsub(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### MULTIPLICATION
```py
self * other
pd.Series.mul(self, other, fill_value=None) # mul(), supports substitution of NaNs
pd.Series.rmul(self, other, fill_value=None) # rmul(), supports substitution of NaNs
pd.DataFrame.mul(self, other, axis=columns, fill_value=None) # mul(), supports substitution of NaNs
pd.DataFrame.rmul(self, other, axis=columns, fill_value=None) # rmul(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### DIVISION (float division)
```py
self / other
pd.Series.div(self, other, fill_value=None) # div(), supports substitution of NaNs
pd.Series.rdiv(self, other, fill_value=None) # rdiv(), supports substitution of NaNs
pd.Series.truediv(self, other, fill_value=None) # truediv(), supports substitution of NaNs
pd.Series.rtruediv(self, other, fill_value=None) # rtruediv(), supports substitution of NaNs
pd.DataFrame.div(self, other, axis=columns, fill_value=None) # div(), supports substitution of NaNs
pd.DataFrame.rdiv(self, other, axis=columns, fill_value=None) # rdiv(), supports substitution of NaNs
pd.DataFrame.truediv(self, other, axis=columns, fill_value=None) # truediv(), supports substitution of NaNs
pd.DataFrame.rtruediv(self, other, axis=columns, fill_value=None) # rtruediv(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### FLOOR DIVISION
```py
self // other
pd.Series.floordiv(self, other, fill_value=None) # floordiv(), supports substitution of NaNs
pd.Series.rfloordiv(self, other, fill_value=None) # rfloordiv(), supports substitution of NaNs
pd.DataFrame.floordiv(self, other, axis=columns, fill_value=None) # floordiv(), supports substitution of NaNs
pd.DataFrame.rfloordiv(self, other, axis=columns, fill_value=None) # rfloordiv(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### MODULO
```py
self % other
pd.Series.mod(self, other, fill_value=None) # mod(), supports substitution of NaNs
pd.Series.rmod(self, other, fill_value=None) # rmod(), supports substitution of NaNs
pd.DataFrame.mod(self, other, axis=columns, fill_value=None) # mod(), supports substitution of NaNs
pd.DataFrame.rmod(self, other, axis=columns, fill_value=None) # rmod(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
### POWER
```py
self ** other
pd.Series.pow(self, other, fill_value=None) # pow(), supports substitution of NaNs
pd.Series.rpow(self, other, fill_value=None) # rpow(), supports substitution of NaNs
pd.DataFrame.pow(self, other, axis=columns, fill_value=None) # pow(), supports substitution of NaNs
pd.DataFrame.rpow(self, other, axis=columns, fill_value=None) # rpow(), supports substitution of NaNs
# OTHER: {scalar, sequence, Series, DataFrame}
# AXIS: {0, 1, index, columns} -- whether to compare by the index or columns
# FILLVALUE: {None, float} -- fill missing value
```
## ESSENTIAL FUNCTIONALITY
### FUNCTION APPLICATION AND MAPPING
NumPy ufuncs work fine with pandas objects.
```py
pd.DataFrame.applymap(self, func) # apply function element-wise
pd.DataFrame.apply(self, func, axis=0, args=()) # apply a function along an axis of a DataFrame
# FUNC: {function} -- function to apply
# AXIS: {0, 1, index, columns} -- axis along which the function is applied
# ARGS: {tuple} -- positional arguments to pass to func in addition to the array/series
# SORTING AND RANKING
pd.Series.sort_index(self, ascending=True, **kwargs) # sort Series by index labels
pd.Series.sort_values(self, ascending=True, **kwargs) # sort Series by the values
# ASCENDING: {bool} -- if True, sort values in ascending order, otherwise descending -- DEFAULT True
# INPLACE: {bool} -- if True, perform operation in-place
# KIND: {quicksort, mergesort, heapsort} -- sorting algorithm
# NA_POSITION {first, last} -- 'first' puts NaNs at the beginning, 'last' puts NaNs at the end
pd.DataFrame.sort_index(self, axis=0, ascending=True, **kwargs) # sort object by labels along an axis
pd.DataFrame.sort_values(self, axis=0, ascending=True, **kwargs) # sort object by values along an axis
# AXIS: {0, 1, index, columns} -- the axis along which to sort
# ASCENDING: {bool} -- if True, sort values in ascending order, otherwise descending -- DEFAULT True
# INPLACE: {bool} -- if True, perform operation in-place
# KIND: {quicksort, mergesort, heapsort} -- sorting algorithm
# NA_POSITION {first, last} -- 'first' puts NaNs at the beginning, 'last' puts NaNs at the end
```
## DESCRIPTIVE AND SUMMARY STATISTICS
### COUNT
```py
pd.Series.count(self) # return number of non-NA/null observations in the Series
pd.DataFrame.count(self, numeric_only=False) # count non-NA cells for each column or row
# NUMERIC_ONLY: {bool} -- Include only float, int or boolean data -- DEFAULT False
```
### DESCRIBE
Generate descriptive statistics summarizing central tendency, dispersion and shape of dataset's distribution (exclude NaN).
```py
pd.Series.describe(self, percentiles=None, include=None, exclude=None)
pd.DataFrame.describe(self, percentiles=None, include=None, exclude=None)
# PERCENTILES: {list-like of numbers} -- percentiles to include in output, between 0 and 1 -- DEFAULT [.25, .5, .75]
# INCLUDE: {all, None, list of dtypes} -- white list of dtypes to include in the result. ignored for Series
# EXCLUDE: {None, list of dtypes} -- black list of dtypes to omit from the result. ignored for Series
```
### MAX - MIN
```py
pd.Series.max(self, skipna=None, numeric_only=None) # maximum of the values for the requested axis
pd.Series.min(self, skipna=None, numeric_only=None) # minimum of the values for the requested axis
pd.DataFrame.max(self, axis=None, skipna=None, numeric_only=None) # maximum of the values for the requested axis
pd.DataFrame.min(self, axis=None, skipna=None, numeric_only=None) # minimum of the values for the requested axis
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### IDXMAX - IDXMIN
```py
pd.Series.idxmax(self, skipna=True) # row label of the maximum value
pd.Series.idxmin(self, skipna=True) # row label of the minimum value
pd.DataFrame.idxmax(self, axis=0, skipna=True) # Return index of first occurrence of maximum over requested axis
pd.DataFrame.idxmin(self, axis=0, skipna=True) # Return index of first occurrence of minimum over requested axis
# AXIS:{0, 1, index, columns} -- row-wise or column-wise
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, result will be NA
```
### QUANTILE
```py
pd.Series.quantile(self, q=0.5, interpolation='linear') # return values at the given quantile
pd.DataFrame.quantile(self, q=0.5, axis=0, numeric_only=True, interpolation='linear') # return values at the given quantile over requested axis
# Q: {float, array} -- value between 0 <= q <= 1, the quantile(s) to compute -- DEFAULT 0.5 (50%)
# NUMERIC_ONLY: {bool} -- if False, quantile of datetime and timedelta data will be computed as well
# INTERPOLATION: {linear, lower, higher, midpoint, nearest} -- SEE DOCS
```
### SUM
```py
pd.Series.sum(self, skipna=None, numeric_only=None, min_count=0) # sum of the values
pd.DataFrame.sum(self, axis=None, skipna=None, numeric_only=None, min_count=0) # sum of the values for the requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
# MIN_COUNT: {int} -- required number of valid values to perform the operation. if fewer than min_count non-NA values are present the result will be NA
```
### MEAN
```py
pd.Series.mean(self, skipna=None, numeric_only=None) # mean of the values
pd.DataFrame.mean(self, axis=None, skipna=None, numeric_only=None) # mean of the values for the requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### MEDIAN
```py
pd.Series.median(self, skipna=None, numeric_only=None) # median of the values
pd.DataFrame.median(self, axis=None, skipna=None, numeric_only=None) # median of the values for the requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### MAD (mean absolute deviation)
```py
pd.Series.mad(self, skipna=None) # mean absolute deviation
pd.DataFrame.mad(self, axis=None, skipna=None) # mean absolute deviation of the values for the requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
```
### VAR (variance)
```py
pd.Series.var(self, skipna=None, numeric_only=None) # unbiased variance
pd.DataFrame.var(self, axis=None, skipna=None, ddof=1, numeric_only=None) # unbiased variance over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
# DDOF: {int} -- Delta Degrees of Freedom. divisor used in calculations is N - ddof (N represents the number of elements) -- DEFAULT 1
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### STD (standard deviation)
```py
pd.Series.std(self, skipna=None, ddof=1, numeric_only=None) # sample standard deviation
pd.DataFrame.std(self, axis=None, skipna=None, ddof=1, numeric_only=None) # sample standard deviation over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
# DDOF: {int} -- Delta Degrees of Freedom. divisor used in calculations is N - ddof (N represents the number of elements) -- DEFAULT 1
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### SKEW
```py
pd.Series.skew(self, skipna=None, numeric_only=None) # unbiased skew, normalized by N-1
pd.DataFrame.skew(self, axis=None, skipna=None, numeric_only=None) # unbiased skew over requested axis Normalized by N-1
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### KURT
Unbiased kurtosis over requested axis using Fisher's definition of kurtosis (kurtosis of normal == 0.0). Normalized by N-1.
```py
pd.Series.kurt(self, skipna=None, numeric_only=None)
pd.DataFrame.kurt(self, axis=None, skipna=None, numeric_only=None)
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values when computing the result
# NUMERIC_ONLY: {bool} -- include only float, int, boolean columns, not implemented for Series
```
### CUMSUM (cumulative sum)
```py
pd.Series.cumsum(self, skipna=True) # cumulative sum
pd.DataFrame.cumsum(self, axis=None, skipna=True) # cumulative sum over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
```
### CUMMAX - CUMMIN (cumulative maximum - minimum)
```py
pd.Series.cummax(self, skipna=True) # cumulative maximum
pd.Series.cummin(self, skipna=True) # cumulative minimum
pd.DataFrame.cummax(self, axis=None, skipna=True) # cumulative maximum over requested axis
pd.DataFrame.cummin(self, axis=None, skipna=True) # cumulative minimum over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
```
### CUMPROD (cumulative product)
```py
pd.Series.cumprod(self, skipna=True) # cumulative product
pd.DataFrame.cumprod(self, axis=None, skipna=True) # cumulative product over requested axis
# AXIS: {0, 1, index, columns} -- axis for the function to be applied on
# SKIPNA: {bool} -- exclude NA/null values. if an entire row/column is NA, the result will be NA
```
### DIFF
Calculates the difference of a DataFrame element compared with another element in the DataFrame.
(default is the element in the same column of the previous row)
```py
pd.Series.diff(self, periods=1)
pd.DataFrame.diff(self, periods=1, axis=0)
# PERIODS: {int} -- Periods to shift for calculating difference, accepts negative values -- DEFAULT 1
# AXIS: {0, 1, index, columns} -- Take difference over rows or columns
```
### PCT_CHANGE
Percentage change between the current and a prior element.
```py
pd.Series.pct_change(self, periods=1, fill_method='pad', limit=None, freq=None)
pd.DataFrame.pct_change(self, periods=1, fill_method='pad', limit=None)
# PERIODS: {int} -- periods to shift for forming percent change
# FILL_METHOD: {str, pad} -- how to handle NAs before computing percent changes -- DEFAULT pad
# LIMIT: {int} -- number of consecutive NAs to fill before stopping -- DEFAULT None
```
## HANDLING MISSING DATA
### FILTERING OUT MISSING DATA
```py
pd.Series.dropna(self, inplace=False) # return a new Series with missing values removed
pd.DataFrame.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False) # return a new DataFrame with missing values removed
# AXIS: {0, 1, index, columns} -- determine if rows or columns containing missing values are removed. only a single axis is allowed
# HOW: {any, all} -- determine if row or column is removed from DataFrame (ANY = if any NA present, ALL = if all values are NA). DEFAULT any
# THRESH: {int} -- require that many non-NA values
# SUBSET: {array} -- labels along other axis to consider
# INPLACE: {bool} -- if True, do operation inplace and return None -- DEFAULT False
```
### FILLING IN MISSING DATA
Fill NA/NaN values using the specified method.
```py
pd.Series.fillna(self, value=None, method=None, inplace=False, limit=None)
pd.DataFrame.fillna(self, value=None, method=None, axis=None, inplace=False, limit=None)
# VALUE: {scalar, dict, Series, DataFrame} -- value to use to fill holes, dict/Series/DataFrame specifying which value to use for each index or column
# METHOD: {backfill, pad, None} -- method to use for filling holes -- DEFAULT None
# AXIS: {0, 1, index, columns} -- axis along which to fill missing values
# INPLACE: {bool} -- if true fill in-place (will modify views of object) -- DEFAULT False
# LIMIT: {int} -- maximum number of consecutive NaN values to forward/backward fill -- DEFAULT None
```
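A short sketch of dropping and filling missing values (the frame is illustrative):

```py
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, np.nan, 6.0]})

print(df.dropna())              # keep only rows without NaNs
print(df.dropna(how="all"))     # drop rows where every value is NaN
print(df.fillna(0))             # replace NaNs with 0
print(df.fillna(method="pad"))  # forward-fill from the previous row
```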
## HIERARCHICAL INDEXING (MultiIndex)
Enables storing and manipulation of data with an arbitrary number of dimensions.
In lower dimensional data structures like Series (1d) and DataFrame (2d).
### MULTIINDEX CREATION
```py
pd.MultiIndex.from_arrays(*arrays, names=None) # convert arrays to MultiIndex
pd.MultiIndex.from_tuples(*tuples, names=None) # convert tuples to MultiIndex
pd.MultiIndex.from_frame(df, names=None) # convert DataFrame to MultiIndex
pd.MultiIndex.from_product(*iterables, names=None) # MultiIndex from cartesian product of iterables
pd.Series(*arrays) # Index constructor makes MultiIndex from Series
pd.DataFrame(*arrays) # Index constructor makes MultiIndex from DataFrame
```
### MULTIINDEX LEVELS
Vector of label values for requested level, equal to the length of the index.
```py
pd.MultiIndex.get_level_values(self, level)
```
### PARTIAL AND CROSS-SECTION SELECTION
Partial selection "drops" levels of the hierarchical index in the result in a completely analogous way to selecting a column in a regular DataFrame.
```py
pd.Series.xs(self, key, axis=0, level=None, drop_level=True) # cross-section from Series
pd.DataFrame.xs(self, key, axis=0, level=None, drop_level=True) # cross-section from DataFrame
# KEY: {label, tuple of label} -- label contained in the index, or partially in a MultiIndex
# AXIS: {0, 1, index, columns} -- axis to retrieve cross-section on -- DEFAULT 0
# LEVEL: -- in case of key partially contained in MultiIndex, indicate which levels are used. Levels referred by label or position
# DROP_LEVEL: {bool} -- If False, returns object with same levels as self -- DEFAULT True
```
### INDEXING, SLICING
Multi index keys take the form of tuples.
```py
df.loc[('lvl_1', 'lvl_2', ...)] # selection of single row
df.loc[('idx_lvl_1', 'idx_lvl_2', ...), ('col_lvl_1', 'col_lvl_2', ...)] # selection of single value
df.loc['idx_lvl_1':'idx_lvl_1'] # slice of rows (aka partial selection)
df.loc[('idx_lvl_1', 'idx_lvl_2') : ('idx_lvl_1', 'idx_lvl_2')] # slice of rows with levels
```
### REORDERING AND SORTING LEVELS
```py
pd.MultiIndex.swaplevel(self, i=-2, j=-1) # swap level i with level j
pd.Series.swaplevel(self, i=-2, j=-1) # swap levels i and j in a MultiIndex
pd.DataFrame.swaplevel(self, i=-2, j=-1, axis=0) # swap levels i and j in a MultiIndex on a particular axis
pd.MultiIndex.sortlevel(self, level=0, ascending=True, sort_remaining=True) # sort MultiIndex at requested level
# LEVEL: {str, int, list-like} -- DEFAULT 0
# ASCENDING: {bool} -- if True, sort values in ascending order, otherwise descending -- DEFAULT True
# SORT_REMAINING: {bool} -- sort by the remaining levels after level
```
## DATA LOADING, STORAGE FILE FORMATS
```py
pd.read_fwf(filepath, colspecs='infer', widths=None, infer_nrows=100) # read a table of fixed-width formatted lines into DataFrame
# FILEPATH: {str, path object} -- any valid string path is acceptable, could be a URL. Valid URLs: http, ftp, s3, and file
# COLSPECS: {list of tuple (int, int), 'infer'} -- list of tuples giving extents of fixed-width fields of each line as half-open intervals { [from, to) }
# WIDTHS: {list of int} -- list of field widths which can be used instead of "colspecs" if intervals are contiguous
# INFER_NROWS: {int} -- number of rows to consider when letting parser determine colspecs -- DEFAULT 100
pd.read_excel() # read an Excel file into a pandas DataFrame
pd.read_json() # convert a JSON string to pandas object
pd.read_html() # read HTML tables into a list of DataFrame objects
pd.read_sql() # read SQL query or database table into a DataFrame
pd.read_csv(filepath, sep=',', *args, **kwargs ) # read a comma-separated values (csv) file into DataFrame
pd.read_table(filepath, sep='\t', *args, **kwargs) # read general delimited file into DataFrame
# FILEPATH: {str, path object} -- any valid string path is acceptable, could be a URL. Valid URLs: http, ftp, s3, and file
# SEP: {str} -- delimiter to use -- DEFAULT \t (tab)
# HEADER {int, list of int, 'infer'} -- row numbers to use as column names, and the start of the data -- DEFAULT 'infer'
# NAMES:{array} -- list of column names to use -- DEFAULT None
# INDEX_COL: {int, str, False, sequence of int/str, None} -- Columns to use as row labels of DataFrame, given as string name or column index -- DEFAULT None
# SKIPROWS: {list-like, int, callable} -- Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file
# NA_VALUES: {scalar, str, list-like, dict} -- additional strings to recognize as NA/NaN. if dict passed, specific per-column NA values
# THOUSANDS: {str} -- thousand separator
# *ARGS, **KWARGS -- SEE DOCS
# write object to a comma-separated values (csv) file
pd.DataFrame.to_csv(self, path_or_buf, sep=',', na_rep='', columns=None, header=True, index=True, encoding='utf-8', line_terminator=None, decimal='.', *args, **kwargs)
# SEP: {str len 1} -- Field delimiter for the output file
# NA_REP: {str} -- missing data representation
# COLUMNS: {sequence} -- columns to write
# HEADER: {bool, list of str} -- write out column names. if a list of strings is given it's assumed to be aliases for the column names
# INDEX: {bool, list of str} -- write out row names (index)
# ENCODING: {str} -- string representing encoding to use -- DEFAULT "utf-8"
# LINE_TERMINATOR: {str} -- newline character or character sequence to use in the output file -- DEFAULT os.linesep
# DECIMAL: {str} -- character recognized as decimal separator (in EU ,)
pd.DataFrame.to_excel()
pd.DataFrame.to_json()
pd.DataFrame.to_html()
pd.DataFrame.to_sql()
```
@ -0,0 +1,146 @@
# Requests Lib
## GET REQUEST
Get or retrieve data from specified resource
```py
response = requests.get('URL') # returns response object
# PAYLOAD -> valuable information of response
response.status_code # http status code
```
The response message consists of:
- status line which includes the status code and reason message
- response header fields (e.g., Content-Type: text/html)
- empty line
- optional message body
```text
1xx -> INFORMATIONAL RESPONSE
2xx -> SUCCESS
200 OK -> request successful
3xx -> REDIRECTION
4xx -> CLIENT ERRORS
404 NOT FOUND -> resource not found
5xx -> SERVER ERRORS
```
```py
# raise exception HTTPError for error status codes
response.raise_for_status()
response.content # raw bytes of payload
response.encoding = 'utf-8' # specify encoding
response.text # string payload (serialized JSON)
response.json() # dict of payload
response.headers # response headers (dict)
```
### QUERY STRING PARAMETERS
```py
response = requests.get('URL', params={'q':'query'})
response = requests.get('URL', params=[('q', 'query')])
response = requests.get('URL', params=b'q=query')
```
### REQUEST HEADERS
```py
response = requests.get(
'URL',
params={'q': 'query'},
headers={'header': 'header_query'}
)
```
## OTHER HTTP METHODS
### DATA INPUT
```py
# requests that the enclosed entity be stored as a new subordinate of the web resource identified by the URI
requests.post('URL', data={'key':'value'})
# requests that the enclosed entity be stored under the supplied URI
requests.put('URL', data={'key':'value'})
# applies partial modification
requests.patch('URL', data={'key':'value'})
# deletes specified resource
requests.delete('URL')
# ask for a response but without the response body (only headers)
requests.head('URL')
# returns supported HTTP methods of the server
requests.options('URL')
```
### SENDING JSON DATA
```py
requests.post('URL', json={'key': 'value'})
```
### INSPECTING THE REQUEST
```py
# the requests lib prepares the request before sending it
response = requests.post('URL', data={'key':'value'})
response.request.something # inspect request field
```
## AUTHENTICATION
```py
requests.get('URL', auth=('username', 'password')) # use implicit HTTP Basic Authorization
# explicit HTTP Basic Authorization and other
from requests.auth import HTTPBasicAuth, HTTPDigestAuth, HTTPProxyAuth
from getpass import getpass
requests.get('URL', auth=HTTPBasicAuth('username', getpass()))
```
### PERSONALIZED AUTH
```py
from requests.auth import AuthBase
class TokenAuth(AuthBase):
    """custom authentication scheme"""

    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        """Attach API token to custom auth"""
        r.headers['X-TokenAuth'] = f'{self.token}'
        return r
requests.get('URL', auth=TokenAuth('1234abcde-token'))
```
### DISABLING SSL VERIFICATION
```py
requests.get('URL', verify=False)
```
## PERFORMANCE
### REQUEST TIMEOUT
```py
# raise Timeout exception if request times out
requests.get('URL', timeout=(connection_timeout, read_timeout))
```
### MAX RETRIES
```py
from requests.adapters import HTTPAdapter
URL_adapter = HTTPAdapter(max_retries = int)
session = requests.Session()
# use URL_adapter for all requests to URL
session.mount('URL', URL_adapter)
```
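Putting timeout and retries together, a hedged sketch of a more robust GET (the URL and values are placeholders):

```py
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
session.mount('https://', HTTPAdapter(max_retries=3))  # retry transient connection errors

try:
    response = session.get('https://example.com/api', timeout=(3.05, 27))
    response.raise_for_status()
    data = response.json()
except requests.exceptions.RequestException as err:
    print(f"request failed: {err}")
```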
218
docs/python/libs/seaborn.md Normal file
@ -0,0 +1,218 @@
# Seaborn Lib
## Basic Imports For Seaborn
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# set aesthetic parameters in one step
sns.set(style='darkgrid')
#STYLE: {None, darkgrid, whitegrid, dark, white, ticks}
```
## RELPLOT (relationship)
```python
sns.relplot(x='name_in_data', y='name_in_data', hue='point_color', size='point_size', style='point_shape', data=data)
# HUE, SIZE and STYLE: {name in data} -- used to differentiate points, a sort-of 3rd dimension
# hue behaves differently if the data is categorical or numerical, numerical uses a color gradient
# SORT: {False, True} -- avoid sorting data in function of x
# CI: {None, sd} -- avoid computing confidence intervals or plot standard deviation
# (aggregate multiple measurements at each x value by plotting the mean and the 95% confidence interval around the mean)
# ESTIMATOR: {None} -- turn off aggregation of multiple observations
# MARKERS: {True, False} -- mark observations with dots
# DASHES: {True, False} -- mark observations with dashes
# COL, ROW: {name in data} -- categorical variables that will determine the grid of plots
# COL_WRAP: {int} -- "Wrap" the column variable at this width, so that the column facets span multiple rows. Incompatible with a row facet.
# SCATTERPLOT
# depicts the joint distribution of two variables using a cloud of points
# kind can be omitted since scatterplot is the default for relplot
sns.relplot(kind='scatter') # calls scatterplot()
sns.scatterplot() # underlying axis-level function of relplot()
```
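A minimal sketch using one of seaborn's bundled example datasets (`tips` is fetched by `load_dataset` on first use):

```python
import seaborn as sns
import matplotlib.pyplot as plt

sns.set(style='darkgrid')
tips = sns.load_dataset('tips')  # bundled example dataset

sns.relplot(x='total_bill', y='tip', hue='smoker', style='time', data=tips)
plt.show()
```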
### LINEPLOT
Using semantics in lineplot will determine the aggregation of data.
```python
sns.relplot(ci=None, sort=bool, kind='line')
sns.lineplot() # underlying axis-level function of relplot()
```
## CATPLOT (categorical)
Categorical: divided into discrete groups.
```python
sns.catplot(x='name_in_data', y='name_in_data', data=data)
# HUE: {name in data} -- used to differentiate points, a sort-of 3rd dimension
# COL, ROW: {name in data} -- categorical variables that will determine the grid of plots
# COL_WRAP: {int} -- "Wrap" the column variable at this width, so that the column facets span multiple rows. Incompatible with a row facet.
# ORDER, HUE_ORDER: {list of strings} -- order of categorical levels of the plot
# ROW_ORDER, COL_ORDER: {list of strings} -- order to organize the rows and/or columns of the grid in
# ORIENT: {'v', 'h'} -- Orientation of the plot (can also swap x&y assignment)
# COLOR: {matplotlib color} -- Color for all of the elements, or seed for a gradient palette
# CATEGORICAL SCATTERPLOT - STRIPPLOT
# adjust the positions of points on the categorical axis with a small amount of random “jitter”
sns.catplot(kind='strip', jitter=float)
sns.stripplot()
# SIZE: {float} -- Diameter of the markers, in points
# JITTER: {False, float} -- magnitude of points jitter (distance from axis)
```
### CATEGORICAL SCATTERPLOT - SWARMPLOT
Adjusts the points along the categorical axis preventing overlap.
```py
sns.catplot(kind='swarm')
sns.swarmplot()
# SIZE: {float} -- Diameter of the markers, in points
# CATEGORICAL DISTRIBUTION - BOXPLOT
# shows the three quartile values of the distribution along with extreme values
sns.catplot(kind='box')
sns.boxplot()
# HUE: {name in data} -- box for each level of the semantic moved along the categorical axis so they don't overlap
# DODGE: {bool} -- whether elements should be shifted along the categorical axis if hue is used
```
### CATEGORICAL DISTRIBUTION - VIOLINPLOT
Combines a boxplot with the kernel density estimation procedure.
```py
sns.catplot(kind='violin')
sns.violinplot()
```
### CATEGORICAL DISTRIBUTION - BOXENPLOT
Plot similar to boxplot but optimized for showing more information about the shape of the distribution.
It is best suited for larger datasets.
```py
sns.catplot(kind='boxen')
sns.boxenplot()
```
### CATEGORICAL ESTIMATE - POINTPLOT
Show point estimates and confidence intervals using scatter plot glyphs.
```py
sns.catplot(kind='point')
sns.pointplot()
# CI: {float, sd} -- size of confidence intervals to draw around estimated values, sd -> standard deviation
# MARKERS: {string, list of strings} -- markers to use for each of the hue levels
# LINESTYLES: {string, list of strings} -- line styles to use for each of the hue levels
# DODGE: {bool, float} -- amount to separate the points for each hue level along the categorical axis
# JOIN: {bool} -- if True, lines will be drawn between point estimates at the same hue level
# SCALE: {float} -- scale factor for the plot elements
# ERRWIDTH: {float} -- thickness of error bar lines (and caps)
# CAPSIZE: {float} -- width of the "caps" on error bars
```
### CATEGORICAL ESTIMATE - BARPLOT
Show point estimates and confidence intervals as rectangular bars.
```py
sns.catplot(kind='bar')
sns.barplot()
# CI: {float, sd} -- size of confidence intervals to draw around estimated values, sd -> standard deviation
# ERRCOLOR: {matplotlib color} -- color for the lines that represent the confidence interval
# ERRWIDTH: {float} -- thickness of error bar lines (and caps)
# CAPSIZE: {float} -- width of the "caps" on error bars
# DODGE: {bool} -- whether elements should be shifted along the categorical axis if hue is used
```
### CATEGORICAL ESTIMATE - COUNTPLOT
Show the counts of observations in each categorical bin using bars.
```py
sns.catplot(kind='count')
sns.countplot()
# DODGE: {bool} -- whether elements should be shifted along the categorical axis if hue is used
```
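A small sketch of the categorical plots above, assuming the bundled `titanic` dataset:
```python
titanic = sns.load_dataset('titanic')  # assumes the imports at the top of this page
sns.catplot(x='class', hue='sex', kind='count', data=titanic)  # counts of observations per class
sns.catplot(x='class', y='age', kind='box', data=titanic)      # age distribution per class
plt.show()
```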
## UNIVARIATE DISTRIBUTIONS
### DISTPLOT
Flexibly plot a univariate distribution of observations
```py
# A: {series, 1d-array, list}
sns.distplot(a=data)
# BINS: {None, arg for matplotlib hist()} -- specification of hist bins, or None to use Freedman-Diaconis rule
# HIST: {bool} - whether to plot a (normed) histogram
# KDE: {bool} - whether to plot a gaussian kernel density estimate
# HIST_KWD, KDE_KWD, RUG_KWD: {dict} -- keyword arguments for underlying plotting functions
# COLOR: {matplotlib color} -- color to plot everything but the fitted curve in
```
### RUGPLOT
Plot datapoints in an array as sticks on an axis.
```py
# A: {vector} -- 1D array of observations
sns.rugplot(a=data) # -> axes obj with plot on it
# HEIGHT: {scalar} -- height of ticks as proportion of the axis
# AXIS: {'x', 'y'} -- axis to draw rugplot on
# AX: {matplotlib axes} -- axes to draw plot into, otherwise grabs current axes
```
### KDEPLOT
Fit and plot a univariate or bivariate kernel density estimate.
```py
# DATA: {1D array-like} -- input data
sns.kdeplot(data=data)
# DATA2 {1D array-like} -- second input data. if present, a bivariate KDE will be estimated.
# SHADE: {bool} -- if True, shade-in the area under KDE curve (or draw with filled contours is bivariate)
```
## BIVARIATE DISTRIBUTION
### JOINTPLOT
Draw a plot of two variables with bivariate and univariate graphs.
```py
# X, Y: {string, vector} -- data or names of variables in data
sns.jointplot(x=data, y=data)
# DATA:{pandas DataFrame} -- DataFrame when x and y are variable names
# KIND: {'scatter', 'reg', 'resid', 'kde', 'hex'} -- kind of plot to draw
# COLOR: {matplotlib color} -- color used for plot elements
# HEIGHT: {numeric} -- size of figure (it will be square)
# RATIO: {numeric} -- ratio of joint axes height to marginal axes height
# SPACE: {numeric} -- space between the joint and marginal axes
# JOINT_KWD, MARGINAL_KWD, ANNOT_KWD: {dict} -- additional keyword arguments for the plot components
```
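A possible usage sketch, again assuming the bundled `tips` dataset:
```python
tips = sns.load_dataset('tips')  # assumes the imports at the top of this page
sns.jointplot(x='total_bill', y='tip', data=tips, kind='hex')  # hexbin joint plot with marginal histograms
plt.show()
```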
## PAIR-WISE RELATIONSHIPS IN DATASET
### PAIRPLOT
Plot pairwise relationships in a dataset.
```py
# DATA: {pandas DataFrame} -- tidy (long-form) dataframe where each column is a variable and each row is an observation
sns.pairplot(data=pd.DataFrame)
# HUE: {string (variable name)} -- variable in data to map plot aspects to different colors
# HUE_ORDER: {list of strings} -- order for the levels of the hue variable in the palette
# VARS: {list of variable names} -- variables within data to use, otherwise every column with numeric datatype
# X_VARS, Y_VARS: {list of variable names} -- variables within data to use separately for rows and columns of figure
# KIND: {'scatter', 'reg'} -- kind of plot for the non-identity relationships
# DIAG_KIND: {'auto', 'hist', 'kde'} -- Kind of plot for the diagonal subplots. default depends on hue
# MARKERS: {matplotlib marker or list}
# HEIGHT:{scalar} -- height (in inches) of each facet
# ASPECT: {scalar} -- aspect * height gives the width (in inches) of each facet
```
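A possible usage sketch, assuming the bundled `iris` dataset:
```python
iris = sns.load_dataset('iris')  # assumes the imports at the top of this page
sns.pairplot(data=iris, hue='species', diag_kind='kde')  # scatter matrix with KDEs on the diagonal
plt.show()
```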

579
docs/python/libs/tkinter.md Normal file
View file

@ -0,0 +1,579 @@
# Tkinter Module/Library
## Standard Imports
```py
from tkinter import * # import Python Tk Binding
from tkinter import ttk # import Themed Widgets
```
## GEOMETRY MANAGEMENT
Putting widgets on screen
master widget --> top-level window, frame
slave widget --> widgets contained in master widget
geometry managers determine the size and drawing order of the widgets
## EVENT HANDLING
event loop receives events from the OS
customizable events provide a callback as a widget configuration
```py
widget.bind('event', function) # method to capture any event and then execute an arbitrary piece of code (generally a function or lambda)
```
VIRTUAL EVENT --> high-level event generated by a widget (listed in widget docs)
## WIDGETS
Widgets are objects; everything on screen is a widget. All widgets are children of a window.
```py
widget_name = tk_object(parent_window) # widget is inserted into widget hierarchy
```
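A minimal sketch tying these pieces together (widget texts and grid positions are arbitrary):
```py
from tkinter import *
from tkinter import ttk

root = Tk()                          # top-level window (master)
frame = ttk.Frame(root, padding=10)  # frame used as container (slave of root)
frame.grid(column=0, row=0)
ttk.Label(frame, text='Hello, Tkinter').grid(column=0, row=0)
ttk.Button(frame, text='Quit', command=root.destroy).grid(column=0, row=1)
root.mainloop()                      # start the event loop
```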
## FRAME WIDGET
Displays a single rectangle, used as container for other widgets
```py
frame = ttk.Frame(parent, width=None, height=None, borderwidth=num:int)
# BORDERWIDTH: sets frame border width (default: 0)
# width, height MUST be specified if frame is empty, otherwise determined by parent geometry manager
```
### FRAME PADDING
Extra space inside widget (margin).
```py
frame['padding'] = num # same padding for every border
frame['padding'] = (horizontal, vertical) # set horizontal THEN vertical padding
frame['padding'] = (left, top, right, bottom) # set left, top, right, bottom padding
# RELIEF: set border style, [flat (default), raised, sunken, solid, ridge, groove]
frame['relief'] = border_style
```
## LABEL WIDGET
Display text or image without interactivity.
```py
label = ttk.Label(parent, text='label text')
```
### DEFINING AN UPDATING LABEL
```py
var = StringVar() # variable containing text, watches for changes. Use get, set methods to interact with the value
label['textvariable'] = var # attach var to label (only of type StringVar)
var.set("new text label") # change label text
```
### DISPLAY IMAGES (2 steps)
```py
image = PhotoImage(file='filename') # create image object
label['image'] = image # use image config
```
### DISPLAY IMAGE AND-OR TEXT
```py
label['compound'] = value
```
Compound value:
- none (img if present, text otherwise)
- text (text only)
- image (image only)
- center (text in center of image)
- top (image above text), left, bottom, right
## LAYOUT
Specifies the edge or corner of the widget that the label's content is attached to.
```py
label['anchor'] = compass_direction #compass_direction: n, ne, e, se, s, sw, w, nw, center
```
### MULTI-LINE TEXT WRAP
```py
# use \n for multi line text
label['wraplength'] = size # max line length
```
### CONTROL TEXT JUSTIFICATION
```py
label['justify'] = value #value: left, center, right
label['relief'] = label_style
label['foreground'] = color # color passed with name or HEX RGB codes
label['background'] = color # color passed with name or HEX RGB codes
```
### FONT STYLE (use with caution)
```py
# used outside style option
label['font'] = font
```
Fonts:
- TkDefaultFont -- default for all GUI items
- TkTextFont -- used for entry widgets, listboxes, etc
- TkFixedFont -- standard fixed-width font
- TkMenuFont -- used for menu items
- TkHeadingFont -- for column headings in lists and tables
- TkCaptionFont -- for window and dialog caption bars
- TkSmallCaptionFont -- smaller caption for subwindows or tool dialogs
- TkIconFont -- for icon caption
- TkTooltipFont -- for tooltips
## BUTTON
Press to perform some action
```py
button = ttk.Button(parent, text='button_text', command=action_performed)
```
### TEXT or IMAGE
```py
button['text/textvariable'], button['image'], button['compound']
```
### BUTTON INVOCATION
```py
button.invoke() # button activation in the program
```
### BUTTON STATE
Activate or deactivate the widget.
```py
button.state(['disabled']) # set the disabled flag, disabling the button
button.state(['!disabled']) # clear the disabled flag
button.instate(['disabled']) # return true if the button is disabled, else false
button.instate(['!disabled']) # return true if the button is not disabled, else false
button.instate(['!disabled'], cmd) # execute 'cmd' if the button is not disabled
# WIDGET STATE FLAGS: active, disabled, focus, pressed, selected, background, readonly, alternate, invalid
```
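A short sketch wiring a button to a callback and disabling it after use (names are placeholders; `root` is assumed to be an existing Tk window, e.g. from the sketch above):
```py
def run_once():
    print('clicked')
    button.state(['disabled'])  # set the disabled flag after the first press

button = ttk.Button(root, text='Run once', command=run_once)
button.grid()
```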
## CHECKBUTTON
Button with a binary value of some kind (e.g. a toggle) that also invokes a command callback.
```py
checkbutton_var = TkVarType
check = ttk.Checkbutton(parent, text='button text', command=action_performed, variable=checkbutton_var, onvalue=value_on, offvalue=value_off)
```
### WIDGET VALUE
The variable option holds the value of the button and is updated when the widget is toggled.
DEFAULT: 1 (while checked), 0 (while unchecked)
`onvalue`, `offvalue` are used to personalize the checked and unchecked values
if the linked variable is empty or different from the on/off values, the state flag is set to alternate
the checkbutton won't initialize the linked variable (MUST be done in the program)
### CONFIG OPTIONS
```py
check['text/textvariable']
check['image']
check['compound']
check.state(['flag'])
check.instate(['flag'])
```
## RADIOBUTTON
Multiple-choice selection (good if options are few).
```py
#RADIOBUTTON CREATION (usually as a set)
radio_var = TkVarType
radio_1 = ttk.Radiobutton(parent, text='button text', variable=radio_var, value=button_1_value)
radio_2 = ttk.Radiobutton(parent, text='button text', variable=radio_var, value=button_2_value)
radio_3 = ttk.Radiobutton(parent, text='button text', variable=radio_var, value=button_3_value)
# if linked value does not exist flag state is alternate
# CONFIG OPTIONS
radio['text/textvariable']
radio['image']
radio['compound']
radio.state(['flag'])
radio.instate(['flag'])
```
## ENTRY
Single line text field accepting a string.
```py
entry_var = StringVar()
entry = ttk.Entry(parent, textvariable=entry_var, width=char_num, show=symbol)
# SHOW: replaces the displayed entry text with symbol, used for passwords
# entries don't have an associated label, a separate label widget is needed
```
### CHANGE ENTRY VALUE
```py
entry.get() # returns entry value
entry.delete(start, 'end') # delete between two indices, 0-based
entry.insert(index, 'text value') # insert new text at a given index
```
### ENTRY CONFIG OPTIONS
```py
entry.state(['flag'])
entry.instate(['flag'])
```
## COMBOBOX
Drop-down list of available options.
```py
combobox_var = StringVar()
combobox = ttk.Combobox(parent, textvariable=combobox_var)
combobox.get() # return combobox current value
combobox.set(value) # set combobox new value
combobox.current() # returns which item in the predefined values list is selected (0-based index of the provided list, -1 if not in the list)
#combobox will generate a bind-able <ComboboxSelected> virtual event whenever the value changes
combobox.bind('<<ComboboxSelected>>', function)
```
### PREDEFINED VALUES
```py
combobox['values'] = (value_1, value_2, ...) # provides a list of choose-able values
combobox.state(['readonly']) # restricts choose-able values to those provided with 'values' config option
# SUGGESTION: call selection clear method on value change (on ComboboxSelected event) to avoid visual oddities
```
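A possible sketch (variable and value names are arbitrary; `root` is assumed to be an existing Tk window):
```py
unit_var = StringVar()
unit = ttk.Combobox(root, textvariable=unit_var, state='readonly')
unit['values'] = ('cm', 'm', 'km')
unit.current(0)                                                      # preselect the first entry
unit.bind('<<ComboboxSelected>>', lambda e: unit.selection_clear())  # avoid visual oddities
unit.grid()
```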
## LISTBOX (Tk Classic)
Display list of single-line items, allows browsing and multiple selection (part of Tk classic, missing in themed Tk widgets).
```py
lstbx = Listbox(parent, height=num, listvariable=item_list:list)
# listvariable links a variable (MUST BE a list) to the listbox, each element is a item of the listbox
# manipulation of the list changes the listbox
```
### SELECTING ITEMS
```py
lstbx['selectmode'] = mode # MODE: browse (single selection), extended (multiple selection)
lstbx.curselection() # returns list of indices of selected items
# on selection change: generate event <ListboxSelect>
# often each string in the program is associated with some other data item
# keep a second list, parallel to the list of strings displayed in the listbox, which will hold the associated objects
# (association by index with .curselection() or with a dict).
```
## SCROLLBAR
```py
scroll = ttk.Scrollbar(parent, orient=direction, command=widget.view)
# ORIENT: VERTICAL, HORIZONTAL
# WIDGET.VIEW: .xview, .yview
# NEEDS ASSOCIATED WIDGET SCROLL CONFIG
widget.configure(xscrollcommand=scroll.set)
widget.configure(yscrollcommand=scroll.set)
```
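A sketch pairing a Listbox with a vertical Scrollbar (item strings and grid positions are placeholders; `root` is assumed to exist):
```py
items = StringVar(value=[f'item {i}' for i in range(50)])
lstbx = Listbox(root, height=10, listvariable=items)
scroll = ttk.Scrollbar(root, orient=VERTICAL, command=lstbx.yview)
lstbx.configure(yscrollcommand=scroll.set)
lstbx.grid(column=0, row=0, sticky=(N, S))
scroll.grid(column=1, row=0, sticky=(N, S))
```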
## SIZEGRIP
Box in the bottom-right corner of the window that allows resizing.
```py
ttk.Sizegrip(parent).grid(column=999, row=999, sticky=(S, E))
```
## TEXT (Tk Classic)
Area accepting multiple line of text.
```py
txt = Text(parent, width=num:int, height=num:int, wrap=flag) # width is character num, height is row num
# FLAG: none (no wrapping), char (wrap at every character), word (wrap at word boundaries)
txt['state'] = flag # FLAG: disabled, normal
# accepts the xscrollcommand and yscrollcommand options and the yview, xview methods
txt.see(line_num.char_num) # ensure that given line is visible (line is 1-based, char is 0-based)
txt.insert(index, 'text value') # insert string at position index (index = line.char), 'end' is shortcut for end of text
txt.get(start, end) # return the text between the two indices
txt.delete(start, end) # delete range of text
```
## PROGRESSBAR
Feedback about the progress of a lengthy operation.
```py
progbar = ttk.Progressbar(parent, orient=direction, length=num:int, value=num, maximum=num:float, mode=mode)
# DIRECTION: VERTICAL, HORIZONTAL
# MODE: determinate (relative progress of completion), indeterminate (no estimate of completion)
# LENGTH: dimension in pixel
# VALUE: sets the progress, updates the bar as it changes
# MAXIMUM: total number of steps (DEFAULT: 100)
```
### DETERMINATE PROGRESS
```py
progbar.step(amount) # increment value of given amount (DEFAULT: 1.0)
```
### INDETERMINATE PROGRESS
```py
progbar.start() # starts progressbar
progbar.stop() # stops progressbar
```
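A determinate-progress sketch driven by the event loop via `after()` (delay and step count are arbitrary; `root` is assumed to exist):
```py
progbar = ttk.Progressbar(root, orient=HORIZONTAL, length=200, mode='determinate', maximum=10)
progbar.grid()

def advance(step=0):
    if step < 10:
        progbar.step(1)                     # advance one of the 10 steps
        root.after(500, advance, step + 1)  # schedule the next step without blocking the GUI

advance()
```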
## SCALE
Provide a numeric value through direct manipulation.
```py
scale = ttk.Scale(parent, orient=DIR, length=num:int, from_=num:float, to=num:float, command=cmd)
# COMMAND: calls cmd at every scale change, appends current value to func call
scale['value'] # set or read current value
scale.set(value) # set current value
scale.get() # get current value
```
## SPINBOX
Choose numbers. The spinbox chooses an item from a list; the arrows cycle through the list items.
```py
spinval = StringVar()
spin = Spinbox(parent, from_=num, to=num, textvariable=spinval, increment=num, values=lst, wrap=boolean)
# INCREMENT: specifies the increment/decrement applied by the arrow buttons
# VALUES: list of items associated with the spinbox
# WRAP: boolean value determining if the value should wrap around when it goes beyond the start/end value
```
## GRID GEOMETRY MANAGER
Widgets are assigned a "column" number and a "row" number, which indicates their relative position to each other.
Column and row numbers must be integers, with the first column and row starting at 0.
Gaps in column and row numbers are handy to add more widgets in the middle of the user interface at a later time.
The width of each column (or height of each row) depends on the width or height of the widgets contained within the column or row.
Widgets can take up more than a single cell in the grid ("columnspan" and "rowspan" options).
### LAYOUT WITHIN CELL
By default, if a cell is larger than the widget contained in it, the widget will be centered within it,
both horizontally and vertically, with the master's background showing in the empty space around it.
The "sticky" option can be used to change this default behavior.
The value of the "sticky" option is a string of 0 or more of the compass directions "nsew", specifying which edges of the cell the widget should be "stuck" to.
Specifying two opposite edges means that the widget will be stretched so it is stuck to both.
Specifying "nsew" it will stick to every side.
### HANDLING RESIZE
Every column and row has a "weight" grid option associated with it, which tells it how much it should grow if there is extra room in the master to fill.
By default, the weight of each column or row is 0, meaning don't expand to fill space.
This is done using the "columnconfigure" and "rowconfigure" methods of grid.
Both "columnconfigure" and "rowconfigure" also take a "minsize" grid option, which specifies a minimum size.
### PADDING
Normally, each column or row will be directly adjacent to the next, so that widgets will be right next to each other.
"padx" puts a bit of extra space to the left and right of the widget, while "pady" adds extra space top and bottom.
A single value for the option puts the same padding on both left and right (or top and bottom),
while a two-value list lets you put different amounts on left and right (or top and bottom).
To add padding around an entire row or column, the "columnconfigure" and "rowconfigure" methods accept a "pad" option.
```py
widget.grid(column=num, row=num, columnspan=num, rowspan=num, sticky=(), padx=num, pady=num) # sticky: N, S, E, W
widget.columnconfigure(index, pad=num, weight=num)
widget.rowconfigure(index, pad=num, weight=num)
widget.grid_slaves() # returns map, list of widgets inside a master
widget.grid_info() # returns list of grid options
widget.grid_configure() # change one or more option
widget.grid_forget(slaves) # takes a list of slaves, removes slaves from grid (forgets slaves options)
widget.grid_remove(slaves) # takes a list of slaves, removes slaves from grid (remembers slaves options)
```
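A resize-handling sketch: the root window and a content frame both give weight to their cells so the widgets stretch with the window (weights and padding values are arbitrary; `root` is assumed to exist):
```py
root.columnconfigure(0, weight=1)   # column 0 absorbs extra width
root.rowconfigure(0, weight=1)
content = ttk.Frame(root, padding=(5, 5))
content.grid(column=0, row=0, sticky=(N, S, E, W))
content.columnconfigure(0, weight=1, minsize=100)
content.rowconfigure(0, weight=1)
```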
## WINDOWS AND DIALOGS
### CREATING TOPLEVEL WINDOW
```py
window = Toplevel(parent) # child of the root window, no need to grid it
window.destroy()
# can destroy every widget
# destroying a parent also destroys its children
```
### CHANGING BEHAVIOR AND STYLE
```py
# WINDOW TITLE
window.title() # returns title of the window
window.title('new title') # sets title
# SIZE AND LOCATION
window.geometry(geo_specs)
'''full geometry specification: widthxheight+-x+-y (actual coordinates on the screen)
+x --> x pixels from left edge
-x --> x pixels from right edge
+y --> y pixels from top edge
-y --> y pixels from bottom edge'''
# STACKING ORDER
# current stacking order (list from lowest to highest) --- NOT CLEANLY EXPOSED THROUGH TK API
root.tk.eval('wm stackorder ' + str(window))
# check if window is above or below
if root.tk.eval('wm stackorder ' + str(window) + ' isabove ' + str(otherwindow)) == '1': ...
if root.tk.eval('wm stackorder ' + str(window) + ' isbelow ' + str(otherwindow)) == '1': ...
# raise or lower windows
window.lift() # absolute position
window.lift(otherwin) # relative to other window
window.lower() # absolute position
window.lower(otherwin) # relative to other window
# RESIZE BEHAVIOR
window.resizable(boolean, boolean) # sets if resizable in width (1st param) and height (2nd param)
window.minsize(num, num) # sets min width and height
window.maxsize(num, num) # sets max width and height
# ICONIFYING AND WITHDRAWING
# WINDOW STATE: normal, iconic (iconified window), withdrawn, icon, zoomed
window.state() # returns current window state
window.state('state') # sets window state
window.iconify() # iconifies window
window.deiconify() # deiconifies window
```
### STANDARD DIALOGS
```py
# SELECTING FILES AND DIRECTORIES
# on Windows and Mac invokes underlying OS dialogs directly
from tkinter import filedialog
filename = filedialog.askopenfilename()
filename = filedialog.asksaveasfilename()
dirname = filedialog.askdirectory()
'''All of these commands produce modal dialogs, which means that the commands (and hence the program) will not continue running until the user submits the dialog.
The commands return the full pathname of the file or directory the user has chosen, or return an empty string if the user cancels out of the dialog.'''
# SELECTING COLORS
from tkinter import colorchooser
# returns HEX color code, INITIALCOLOR: existing color, presumably to replace
colorchooser.askcolor(initialcolor=hex_color_code)
# ALERT AND CONFIRMATION DIALOGS
from tkinter import messagebox
messagebox.showinfo(title="title", message='text') # simple box with message and OK button
messagebox.showerror(title="title", message='text')
messagebox.showwarning(title="title", message='text')
messagebox.askyesno(title="title", message='text', detail='secondary text', icon='icon')
messagebox.askokcancel(message='text', icon='icon', title='title', detail='secondary text', default=button) # DEFAULT: default button, ok or cancel
messagebox.askquestion(title="title", message='text', detail='secondary text', icon='icon')
messagebox.askretrycancel(title="title", message='text', detail='secondary text', icon='icon')
messagebox.askyesnocancel(title="title", message='text', detail='secondary text', icon='icon')
# ICON: info (default), error, question, warning
```
POSSIBLE ALERT/CONFIRMATION RETURN VALUES:
- `ok (default)` -- "ok"
- `okcancel` -- "ok" or "cancel"
- `yesno` -- "yes" or "no"
- `yesnocancel` -- "yes", "no" or "cancel"
- `retrycancel` -- "retry" or "cancel"
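A small sketch chaining two of the dialogs above (titles and messages are placeholders):
```py
from tkinter import filedialog, messagebox

path = filedialog.askopenfilename(title='Pick a file')
if path:                          # an empty string means the user cancelled
    if messagebox.askyesno(title='Confirm', message=f'Open {path}?'):
        messagebox.showinfo(title='Chosen', message=path)
```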
## SEPARATOR
```py
# horizontal or vertical line between groups of widgets
separator = ttk.Separator(parent, orient=direction)
# DIRECTION: horizontal, vertical
'''LABEL FRAME'''
# labelled frame, used to group widgets
lf = ttk.LabelFrame(parent, text='label')
'''PANED WINDOWS'''
# stack multiple resizable widgets
# panes are adjustable (drag the sash between panes)
pw = ttk.PanedWindow(parent, orient=direction)
# DIRECTION: horizontal, vertical
lf1 = ttk.LabelFrame(...)
lf2 = ttk.LabelFrame(...)
pw.add(lf1) # add widget to paned window
pw.add(lf2)
pw.insert(position, subwindow) # insert widget at given position in list of panes (0, ..., n-1)
pw.forget(subwindow) # remove widget from pane
pw.forget(position) # remove widget from pane
```
### NOTEBOOK
Allows switching between multiple pages
```py
nb = ttk.Notebook(parent)
f1 = ttk.Frame(parent, ...) # child of notebook
f2 = ttk.Frame(parent, ...)
nb.add(subwindow, text='page title', state=flag)
# TEXT: name of page, STATE: normal, disabled (not selectable), hidden
nb.insert(position, subwindow, option=value)
nb.forget(subwindow)
nb.forget(position)
nb.tabs() # retrieve all tabs
nb.select() # return current tab
nb.select(position/subwindow) # change current tab
nb.tab(tabid, option) # retrieve tab (TABID: position or subwindow) option
nb.tab(tabid, option=value) # change tab option
```
## FONTS, COLORS, IMAGES
### NAMED FONTS
Creation of personalized fonts
```py
from tkinter import font
font_name = font.Font(family='font_family', size=num, weight='bold/normal', slant='roman/italic', underline=boolean, overstrike=boolean)
# FAMILY: Courier, Times, Helvetica (support guaranteed)
font.families() # all available font families
```
### COLORS
Specified w/ HEX RGB codes.
### IMAGES
```py
imgobj = PhotoImage(file='filename')
label['image'] = imgobj
```
### IMAGES W/ Pillow
```py
from PIL import ImageTk, Image
myimg = ImageTk.PhotoImage(Image.open('filename'))
```

85
docs/python/logging.md Normal file
View file

@ -0,0 +1,85 @@
# Logging Module
## Configuration
```python
# basic configuration for the logging system
logging.basicConfig(filename="relpath", level=logging.LOG_LEVEL, format=f"message format", **kwargs)
# DATEFMT: Use the specified date/time format, as accepted by time.strftime().
# create a logger with a name (useful for having multiple loggers)
logger = logging.getLogger(name="logger name")
logger.level # LOG_LEVEL for this logger
# disable all logging calls of severity level and below
# alternative to basicConfig(level=logging.LOG_LEVEL)
logging.disable(level=LOG_LEVEL)
```
### Format (`basicConfig(format="")`)
| Attribute name | Format | Description |
|----------------|-------------------|-------------------------------------------------------------------------------------------|
| asctime | `%(asctime)s` | Human-readable time when the LogRecord was created. Modified by `basicConfig(datefmt="")` |
| created | `%(created)f` | Time when the LogRecord was created (as returned by `time.time()`). |
| filename | `%(filename)s` | Filename portion of pathname. |
| funcName | `%(funcName)s` | Name of function containing the logging call. |
| levelname | `%(levelname)s` | Text logging level for the message. |
| levelno | `%(levelno)s` | Numeric logging level for the message. |
| lineno | `%(lineno)d` | Source line number where the logging call was issued (if available). |
| message | `%(message)s` | The logged message, computed as `msg % args`. |
| module | `%(module)s` | Module (name portion of filename). |
| msecs | `%(msecs)d` | Millisecond portion of the time when the LogRecord was created. |
| name | `%(name)s` | Name of the logger used to log the call. |
| pathname | `%(pathname)s` | Full pathname of the source file where the logging call was issued (if available). |
| process | `%(process)d` | Process ID (if available). |
| processName | `%(processName)s` | Process name (if available). |
| thread | `%(thread)d` | Thread ID (if available). |
| threadName | `%(threadName)s` | Thread name (if available). |
### Datefmt (`basicConfig(datefmt="")`)
| Directive | Meaning |
|-----------|------------------------------------------------------------------------------------------------------------------------------|
| `%a` | Locale's abbreviated weekday name. |
| `%A` | Locale's full weekday name. |
| `%b` | Locale's abbreviated month name. |
| `%B` | Locale's full month name. |
| `%c` | Locale's appropriate date and time representation. |
| `%d` | Day of the month as a decimal number [01,31]. |
| `%H` | Hour (24-hour clock) as a decimal number [00,23]. |
| `%I` | Hour (12-hour clock) as a decimal number [01,12]. |
| `%j` | Day of the year as a decimal number [001,366]. |
| `%m` | Month as a decimal number [01,12]. |
| `%M` | Minute as a decimal number [00,59]. |
| `%p` | Locale's equivalent of either AM or PM. |
| `%S` | Second as a decimal number [00,61]. |
| `%U` | Week number of the year (Sunday as the first day of the week) as a decimal number [00,53]. |
| `%w` | Weekday as a decimal number [0(Sunday),6]. |
| `%W` | Week number of the year (Monday as the first day of the week) as a decimal number [00,53]. |
| `%x` | Locale's appropriate date representation. |
| `%X` | Locale's appropriate time representation. |
| `%y` | Year without century as a decimal number [00,99]. |
| `%Y` | Year with century as a decimal number. |
| `%z` | Time zone offset indicating a positive or negative time difference from UTC/GMT of the form +HHMM or -HHMM [-23:59, +23:59]. |
| `%Z` | Time zone name (no characters if no time zone exists). |
| `%%` | A literal '%' character. |
## Logs
Log Levels (Low To High):
- notset (default): `0`
- debug: `10`
- info: `20`
- warning: `30`
- error: `40`
- critical: `50`
```python
logging.debug(msg) # Logs a message with level DEBUG on the root logger
logging.info(msg) # Logs a message with level INFO on the root logger
logging.warning(msg) # Logs a message with level WARNING on the root logger
logging.error(msg) # Logs a message with level ERROR on the root logger
logging.critical(msg) # Logs a message with level CRITICAL on the root logger
```
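A minimal configuration sketch combining the options above (file name, logger name and format are placeholders):
```python
import logging

logging.basicConfig(
    filename="app.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)
logger = logging.getLogger(name="example")
logger.warning("disk usage at %d%%", 91)  # logged because WARNING >= DEBUG
```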

1531
docs/python/python.md Normal file

File diff suppressed because it is too large Load diff

52
docs/python/shutil.md Normal file
View file

@ -0,0 +1,52 @@
# Shutil Module
High-level file operations
```python
# copy file src to file dst, return dst in the most efficient way
shutil.copyfile(src, dst)
# dst MUST be complete target name
# if dst already exists it will be overwritten
# copy file src to directory dst, return path to new file
shutil.copy(src, dst)
# Recursively copy entire dir-tree rooted at src to directory named dst
# return the destination directory
shutil.copytree(src, dst, dirs_exist_ok=False)
# DIRS_EXIST_OK: {bool} -- dictates whether to raise an exception in case dst
# or any missing parent directory already exists
# delete an entire directory tree
shutil.rmtree(path, ignore_errors=False, onerror=None)
# IGNORE_ERROR: {bool} -- if true errors (failed removals) will be ignored
# ON_ERROR: handler for removal errors (if ignore_errors=False or omitted)
# recursively move file or directory (src) to dst, return dst
shutil.move(src, dst)
# if the destination is an existing directory, then src is moved inside that directory.
# if the destination already exists but is not a directory,
# it may be overwritten depending on os.rename() semantics
# used to rename files
# change owner user and/or group of the given path
shutil.chown(path, user=None, group=None)
# user can be a system user name or a uid; the same applies to group.
# At least one argument is required
# create archive file and return its name
shutil.make_archive(base_name, format, [root_dir, base_dir])
# BASE_NAME: {string} -- name of the archive, including path, excluding extension
# FORMAT: {zip, tar, gztar, bztar, xztar} -- archive format
# ROOT_DIR: {path} -- directory that will be the root directory of the archive
# BASE_DIR: {path} -- directory where the archiving starts (relative to root_dir)
# unpack an archive
shutil.unpack_archive(filename, [extract_dir, format])
# FILENAME: full path of archive
# EXTRACT_DIR: {path} -- directory to unpack into
# FORMAT: {zip, tar, gztar, bztar, xztar} -- archive format
# return disk usage statistics as Namedtuple w/ attributes total, used, free
shutil.disk_usage(path)
```
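A short sketch of a typical backup flow (all paths are placeholders):
```python
import shutil

shutil.copytree("data", "data_backup", dirs_exist_ok=True)       # mirror the directory
archive = shutil.make_archive("backup", "zip", root_dir="data")  # -> ./backup.zip
total, used, free = shutil.disk_usage(".")
print(archive, f"{free // 2**20} MiB free")
```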

43
docs/python/smtplib.md Normal file
View file

@ -0,0 +1,43 @@
# SMTPlib Module
```python
import smtplib
# SMTP instance that encapsulates a SMTP connection
# If the optional host and port parameters are given, the SMTP connect() method is called with those parameters during initialization.
s = smtplib.SMTP(host="host_smtp_address", port="smtp_service_port", **kwargs)
s = smtplib.SMTP_SSL(host="host_smtp_address", port="smtp_service_port", **kwargs)
# An SMTP_SSL instance behaves exactly the same as instances of SMTP.
# SMTP_SSL should be used for situations where SSL is required from the beginning of the connection
# and using starttls() is not appropriate.
# If host is not specified, the local host is used.
# If port is zero, the standard SMTP-over-SSL port (465) is used.
SMTP.connect(host='localhost', port=0)
#Connect to a host on a given port. The defaults are to connect to the local host at the standard SMTP port (25). If the hostname ends with a colon (':') followed by a number, that suffix will be stripped off and the number interpreted as the port number to use. This method is automatically invoked by the constructor if a host is specified during instantiation. Returns a 2-tuple of the response code and message sent by the server in its connection response.
SMTP.verify(address) # Check the validity of an address on this server using SMTP VRFY
SMTP.login(user="full_user_mail", password="user_password") # Log-in on an SMTP server that requires authentication
smtplib.SMTPHeloError # The server didn't reply properly to the HELO greeting
smtplib.SMTPAuthenticationError # The server didn't accept the username/password combination
smtplib.SMTPNotSupportedError # The AUTH command is not supported by the server
smtplib.SMTPException # No suitable authentication method was found
SMTP.starttls(keyfile=None, certfile=None, **kwargs) # Put the SMTP connection in TLS (Transport Layer Security) mode. All SMTP commands that follow will be encrypted
# from_addr & to_addrs are used to construct the message envelope used by the transport agents. sendmail does not modify the message headers in any way.
# msg may be a string containing characters in the ASCII range, or a byte string. A string is encoded to bytes using the ascii codec, and lone \r and \n characters are converted to \r\n characters. A byte string is not modified.
SMTP.sendmail(from_addr, to_addrs, msg, **kwargs)
# from_addr: {string} -- RFC 822 from-address string
# to_addrs: {string, list of strings} -- list of RFC 822 to-address strings
# msg: {string} -- message string
# This is a convenience method for calling sendmail() with the message represented by an email.message.Message object.
SMTP.send_message(msg, from_addr=None, to_addrs=None, **kwargs)
# from_addr: {string} -- RFC 822 from-address string
# to_addrs: {string, list of strings} -- list of RFC 822 to-address strings
# msg: {email.message.Message object} -- the message to send
SMTP.quit() # Terminate the SMTP session and close the connection. Return the result of the SMTP QUIT command
```
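A send_message() sketch over SSL (server, port, addresses and credentials are placeholders):
```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Test"
msg["From"] = "sender@example.com"
msg["To"] = "receiver@example.com"
msg.set_content("Hello from smtplib")

with smtplib.SMTP_SSL(host="smtp.example.com", port=465) as server:
    server.login(user="sender@example.com", password="app-password")
    server.send_message(msg)
```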

Some files were not shown because too many files have changed in this diff