Compare commits
104 Commits
.gitignore (vendored): 2 changes

@@ -1 +1,3 @@
*.swp
.obsidian/
.vscode/
.vscode/database.json (vendored, deleted): 1 change

@@ -1 +0,0 @@
{}
AutoSSH.md (new file): 1 change

@@ -0,0 +1 @@
autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -R 22721:localhost:22 pt@hptrow.me
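To keep a tunnel like this alive across reboots, the command can be wrapped in a systemd unit. This is only a sketch: the unit name, `User`, binary path, and the added `-N` flag (no remote command) are assumptions, not part of the original note.

```ini
# /etc/systemd/system/autossh-tunnel.service (hypothetical path and name)
[Unit]
Description=Persistent reverse SSH tunnel via autossh
After=network-online.target

[Service]
User=pt
# Let autossh retry immediately even if the first connection fails
Environment="AUTOSSH_GATETIME=0"
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R 22721:localhost:22 pt@hptrow.me
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable with `sudo systemctl enable --now autossh-tunnel` (assuming the unit file above).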
@@ -1,2 +1,2 @@
FCC - fixed charge coverage ratio - compares cash flow to the cash required to fulfill debt payments
SLR - senior debt leverage ratio - senior debt to EBITDA
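The two covenant ratios above are simple quotients; a minimal sketch with hypothetical figures (none of these numbers come from the notes):

```python
# Hypothetical inputs; real figures would come from the trial balance.
ebitda = 12_000_000          # earnings before interest, taxes, depreciation, amortization
fixed_charges = 2_500_000    # lease payments + scheduled principal + interest
cash_flow = 9_000_000        # cash available to service fixed charges
senior_debt = 30_000_000

fcc = cash_flow / fixed_charges   # fixed charge coverage ratio
slr = senior_debt / ebitda        # senior debt leverage ratio

print(f"FCC: {fcc:.2f}")  # FCC: 3.60
print(f"SLR: {slr:.2f}")  # SLR: 2.50
```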
@@ -1,64 +1,64 @@
Deriving The Trial Balance
==========================

* Entries and reconciliations
  * Payroll
    * Data: Retain all payroll data in a `database` to build entries
    * Mappings: Configure `Paycom GL Interface`
    * `401k`: book disbursements and reconcile to Paycom withholdings
    * `FSA`: book FSA funding entries and reconcile to Paycom withholdings
  * Debt & Cash
    * Data: retain all PNC information available in a `database` to build entries (cash, revolver, debt)
    * Book all PNC `loan activity`
    * Book interest on `notes`
    * Reconcile all balance sheet `debt`
    * Book `interest rate swap` valuation
  * Bank Rec:
    * book entry to break out `freight checks`
    * book entries to clean up missed `fees`
    * book entries to deal with `miscellaneous discrepancies`
    * book entry to classify `outstanding checks` as liabilities
  * Intercompany Activity
    * Support `transfer pricing` entry
    * Book `consolidating` entries
    * Book `currency translation adjustment` for consolidated USD trial balance
    * Reconcile `CTA` & `Equity`
    * Reclassify any `intercompany liabilities` out of the trade accounts
    * Validate that `intercompany balances` are eliminated from consolidated trial balance
  * Other Balance Sheet Items
    * Book and reconcile amortization of `intangibles`
    * Book and reconcile amortization of `deferred financing costs`
    * Book RSM determined `tax provision` and current year `tax accrual`
  * CMS Module Corrections
    * book entry to fix `virtual sales`
    * book entry to fix `credits`
    * furnish a report to the plants breaking out the `book to perpetual` issues
      * sales timing and valuation issues
      * cost roll impact
      * production ledger issues
      * voucher issues
      * issues with transfers
      * issues with returns
* Configuration
  * Module accounts (sales, inventory, production, manual adjustments, AP, AR, intercompany)
  * Chart of Accounts
    * EBITDA flags
    * consolidation flags
    * consolidation hierarchy
    * financial statement lines
    * currency indicator

Interpreting The Trial Balance
==============================

* Rebuild trial balance into alternate financial statement formats
* Rebuild subledger that matches original ledger
* Rebuild production subledger that does not match original
* Sales Matrix
* A large number of reports that I can't even list but are maintained [here](https://bitbucket.org/hccompanies/hc_ubm/src/master/)

Forecasting
===========

* Product Structure Explosion Logic
* global scale cost change estimates
* production plans
* inventory forecasts
* Sales forecast tool
Binary file not shown.
@@ -1,21 +1,21 @@
Only applies to items that exist in both sets of data

**Change in Price**

( P₂ - P₁ ) Q₂

**Change in Quantity**

( Q₂ - Q₁ ) P₁

_To further break out change in quantity_

Change in Quantity - _Volume Related_

Q₂ ( Q₁ / Σ ( Q₁ ) ) - Q₁

Change in Quantity - _Mix Related_

Q₂ - Q₂ ( Q₁ / Σ ( Q₁ ) )
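Reading the leading Q₂ in the volume/mix terms as the *total* new-period quantity allocated by old mix weights (an assumption; the note's notation is ambiguous on this point), the decomposition can be sketched as follows. The item data is hypothetical.

```python
# Price/volume/mix decomposition for items present in both periods.
# p1/q1 are prior-period price and quantity, p2/q2 are current-period.
items = {
    "A": {"p1": 10.0, "q1": 100.0, "p2": 12.0, "q2": 90.0},
    "B": {"p1": 5.0,  "q1": 50.0,  "p2": 5.0,  "q2": 70.0},
}

tot_q1 = sum(i["q1"] for i in items.values())
tot_q2 = sum(i["q2"] for i in items.values())

# Change in price: ( P2 - P1 ) Q2
price = sum((i["p2"] - i["p1"]) * i["q2"] for i in items.values())
# Change in quantity: ( Q2 - Q1 ) P1
quantity = sum((i["q2"] - i["q1"]) * i["p1"] for i in items.values())

# Volume holds the old mix; mix captures the shift away from it.
volume = sum((tot_q2 * i["q1"] / tot_q1 - i["q1"]) * i["p1"] for i in items.values())
mix = sum((i["q2"] - tot_q2 * i["q1"] / tot_q1) * i["p1"] for i in items.values())

total = sum(i["p2"] * i["q2"] - i["p1"] * i["q1"] for i in items.values())
assert abs(price + quantity - total) < 1e-9   # price + quantity ties to total change
assert abs(volume + mix - quantity) < 1e-9    # volume + mix ties to quantity change
```

The two assertions show the useful property of this breakout: the pieces reconcile back to the total revenue change.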
apache.md (new file): 2 changes

@@ -0,0 +1,2 @@
to enable a module like php7.2 you would
`sudo a2enmod php7.2`
auth.md (new file): 3 changes

@@ -0,0 +1,3 @@
* https://github.com/ory/kratos
* https://github.com/keycloak/keycloak
* https://github.com/supertokens/supertokens-core
badges.md (new file): 3 changes

@@ -0,0 +1,3 @@
shields.io

[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
bash.md: 6 changes

@@ -1,3 +1,5 @@
https://wiki.bash-hackers.org/


Update PostgreSQL
------------------------------------------------------------------------------------------------------------
@@ -64,3 +66,7 @@ add client
* host setup /etc/ssh/sshd_config to allow passwords and restart
* client uses `ssh-copy-id host_address -p port_num` to move the key to the host
* client uses `ssh` to login

rename files in the current directory
------------------------------------------------------------
rename 's/find_this_in_the_file_name/replace_with_this/g' *
certbot.md (new file): 1 change

@@ -0,0 +1 @@
sudo certbot --nginx -d mastodon.hptrow.me
curl.md (new file): 33 changes

@@ -0,0 +1,33 @@
to curl a file from onedrive or sharepoint
------------------------------------------------------------------------------
https://askubuntu.com/questions/1205418/wget-or-curl-gives-403forbidden-while-downloading-file-from-onedrive-for-busine

need to first create access to the file via a link that anyone can use that has it

Google Chrome as well as Mozilla Firefox both provide an option to copy a download link specifically for cURL. This option will generate cURL with all required things such as the user agent for downloading things from the site. To get that,

1. Open the URL in either of the browsers.
2. Open Developer options using Ctrl+Shift+I.
3. Go to the Network tab.
4. Now click on download. Saving the file isn't required. We only need the network activity while the browser requests the file from the server.
5. A new entry will appear which would look like "download.aspx?...".
6. Right click on that and Copy → Copy as cURL.
7. Paste the copied content directly in the terminal and append --output file.extension to save the content in file.extension since the terminal isn't capable of showing binary data.

An example of the command:
```
curl 'https://company-my.sharepoint.com/personal/path/_layouts/15/download.aspx?SourceUrl=
%2Fpersonal%2Fsome%5Fpath%5Fin%2Ffile' -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux
x86_64; rv:73.0) Gecko/20100101 Firefox/73.0' -H 'Accept: text/html,application/xhtml+xml,
application/xml;q=0.9,image/webp,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5'
--compressed -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Referer: https://company-my
.sharepoint.com/personal/path/_layouts/15/onedrive.aspx?id=%2Fpersonal%2Fagain%5Fa%5Fpath%2F
file&parent=%2Fpersonal%2Fpath%5Fagain%5Fin%2&originalPath=somegibberishpath' -H
'Cookie: MicrosoftApplicationsTelemetryDeviceId=someid;
MicrosoftApplicationsTelemetryFirstLaunchTime=somevalue;
rtFa=rootFederationAuthenticationCookie; FedAuth=againACookie; CCSInfo=gibberishText;
FeatureOverrides_enableFeatures=; FeatureOverrides_disableFeatures=' -H
'Upgrade-Insecure-Requests: 1' -H 'If-None-Match: "{some value},2"' -H 'TE: Trailers'
--output file.extension
```
data_viz.md (new file): 6 changes

@@ -0,0 +1,6 @@
https://www.toptal.com/designers/data-visualization/data-visualization-tools

need to look at Grafana as option for quote review

visualizing graphs [sigmajs](https://www.sigmajs.org/)
db2.md: 4 changes

@@ -1,3 +1,3 @@
alter existing column type

`ALTER TABLE RLARP.OSMFS ALTER COLUMN "ITER" SET DATA TYPE VARCHAR(500)`
deno.md (new file): 45 changes

@@ -0,0 +1,45 @@
making a basic API in deno
https://blog.logrocket.com/creating-your-first-rest-api-with-deno-and-postgres/

install deno:
curl -fsSL https://deno.land/x/install/install.sh | sh

a basic api:
```
import { Application, Router } from 'https://deno.land/x/oak/mod.ts';
import { Client } from "https://deno.land/x/postgres@v0.17.0/mod.ts";

const app = new Application();
const router = new Router();

// Configure database connection
const client = new Client({
  hostname: 'usmidsap02',
  port: 5432,
  user: 'api',
  password: '',
  database: 'ubm',
});

await client.connect();

// Health check route
router.get('/', async (ctx) => {
  ctx.response.body = "live";
});

// Define a route to retrieve values from the database
router.get('/api/data', async (ctx) => {
  const result = await client.queryObject("SELECT * FROM rlarp.pl LIMIT 10");
  console.log(result.rows);
  ctx.response.body = result.rows;
});

app.use(router.routes());
app.use(router.allowedMethods());

// Start the server
console.log('Server is running on http://localhost:8085');
await app.listen({ port: 8085 });
```
docker.md (new file): 31 changes

@@ -0,0 +1,31 @@
used to install docker:
https://docs.docker.com/engine/install/ubuntu/

~~~
Install using the repository
Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.

Set up the repository
Update the apt package index and install packages to allow apt to use a repository over HTTPS:

sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add Docker's official GPG key:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

Use the following command to set up the repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine
Update the apt package index, and install the latest version of Docker Engine, containerd, and Docker Compose, or go to the next step to install a specific version:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
~~~
@@ -1,11 +1,11 @@
dotnet new console -n "name of directory or project"

dotnet build

create exe targeting a runtime: creates an executable if one does not already exist and builds the dll in bin/Release/win10-x64
--------------------------------------------
dotnet publish -c Release -r win10-x64

dotnet publish -c Release -f netcoreapp2.1

`dotnet restore` -> update/sync packages
@@ -1,3 +0,0 @@
iredmail
mailinabox
mailcow
@@ -1,4 +1,4 @@
https://github.com/forbesmyester/esqlate

builds little forms out of sql
git.md: 66 changes

@@ -1,27 +1,39 @@
Branches
============================================

### Adding Branches

* local: `git checkout -b <branch>`
* remote: `git push --set-upstream <remote> <branch>`
* track remote: `git checkout --track <origin>/<branch>`

### Deleting Branches

* local: `git branch -d <name>`
* remote: `git push -d <remote> <name>`
* realize remote deletes: `git remote prune <remote>`

### Non-Standard Activities

* merge only a single file into another branch `git checkout <branch> -- <file>`
* delete from repo and file system `git rm <file>`
* set current branch to track remote `git branch -u <origin>/<branch>`

Config
=============================================

* set line ending behaviour `git config --global core.autocrlf true`
  this should force git to check out code using OS default line endings
* store credentials `git config credential.helper store` or `git config credential.helper cache`

.gitignore
================================================

[Git - gitignore Documentation (git-scm.com)](https://git-scm.com/docs/gitignore)

* `.obsidian/` ignores the .obsidian directory

to untrack files `git rm --cached .\.vscode\`
grant.pg.sql (new file): 10 changes

@@ -0,0 +1,10 @@
---------access to schema-----------------------------------------------------------------------------------------------------------------------------
GRANT USAGE ON SCHEMA rlarp,lgdat,pricequote,lgpgm,import,"CMS.CUSLG" TO api;
---------access to objects in schema------------------------------------------------------------------------------------------------------------------
GRANT SELECT /*, UPDATE, INSERT, DELETE*/ ON ALL TABLES IN SCHEMA rlarp,lgdat,pricequote,lgpgm,import,"CMS.CUSLG" TO api;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA rlarp,lgdat,pricequote,lgpgm,import,"CMS.CUSLG" TO api;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA rlarp,lgdat,pricequote,lgpgm,import,"CMS.CUSLG" TO api;
---------access to objects in schema going forward----------------------------------------------------------------------------------------------------
ALTER DEFAULT PRIVILEGES IN SCHEMA rlarp,lgdat,pricequote,lgpgm,import,"CMS.CUSLG" GRANT SELECT/*, UPDATE, INSERT, DELETE*/ ON TABLES TO api;
ALTER DEFAULT PRIVILEGES IN SCHEMA rlarp,lgdat,pricequote,lgpgm,import,"CMS.CUSLG" GRANT USAGE ON SEQUENCES TO api;
ALTER DEFAULT PRIVILEGES IN SCHEMA rlarp,lgdat,pricequote,lgpgm,import,"CMS.CUSLG" GRANT EXECUTE ON FUNCTIONS TO api;
hard_disks.md (new file): 46 changes

@@ -0,0 +1,46 @@

## RAID edits
* Enter the system setup menu at boot time, then you also have to press additional keys to get to the RAID screen
* create a new virtual disk and assign the new disks, then initialize the VD

## Mounting the new VD
* find the disk with `fdisk -l | grep '^Disk'`
* create a partition table
  * `fdisk /dev/sdb`
  * `p` to list partition table if any
  * `n` to create a new partition
  * `w` to write the new partition
* format the new partition with `mkfs.ext4 /dev/sdb1`
* create an access folder, maybe at /mnt/backup
* run the mount `mount /dev/sdb1 /mnt/backup`
* edit fstab by adding
  * using device name
    ```
    /dev/sdb1 /mnt/backup ext4 defaults 1 2
    ```
  * using UUID (do `sudo blkid` to get the UUID)
    ```
    UUID="86e81045-a0dc-4881-8ddb-5ef25834ea5a" /datadrive xfs defaults,nofail 1 2
    ```
* somehow in the process a new systemd mount unit is generated from fstab and runs at boot

## lvm (logical volume manager)
`vgs` to list volume groups
`vgdisplay` to show all info for a volume group
`lvs` to show logical volumes

Given a logical volume `group` you can extend the size of a `logical volume` inside that group.
```
sudo lvextend -L+100G /dev/ubuntu-vg/ubuntu-lv
```

now if you do `lsblk` the size of the disk has grown but the filesystem still needs to be extended as per `df`

use `resize2fs` to expand the file system to take up all the disk space
```
ptrowbridge@usmidsap01:/var/log/postgresql$ sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 13, new_desc_blocks = 25
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 52428800 (4k) blocks long.
```
@@ -1,22 +0,0 @@
PDMN24-1 Maintain Product Structure
IVMN02-4 Maint Part/Plant
IVMN14-7 Costing Sheet
PDMN31-1 WO Production Reporting
PDMN06 Maintain WO

`lgdat.mrprct` is output of nightly MRP job, show actual with `PD`, `PO` flags and suggested in `MRP` flag
`lgdat.mrpdmd` is output of nightly MRP (should be very close to `inva`)

|CLTIER|CLDESC              |
|------|--------------------|
|B     |BASE                |
|C     |CUSTOM              |
|E     |ECOGROW             |
|O     |ORGANIC             |
|M     |PREMIUM CORE (C)    |
|L     |PRINCIPAL CORE (B)  |
|P     |PROGRAM             |
|T     |TRADITIONAL CORE (A)|
|W     |WAXTOUGH            |
html.md (new file): 1 change

@@ -0,0 +1 @@
https://bashooka.com/html/free-drag-drop-html-website-builders/
@@ -1,18 +0,0 @@
dbeaver
vs code
bash
vundle

npgsql
pspg
postgresql apt repo
pgadmin
windows postgres

nodejs

power bi
ms data gateway
excel add-in
java.md (new file): 14 changes

@@ -0,0 +1,14 @@
install gradle:
https://linuxhint.com/installing_gradle_ubuntu/

wget -c https://services.gradle.org/distributions/gradle-7.5.1-bin.zip -P /tmp
sudo unzip -d /opt/gradle /tmp/gradle-7.5.1-bin.zip

create sample application with gradle:
https://docs.gradle.org/current/samples/sample_building_java_applications.html

install jdk18 if not in apt
https://askubuntu.com/questions/1421306/how-to-install-openjdk-18
* wget the tar
* unpack to /opt
* set the bin folder in the JAVA_HOME environment variable
jupyter.md (new file): 16 changes

@@ -0,0 +1,16 @@

Install jupyter lab via pip

install R kernel for jupyter to use
* `sudo R`
* `install.packages('IRkernel')` (most likely have to run R under sudo)
* `IRkernel::installspec()` (don't use sudo R)

run on network:
`jupyter notebook --ip 10.0.10.15 --port 8888`

basic packages:
* ggplot2, plyr, ggExtra, scales

issues with connecting to kernel, attempting update of all packages `update.packages(ask = FALSE)`
@@ -1,14 +0,0 @@

install R kernel for jupyter to use
* `sudo R`
* `install.packages('IRkernel')`
* `IRkernel::installspec()`

run on network:
`jupyter notebook --ip 10.0.10.15 --port 8888`

basic packages:
* ggplot2, plyr, ggExtra, scales
mastodon.md: 63 changes

@@ -1,61 +1,4 @@
issue with mastodon-streaming service.
Mastodon is a Ruby app.
you need a ruby environment manager just like node; need to be aware of this as ruby environment upgrades in combination with mastodon upgrades have not worked well.

```
Jan 19 23:05:26 r710 node[17762]: /home/mastodon/live/node_modules/@clusterws/cws/dist/index.js:34
Jan 19 23:05:26 r710 node[17762]: throw e.message = e.message + " check './node_modules/@clusterws/cws/build_log.txt' for post install build logs",
Jan 19 23:05:26 r710 node[17762]: ^
Jan 19 23:05:26 r710 node[17762]: Error: The module '/home/mastodon/live/node_modules/@clusterws/cws/dist/cws_linux_79.node'
Jan 19 23:05:26 r710 node[17762]: was compiled against a different Node.js version using
Jan 19 23:05:26 r710 node[17762]: NODE_MODULE_VERSION 72. This version of Node.js requires
Jan 19 23:05:26 r710 node[17762]: NODE_MODULE_VERSION 79. Please try re-compiling or re-installing
Jan 19 23:05:26 r710 node[17762]: the module (for instance, using `npm rebuild` or `npm install`). check './node_modules/@clusterws/cws/build_log.txt' for post install build logs
Jan 19 23:05:26 r710 node[17762]: at Object.Module._extensions..node (internal/modules/cjs/loader.js:1194:18)
Jan 19 23:05:26 r710 node[17762]: at Module.load (internal/modules/cjs/loader.js:993:32)
Jan 19 23:05:26 r710 node[17762]: at Function.Module._load (internal/modules/cjs/loader.js:892:14)
Jan 19 23:05:26 r710 node[17762]: at Module.require (internal/modules/cjs/loader.js:1033:19)
Jan 19 23:05:26 r710 node[17762]: at require (internal/modules/cjs/helpers.js:72:18)
Jan 19 23:05:26 r710 node[17762]: at /home/mastodon/live/node_modules/@clusterws/cws/dist/index.js:32:16
Jan 19 23:05:26 r710 node[17762]: at Object.<anonymous> (/home/mastodon/live/node_modules/@clusterws/cws/dist/index.js:37:3)
Jan 19 23:05:26 r710 node[17762]: at Module._compile (internal/modules/cjs/loader.js:1144:30)
Jan 19 23:05:26 r710 node[17762]: at Object.Module._extensions..js (internal/modules/cjs/loader.js:1164:10)
Jan 19 23:05:26 r710 node[17762]: at Module.load (internal/modules/cjs/loader.js:993:32)
```
per [node website](https://nodejs.org/en/download/releases/) node module version 72 corresponds to NodeJS v12.14.1 and npm version 6.13.4

live/streaming holds the top level code.
if I try to run `node index.js` and hardcode the REDIS password, I end up with a postgres authentication error due to the connection module not supporting SCRAM-SHA-256

changed that back: the real issue was that node was reverting to the latest version instead of LTS when starting the service since NVM is only per session.
the apt n module makes a permanent version change, used that instead

so trust fixes a manual run of index.js, but having that available on port 4000 doesn't help the search function.

now it is clear that the search issue doesn't have anything to do with the resolved streaming API service.
should try the service maybe?

found this in the issues:
https://github.com/tootsuite/mastodon/issues/5765

This issue notes a web domain setting, grep of mastodon directory gives:
```
./live/config/initializers/1_hosts.rb:web_host = ENV.fetch('WEB_DOMAIN') { host }
./live/lib/mastodon/premailer_webpack_strategy.rb: asset_host = ENV['CDN_HOST'] || ENV['WEB_DOMAIN'] || ENV['LOCAL_DOMAIN']
./live/.env.nanobox:# WEB_DOMAIN=mastodon.example.com
./live/.env.nanobox:# The asset host must allow cross origin request from WEB_DOMAIN or LOCAL_DOMAIN
./live/.env.nanobox:# if WEB_DOMAIN is not set. For example, the server may have the
./live/.env.nanobox:# The attachment host must allow cross origin request from WEB_DOMAIN or
./live/.env.nanobox:# LOCAL_DOMAIN if WEB_DOMAIN is not set. For example, the server may have the
./live/.env.production.sample:# WEB_DOMAIN=mastodon.example.com
./live/.env.production.sample:# The asset host must allow cross origin request from WEB_DOMAIN or LOCAL_DOMAIN
./live/.env.production.sample:# if WEB_DOMAIN is not set. For example, the server may have the
./live/.env.production.sample:# The attachment host must allow cross origin request from WEB_DOMAIN or
./live/.env.production.sample:# LOCAL_DOMAIN if WEB_DOMAIN is not set. For example, the server may have the
```

notes in .env.production.sample say not to set `WEB_DOMAIN`

posted a question on [discourse](https://discourse.joinmastodon.org/t/search-return-404/2490)
currently schema changes were not implemented and now the database is in an older state than the code.
9
matrix.md
Normal file
9
matrix.md
Normal file
@ -0,0 +1,9 @@
|
||||
get access token to work the API:
|
||||
```
|
||||
curl -X POST -H "Content-Type: application/json"d '{"type": "m.login.password", "user": "paul", "password": "password}' https://matrix.hptrow.me/_matrix/client/r0/login
|
||||
```
|
||||
|
||||
target the reset rout:
|
||||
```
|
||||
curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer token-here -d '{"new_password": "new password}' https://matrix.hptrow.me/_synapse/admin/v1/reset_password/@tucker:matrix.hptrow.me
|
||||
```
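The login response is JSON; a quick way to pull the `access_token` out in plain shell (the response values below are hypothetical, just to show the shape):

```shell
# hypothetical login response; sed extracts the access_token value
resp='{"user_id":"@paul:matrix.hptrow.me","access_token":"abc123","device_id":"XYZ"}'
token=$(printf '%s' "$resp" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$token"
```

in practice you would pipe the first curl above into this instead of a literal string.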
|
2
matrix_pw_reset.sh
Executable file
2
matrix_pw_reset.sh
Executable file
@ -0,0 +1,2 @@
|
||||
curl -X POST -H "Content-Type: application/json" -d '{"type": "m.login.password", "user": "paul", "password": "gyaswddh1983"}' https://matrix.hptrow.me/_matrix/client/r0/login
|
||||
|
121
multipass.md
Normal file
121
multipass.md
Normal file
@ -0,0 +1,121 @@
|
||||
multipass - Ubuntu VMs from Canonical
|
||||
|
||||
snap install multipass --classic (apparently this option is required and allows the snap to violate its sandbox??)
|
||||
|
||||
https://multipass.run/
|
||||
|
||||
|
||||
basic instance commands:
|
||||
* `multipass launch --name ubuntu-lts`
|
||||
* `multipass stop ubuntu-lts`
|
||||
* `multipass delete ubuntu-lts-custom`
|
||||
* `multipass purge`
|
||||
* `multipass find`
|
||||
|
||||
you have to `sudo multipass shell` to get a sudo-able shell
|
||||
|
||||
|
||||
setup
|
||||
```
|
||||
sudo apt update
|
||||
sudo apt upgrade
|
||||
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
|
||||
echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" |sudo tee /etc/apt/sources.list.d/pgdg.list
|
||||
sudo apt update
|
||||
sudo apt -y install postgresql-12 postgresql-client-12
|
||||
|
||||
sudo apt install nginx
|
||||
sudo apt install nodejs
|
||||
sudo apt install redis
|
||||
sudo apt install npm
|
||||
sudo npm install -g n
|
||||
n lts
|
||||
|
||||
sudo su root
|
||||
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
|
||||
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
|
||||
exit
|
||||
|
||||
sudo apt update
|
||||
|
||||
apt install -y \
|
||||
imagemagick ffmpeg libpq-dev libxml2-dev libxslt1-dev file git-core \
|
||||
g++ libprotobuf-dev protobuf-compiler pkg-config nodejs gcc autoconf \
|
||||
bison build-essential libssl-dev libyaml-dev libreadline6-dev \
|
||||
zlib1g-dev libncurses5-dev libffi-dev libgdbm5 libgdbm-dev \
|
||||
nginx redis-server redis-tools postgresql postgresql-contrib \
|
||||
certbot python-certbot-nginx yarn libidn11-dev libicu-dev libjemalloc-dev
|
||||
```
|
||||
|
||||
install mastodon
|
||||
```
|
||||
adduser --disabled-login mastodon
|
||||
sudo su - mastodon
|
||||
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
|
||||
cd ~/.rbenv && src/configure && make -C src
|
||||
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
|
||||
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
|
||||
exec bash
|
||||
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
|
||||
|
||||
RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 2.6.5
|
||||
rbenv global 2.6.5
|
||||
gem update --system
|
||||
gem install bundler --no-document
|
||||
exit
|
||||
|
||||
sudo -u postgres psql
|
||||
CREATE USER mastodon CREATEDB;
|
||||
\q
|
||||
|
||||
git clone https://github.com/tootsuite/mastodon.git live && cd live
|
||||
git checkout $(git tag -l | grep -v 'rc[0-9]*$' | sort -V | tail -n 1)
|
||||
|
||||
|
||||
bundle install \
|
||||
-j$(getconf _NPROCESSORS_ONLN) \
|
||||
--deployment --without development test
|
||||
yarn install --pure-lockfile
|
||||
```
|
||||
|
||||
need to set database credentials before the env file is built
|
||||
```
|
||||
sudo su postgres
|
||||
psql
|
||||
alter role mastodon password 'mastodon';
|
||||
\q
|
||||
|
||||
sudo vim /etc/redis/redis.conf
|
||||
requirepass password
|
||||
```
|
||||
|
||||
|
||||
```
|
||||
RAILS_ENV=production bundle exec rake mastodon:setup
|
||||
```
|
||||
this will prompt for a bunch of settings; after a while, answer no to sending mail from localhost
|
||||
it will then prompt for SMTP setup
|
||||
|
||||
|
||||
compilation failed, complained about memory
|
||||
|
||||
setup nginx files:
|
||||
```
|
||||
cp /home/mastodon/live/dist/nginx.conf /etc/nginx/sites-available/mastodon
|
||||
ln -s /etc/nginx/sites-available/mastodon /etc/nginx/sites-enabled/mastodon
|
||||
```
|
||||
|
||||
then you have to replace example.com with a target domain in the nginx files
|
||||
using vim -> `:%s/example.com/hptrow.me/g`
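The same replacement can be done non-interactively with sed (the demo file below is throwaway; on a real box you would target the files under /etc/nginx/sites-available):

```shell
# demo on a scratch file; sed -i edits in place
printf 'server_name example.com;\n' > /tmp/mastodon-nginx-demo.conf
sed -i 's/example\.com/hptrow.me/g' /tmp/mastodon-nginx-demo.conf
cat /tmp/mastodon-nginx-demo.conf
```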
|
||||
|
||||
|
||||
copy service files:
|
||||
```
|
||||
cp /home/mastodon/live/dist/mastodon-*.service /etc/systemd/system/
|
||||
```
|
||||
|
||||
reload systemd and start the services
|
||||
```
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl start mastodon-web mastodon-sidekiq mastodon-streaming
|
||||
sudo systemctl enable mastodon-*
|
42
mutt.md
42
mutt.md
@ -1,21 +1,21 @@
|
||||
## Office 365 Setup
|
||||
|
||||
[office365 config](https://github.com/ork/mutt-office365)
|
||||
|
||||
[setup html viewer in mutt](http://jasonwryan.com/blog/2012/05/12/mutt/)
|
||||
|
||||
git clone https://github.com/ork/mutt-office365 ./.mutt
|
||||
|
||||
* requires w3m
|
||||
* add this to .mutt/muttrc
|
||||
```
|
||||
auto_view text/html # view html automatically
|
||||
alternative_order text/plain text/enriched text/html # save html for last
|
||||
```
|
||||
* add this to .mutt/mailcap
|
||||
```
|
||||
text/html; w3m -I %{charset} -T text/html; copiousoutput;
|
||||
```
|
||||
|
||||
|
||||
install from source example [here](http://www.guckes.net/Mutt/install.php3)
|
||||
|
5
nginx.md
Normal file
5
nginx.md
Normal file
@ -0,0 +1,5 @@
|
||||
https://nginx.org/en/docs/http/configuring_https_servers.html
|
||||
|
||||
setting up a reverse proxy for different subdomains
|
||||
|
||||
https://serverfault.com/questions/753105/how-to-reverse-proxy-to-different-places-depending-on-subdomain-in-nginx
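A minimal sketch of the idea from that answer, with hypothetical subdomains and backend ports (one `server` block per subdomain, each proxying to its own local service):

```
server {
    listen 80;
    server_name app.hptrow.me;
    location / { proxy_pass http://127.0.0.1:3000; }
}
server {
    listen 80;
    server_name api.hptrow.me;
    location / { proxy_pass http://127.0.0.1:3001; }
}
```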
|
@ -1,34 +1,36 @@
|
||||
version management
|
||||
===================================================================================================================================
|
||||
|
||||
## nvm
|
||||
|
||||
can use nvm to manage nodejs versions.
|
||||
not really sure how this works, but it is per user and operates off bash scripts and variables
|
||||
|
||||
going to have to pull from github. It will curl to clone the repo and then add some stuff to your bashrc to make it work.
|
||||
`exec bash` after install to refresh
|
||||
|
||||
nvm install --lts
|
||||
nvm use version
|
||||
|
||||
## npm n helper
|
||||
|
||||
npm can be used to manage node itself through n
|
||||
https://github.com/tj/n
|
||||
|
||||
sudo npm cache clean -f
|
||||
|
||||
|
||||
1. setup & *own* a directory for versions
|
||||
* `sudo mkdir -p /usr/local/n`
|
||||
* `sudo chown -R $(whoami) /usr/local/n`
|
||||
2. own the installation folders
|
||||
* `sudo chown -R $(whoami) /usr/local/bin /usr/local/lib /usr/local/include /usr/local/share`
|
||||
3. install n: `sudo npm install -g n`
|
||||
|
||||
|
||||
## upgrading npm in windows
|
||||
`npm install -g npm-windows-upgrade`
|
||||
or just download from the website
|
||||
|
||||
## npm
|
||||
to update npm do `npm install npm@latest -g`
|
5
openssl.md
Normal file
5
openssl.md
Normal file
@ -0,0 +1,5 @@
|
||||
to create a self-signed certificate and bypass supplying a password (using the `-nodes` option)
|
||||
this will also create a crt file that can be installed as a trusted root certificate authority
|
||||
`openssl req -x509 -newkey rsa:2048 -keyout privateKey.pem -out certificate.crt -days 365 -nodes`
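To confirm what was generated, the certificate can be inspected with `openssl x509` (paths below mirror the command above but write to /tmp; `-subj` is added so no prompts appear):

```shell
# generate non-interactively, then print the subject to verify
openssl req -x509 -newkey rsa:2048 -keyout /tmp/privateKey.pem -out /tmp/certificate.crt \
  -days 365 -nodes -subj "/CN=localhost"
openssl x509 -in /tmp/certificate.crt -noout -subject
```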
|
||||
|
||||
|
11
pg_restore.md
Normal file
11
pg_restore.md
Normal file
@ -0,0 +1,11 @@
|
||||
[postgresql.org](https://www.postgresql.org/docs/current/app-pgrestore.html)
|
||||
|
||||
pg_restore has two modes. If you just feed it a backup file, it produces SQL that you would have to pipe to psql or redirect to a file to run.
|
||||
If you specify -d it will actually connect and execute the restore commands.
|
||||
|
||||
-C combined with -c will drop and create a new database to restore into
|
||||
-O will not assign ownership to the original owner, but whoever is connecting instead
|
||||
|
||||
sample restore command:
|
||||
|
||||
`pg_restore -O -C -c -U ptrowbridge -p 5433 -d ubm -h localhost ubm.backup`
|
@ -1 +1,5 @@
|
||||
"C:\PostgreSQL\perl5\perl\bin\perl.exe" "C:\PostgreSQL\pgbadger\pgbadger" -o "C:\Users\ptrowbridge\Downloads\pgb.html" "C:\PostgreSQL\data\logs\pg10\postgresql-Mon.log" "C:\PostgreSQL\data\logs\pg10\postgresql-Tue.log" "C:\PostgreSQL\data\logs\pg10\postgresql-Wed.log" "C:\PostgreSQL\data\logs\pg10\postgresql-Thu.log" "C:\PostgreSQL\data\logs\pg10\postgresql-Fri.log"
|
||||
creates a file `out.html` by default
|
||||
|
||||
sudo pgbadger --prefix '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h,remote=%r ' /var/log/postgresql/postgresql-2020-02*
|
||||
|
32
pghero.md
Normal file
32
pghero.md
Normal file
@ -0,0 +1,32 @@
|
||||
https://github.com/ankane/pghero/blob/master/guides/Linux.md
|
||||
|
||||
get:
|
||||
```
|
||||
wget -qO- https://dl.packager.io/srv/pghero/pghero/key | sudo apt-key add -
|
||||
sudo wget -O /etc/apt/sources.list.d/pghero.list \
|
||||
https://dl.packager.io/srv/pghero/pghero/master/installer/ubuntu/18.04.repo
|
||||
sudo apt-get update
|
||||
sudo apt-get -y install pghero
|
||||
```
|
||||
|
||||
Add your database. (use \ to escape special password chars)
|
||||
```
|
||||
sudo pghero config:set DATABASE_URL=postgres://user:password@hostname:5432/dbname
|
||||
```
|
||||
|
||||
And optional authentication.
|
||||
```
|
||||
sudo pghero config:set PGHERO_USERNAME=link
|
||||
sudo pghero config:set PGHERO_PASSWORD=hyrule
|
||||
```
|
||||
|
||||
Start the server
|
||||
```
|
||||
sudo pghero config:set PORT=3001
|
||||
sudo pghero config:set RAILS_LOG_TO_STDOUT=disabled
|
||||
sudo pghero scale web=1
|
||||
```
|
||||
|
||||
Confirm it’s running with:
|
||||
|
||||
`curl -v http://localhost:3001/`
|
@ -1,77 +1,77 @@
|
||||
Logic to setup production plan, inventory balances, purchases, and shipments
|
||||
|
||||
Starting point
|
||||
- known balances STKB
|
||||
- known available BOLH - not posted
|
||||
- known prod schedule SOFT
|
||||
- known shipments Sales Forecast
|
||||
- forecasted orders Sales Forecast
|
||||
- machines that a part can run on ??
|
||||
- actual run-time performance Alternates
|
||||
- actual BOM performance Alternates
|
||||
- actual scrap performance Alternates
|
||||
- available machine time ??
|
||||
|
||||
Populate
|
||||
- forecasted prod schedule
|
||||
- forecasted on-hand (via forecast perpetual transactions)
|
||||
- forecasted available (via forecast transactions)
|
||||
- forecasted purchases
|
||||
|
||||
Iterate through each calendar day
|
||||
1. materialize forecasted purchases
|
||||
1. update on-hand & available
|
||||
2. materialize production
|
||||
1. update on-hand & available
|
||||
3. materialize transfers
|
||||
1. update on-hand & available
|
||||
3. materialize shipments
|
||||
1. update on-hand & available
|
||||
4. process forecasted order submissions
|
||||
1. check for inventory available
|
||||
1. Yes
|
||||
1. mark unavailable
|
||||
2. schedule shipment for request date
|
||||
2. No or partial
|
||||
1. mark unavailable any partial
|
||||
2. schedule on next open slot regardless of request date (each part should be mapped to certain set of machines)
|
||||
1. raw materials available
|
||||
1. Yes
|
||||
1. mark unavailable (at begin prod date?)
|
||||
2. No
|
||||
1. mark unavailable any partial (at begin prod date?)
|
||||
2. schedule a purchase net of lead time
|
||||
2. sub-components available?
|
||||
1. Yes
|
||||
1. mark unavailable (at begin prod date?)
|
||||
2. No
|
||||
1. (return to 4.1.2.2)
|
||||
3. schedule transfer of production after completion if necessary
|
||||
3. schedule shipment for request date, or production date if past request date
|
||||
|
||||
|
||||
snap-shot STKB
|
||||
snap-shot BOLH
|
||||
snap-shot SOFT
|
||||
|
||||
|
||||
some notes
|
||||
-----------------
|
||||
|
||||
* shift schedules
|
||||
* parallel resources
|
||||
* setup time
|
||||
* efficiencies
|
||||
* scrap rates
|
||||
* blends
|
||||
* known 'A' item volumes planned regardless of demand
|
||||
* visibility window for incoming orders
|
||||
* grouping items to reduce change-overs
|
||||
* initial start-up: merge with current machine schedule
|
||||
* limit start date to child item availability
|
||||
* procurement mix
|
||||
* purchase lag
|
||||
* transfer lag
|
||||
* order priority
|
||||
* inventory minimums
|
||||
* tool availability
|
@ -1,51 +1,51 @@
|
||||
A method to planning sales
|
||||
----------------------------
|
||||
|
||||
## Summary
|
||||
|
||||
1. copy history
|
||||
|
||||
1. start with open orders
|
||||
2. add orders as placed in past
|
||||
1. true-up to current run rate
|
||||
1. normalize price for current pricing
|
||||
1. will need to identify blocks in the base period that best represent pricing efforts
|
||||
2. scale prior periods to match final pricing
|
||||
2. exclude expired products/customers
|
||||
3. scale new developments to reflect full-year (new products customers)
|
||||
4. update cost to current
|
||||
5. request date attainment performance
|
||||
3. walk prior period sales to new baseline sales as change in run-rate
|
||||
|
||||
2. build in changes to current run-rate
|
||||
|
||||
1. volume changes
|
||||
2. pricing changes
|
||||
3. new products (must be defined in future at a minimum)
|
||||
4. future cost changes
|
||||
5. request date attainment
|
||||
|
||||
|
||||
|
||||
| timeline | day | running days | responsible |
|
||||
| -------------------------------------------- | --- | ------------ | ----------- |
|
||||
| **_establish run-rate sales_** | | | |
|
||||
| copy history | 1 | 1 | executor |
|
||||
| identify pricing windows | 1 | 2 | sales team |
|
||||
| scale windows to match final | 1 | 3 | executor |
|
||||
| identify expired products/customers | 3 | 6 | sales team |
|
||||
| eliminate expired volume | 1 | 7 | executor |
|
||||
| identify new products/customers | 3 | 10 | sales team |
|
||||
| scale new to full year volume | 1 | 11 | executor |
|
||||
| **_load new plans_** | | | |
|
||||
| layer in planned changes not yet implemented | | | |
|
||||
| identify changes to existing volume | 3 | 14 | sales team |
|
||||
| load changes | 1 | 15 | executor |
|
||||
| identify changes in price | 3 | 18 | sales team |
|
||||
| load changes | 1 | 19 | executor |
|
||||
| identify new products | 3 | 22 | sales team |
|
||||
| load new | 1 | 23 | executor |
|
||||
|
||||
|
||||
|
||||
Table Layout
|
||||
|
13
plv8.md
Normal file
13
plv8.md
Normal file
@ -0,0 +1,13 @@
|
||||
|
||||
[PLV8 Documentation For Building](https://plv8.github.io/#building)
|
||||
|
||||
* To build plv8 you have to download a tarball of the code and use `make`
|
||||
* Beyond the listed dependencies I had to `apt-get install postgresql-server-dev` to get a `postgres.h` file that is needed
|
||||
|
||||
* install dependencies
|
||||
`sudo apt-get install libtinfo5 build-essential pkg-config cmake git postgresql-server-dev-15`
|
||||
* get source
|
||||
`sudo git clone https://github.com/plv8/plv8.git`
|
||||
`cd plv8 && sudo git checkout v3.2.0`
|
||||
`sudo make`
|
||||
`sudo make install`
|
35
postgres-odbc.ps1
Normal file
35
postgres-odbc.ps1
Normal file
@ -0,0 +1,35 @@
|
||||
# Check if the script is running as administrator
|
||||
$isAdmin = ([System.Security.Principal.WindowsIdentity]::GetCurrent()).groups -match "S-1-5-32-544"
|
||||
if (-not $isAdmin) {
|
||||
# Re-launch the script with elevated privileges
|
||||
Start-Process powershell -ArgumentList "-NoProfile -ExecutionPolicy Bypass -File `"$PSCommandPath`"" -Verb RunAs
|
||||
exit
|
||||
}
|
||||
|
||||
# note elevation
|
||||
Write-Host "Running with administrator privileges"
|
||||
|
||||
# setup sources and destinations
|
||||
$zipUrl = "https://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_15_00_0000.zip"
|
||||
$outputPath = Join-Path $env:USERPROFILE "Downloads\psqlodbc.zip"
|
||||
$extractPath = Join-Path $env:USERPROFILE "Downloads\psqlodbc"
|
||||
|
||||
# Download the ZIP file
|
||||
Invoke-WebRequest -Uri $zipUrl -OutFile $outputPath
|
||||
|
||||
# Extract the contents
|
||||
Expand-Archive -Path $outputPath -DestinationPath $extractPath -Force
|
||||
|
||||
# Find and run the MSI file
|
||||
$msiFile = Get-ChildItem -Path $extractPath -Filter "psqlodbc-setup*" | Where-Object { ! $_.PSIsContainer }
|
||||
if ($msiFile) {
|
||||
Start-Process -FilePath $msiFile.FullName -Wait
|
||||
}
|
||||
|
||||
# Remove downloaded files and extracted contents
|
||||
Remove-Item $outputPath -Force
|
||||
Remove-Item $extractPath -Recurse -Force
|
||||
|
||||
# setup odbc dsn
|
||||
Add-OdbcDsn -Name "usmidsap02" -DriverName "PostgreSQL Unicode(x64)" -DsnType "System" -SetPropertyValue @("Server=usmidsap02", "Database=ubm", "UserName=report","Password=report")
|
||||
|
280
postgres.md
Normal file
280
postgres.md
Normal file
@ -0,0 +1,280 @@
|
||||
Install
|
||||
=========================================================
|
||||
|
||||
[PostgreSQL: Linux downloads (Ubuntu)](https://www.postgresql.org/download/linux/ubuntu/)
|
||||
```
|
||||
# Create the file repository configuration:
|
||||
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
|
||||
|
||||
# Import the repository signing key:
|
||||
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
|
||||
|
||||
# Update the package lists:
|
||||
sudo apt-get update
|
||||
|
||||
# Install the latest version of PostgreSQL.
|
||||
# If you want a specific version, use 'postgresql-12' or similar instead of 'postgresql':
|
||||
sudo apt-get -y install postgresql
|
||||
```
|
||||
|
||||
|
||||
|
||||
SSPI
|
||||
========================================================
|
||||
|
||||
setup for single sign on with [SSPI](https://wiki.postgresql.org/wiki/Configuring_for_single_sign-on_using_SSPI_on_Windows)
|
||||
|
||||
md5 hash is salted with username in front
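So the stored value is literally `'md5' || md5(password || username)`; a quick sketch with a hypothetical user `app` and password `secret`:

```shell
# postgres md5 auth stores md5 of (password concatenated with username), prefixed with 'md5'
hash=$(printf '%s' 'secretapp' | md5sum | awk '{print $1}')
echo "md5$hash"
```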
|
||||
|
||||
Memory
|
||||
=========================================================
|
||||
see what's in the buffer cache with pg_buffercache
|
||||
|
||||
`CREATE EXTENSION pg_buffercache`
|
||||
|
||||
```
|
||||
SELECT
|
||||
c.relname,
|
||||
COUNT(*) AS buffers
|
||||
FROM
|
||||
pg_class c
|
||||
INNER JOIN pg_buffercache b ON
|
||||
b.relfilenode = c.relfilenode
|
||||
INNER JOIN pg_database d ON
|
||||
( b.reldatabase = d.oid
|
||||
AND d.datname = CURRENT_DATABASE())
|
||||
GROUP BY
|
||||
c.relname
|
||||
ORDER BY
|
||||
2 DESC
|
||||
LIMIT 100;
|
||||
```
|
||||
|
||||
Alter Column
|
||||
==========================================================
|
||||
ALTER TABLE rlarp.pcore ALTER COLUMN pack SET DATA TYPE numeric USING pack::numeric
|
||||
|
||||
the psql wrapper always runs the latest installed version, but pg_dump does not; you have to set the default version in ~/.postgresqlrc
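If I remember the postgresql-common format correctly, each line of ~/.postgresqlrc is `version cluster database` (version and cluster here are assumptions, adjust to your install):

```
# ~/.postgresqlrc — pin pg_dump & friends to the 14/main cluster for all databases
14 main *
```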
|
||||
|
||||
PostGIS
|
||||
==========================================================
|
||||
|
||||
quickstart tutorial
|
||||
|
||||
https://blog.crunchydata.com/blog/postgis-for-newbies
|
||||
|
||||
Move Data Directory
|
||||
===========================================================
|
||||
https://www.digitalocean.com/community/tutorials/how-to-move-a-postgresql-data-directory-to-a-new-location-on-ubuntu-18-04
|
||||
|
||||
copy the data
|
||||
`sudo rsync -av /var/lib/postgresql /target_dir`
|
||||
|
||||
rename original as a backup
|
||||
`sudo mv /var/lib/postgresql/10/main /var/lib/postgresql/10/main.bak`
|
||||
|
||||
point postgres to the new data directory
|
||||
`sudo vim /etc/postgresql/14/main/postgresql.conf`
|
||||
` data_directory = '/mnt/volume_nyc1_01/postgresql/10/main'`
|
||||
|
||||
remove the old data
|
||||
`sudo rm -Rf /var/lib/postgresql/10/main.bak`
|
||||
|
||||
Special Aggregates
|
||||
==========================================================
|
||||
To extract aggregate definitions you can select from `pg_aggregate`
|
||||
|
||||
|
||||
SQL for current aggregates I'm using now:
|
||||
```
|
||||
CREATE OR REPLACE FUNCTION public.jsonb_concat(
|
||||
state jsonb,
|
||||
concat jsonb)
|
||||
RETURNS jsonb AS
|
||||
$BODY$
|
||||
BEGIN
|
||||
--RAISE notice 'state is %', state;
|
||||
--RAISE notice 'concat is %', concat;
|
||||
RETURN state || concat;
|
||||
END;
|
||||
$BODY$
|
||||
LANGUAGE plpgsql VOLATILE
|
||||
COST 100;
|
||||
|
||||
|
||||
CREATE OR REPLACE FUNCTION public.jsonb_concat_distinct_arr(
|
||||
state jsonb,
|
||||
concat jsonb)
|
||||
RETURNS jsonb AS
|
||||
$BODY$
|
||||
BEGIN
|
||||
--RAISE notice 'state is %', state;
|
||||
--RAISE notice 'concat is %', concat;
|
||||
        RETURN (SELECT jsonb_agg(DISTINCT e) FROM jsonb_array_elements(state || concat) AS t(e));
|
||||
END;
|
||||
$BODY$
|
||||
LANGUAGE plpgsql VOLATILE
|
||||
COST 100;
|
||||
|
||||
|
||||
DROP AGGREGATE IF EXISTS public.jsonb_arr_aggc(jsonb);
|
||||
CREATE AGGREGATE public.jsonb_arr_aggc(jsonb) (
|
||||
SFUNC=public.jsonb_concat,
|
||||
STYPE=jsonb,
|
||||
INITCOND='[]'
|
||||
);
|
||||
|
||||
DROP AGGREGATE IF EXISTS public.jsonb_obj_aggc(jsonb);
|
||||
CREATE AGGREGATE public.jsonb_obj_aggc(jsonb) (
|
||||
SFUNC=public.jsonb_concat,
|
||||
STYPE=jsonb,
|
||||
INITCOND='{}'
|
||||
);
|
||||
|
||||
CREATE OR REPLACE FUNCTION public.jsonb_array_add_distinct(_arr jsonb, _add text) RETURNS jsonb AS
|
||||
$$
|
||||
DECLARE
|
||||
_ret jsonb;
|
||||
|
||||
BEGIN
|
||||
|
||||
SELECT
|
||||
jsonb_agg(DISTINCT x.ae)
|
||||
INTO
|
||||
_ret
|
||||
FROM
|
||||
(
|
||||
SELECT jsonb_array_elements_text(_arr) ae
|
||||
UNION ALL
|
||||
SELECT _add ae
|
||||
) x;
|
||||
|
||||
RETURN _ret;
|
||||
|
||||
END;
|
||||
$$
|
||||
LANGUAGE plpgsql;
|
||||
DROP FUNCTION IF EXISTS public.jsonb_array_string_agg;
|
||||
CREATE FUNCTION public.jsonb_array_string_agg(_arr jsonb, _delim text) RETURNS text AS
|
||||
$$
|
||||
DECLARE
|
||||
_ret text;
|
||||
|
||||
BEGIN
|
||||
|
||||
SELECT
|
||||
string_agg(ae.v,_delim)
|
||||
INTO
|
||||
_ret
|
||||
FROM
|
||||
jsonb_array_elements_text(_arr) ae(v);
|
||||
|
||||
return _ret;
|
||||
|
||||
END;
|
||||
$$
|
||||
LANGUAGE plpgsql;
|
||||
|
||||
|
||||
```
|
||||
|
||||
PSQL
|
||||
===============================================================
|
||||
use -E to show definitions of SQL used for \d commands
|
||||
|
||||
Descriptions
|
||||
===================================================
|
||||
```
|
||||
SELECT
|
||||
c.relname table_name,
|
||||
td.description table_description,
|
||||
n.nspname schema_name,
|
||||
a.attname As column_name,
|
||||
cd.description column_description
|
||||
FROM
|
||||
pg_class As c
|
||||
INNER JOIN pg_attribute As a ON
|
||||
c.oid = a.attrelid
|
||||
LEFT JOIN pg_namespace n ON
|
||||
n.oid = c.relnamespace
|
||||
LEFT JOIN pg_tablespace t ON
|
||||
t.oid = c.reltablespace
|
||||
LEFT JOIN pg_description As cd ON
|
||||
cd.objoid = c.oid
|
||||
AND cd.objsubid = a.attnum
|
||||
LEFT JOIN pg_description As td ON
|
||||
td.objoid = c.oid
|
||||
AND td.objsubid = 0
|
||||
WHERE
|
||||
c.relkind IN('r', 'v')
|
||||
--AND a.attname = 'd07txn'
|
||||
AND cd.description like '%Transaction Number%'
|
||||
ORDER BY
|
||||
n.nspname,
|
||||
c.relname,
|
||||
a.attname
|
||||
```
|
||||
|
||||
Foreign Data Wrapper
|
||||
===============================================================
|
||||
```
|
||||
CREATE EXTENSION postgres_fdw;
|
||||
|
||||
CREATE SERVER hptrow
|
||||
FOREIGN DATA WRAPPER postgres_fdw
|
||||
OPTIONS (host 'hptrow.me', port '54339', dbname 'ubm');
|
||||
|
||||
CREATE USER MAPPING FOR ptrowbridge
|
||||
SERVER hptrow
|
||||
OPTIONS (user 'ptrowbridge', password 'gyaswddh1983');
|
||||
|
||||
CREATE SCHEMA frlarp;
|
||||
|
||||
IMPORT FOREIGN SCHEMA rlarp
|
||||
FROM SERVER hptrow INTO frlarp;
|
||||
```
|
||||
|
||||
|
||||
User DDL
|
||||
===============================================================
|
||||
```
|
||||
DROP USER IF EXISTS api;
|
||||
|
||||
SET password_encryption = 'scram-sha-256';
|
||||
|
||||
CREATE ROLE api WITH
|
||||
LOGIN
|
||||
NOSUPERUSER
|
||||
NOCREATEDB
|
||||
NOCREATEROLE
|
||||
INHERIT
|
||||
NOREPLICATION
|
||||
CONNECTION LIMIT -1
|
||||
PASSWORD 'api';
|
||||
|
||||
--------------------grant--------------------------------------------------
|
||||
|
||||
GRANT USAGE ON SCHEMA lgdat TO api;
|
||||
|
||||
GRANT SELECT /*, UPDATE, INSERT, DELETE*/ ON ALL TABLES IN SCHEMA lgdat TO api;
|
||||
|
||||
GRANT USAGE ON ALL SEQUENCES IN SCHEMA lgdat TO api;
|
||||
|
||||
ALTER DEFAULT PRIVILEGES IN SCHEMA lgdat GRANT SELECT/*, UPDATE, INSERT, DELETE*/ ON TABLES TO api;
|
||||
|
||||
ALTER DEFAULT PRIVILEGES IN SCHEMA lgdat GRANT USAGE ON SEQUENCES TO api;
|
||||
|
||||
---------------------------revoke---------------------------------------
|
||||
|
||||
REVOKE USAGE ON SCHEMA lgdat FROM api;
|
||||
|
||||
REVOKE USAGE ON SCHEMA lgdat FROM api;
|
||||
|
||||
REVOKE SELECT , UPDATE, INSERT, DELETE ON ALL TABLES IN SCHEMA lgdat FROM api;
|
||||
|
||||
REVOKE USAGE ON ALL SEQUENCES IN SCHEMA lgdat FROM api;
|
||||
|
||||
ALTER DEFAULT PRIVILEGES IN SCHEMA lgdat REVOKE SELECT, UPDATE, INSERT, DELETE ON TABLES FROM api;
|
||||
|
||||
ALTER DEFAULT PRIVILEGES IN SCHEMA lgdat REVOKE USAGE ON SEQUENCES FROM api;
|
||||
```
|
@ -1,37 +0,0 @@
[mailing_list](https://www.postgresql.org/message-id/flat/CAHq%2BKHJOvZT8M-o_sE%2BQzqqBGnUjNubWo_rRmpHZyw5ZUuaseg%40mail.gmail.com)

wouldn't that be Pg authing against the OS (pam) which in turn is forwarding to krb5? which seems like an extra added step

sfrost [11:11 AM]
it's basically this:
ktpass -out postgres.keytab -princ POSTGRES/centos(at)MY(dot)TESTDOMAIN(dot)LAN -mapUser enterprisedb -pass XXXXXX -crypto DES-CBC-MD5
(except adjusted a bit to make it not use a shitty crypto)
you use ktpass to create your keytab file
copy the keytab file to the Linux box

arossouw [11:12 AM]
Seems like effort, i'll just play dumb on that one

sfrost [11:12 AM]
oh, gotta fix the princ too or whatever
but it's not that hard
and you might have to configure the realms, but not necessarily (that info is often in DNS already)
then you just tell PG where the keytab file is, set gssapi in PG's hba.conf, and create your users using their princ names, like 'sfrost@SNOWMAN.NET'

dtseiler [11:13 AM]
I'm with @hunleyd, I'd love to see a great howto post on that.

arossouw [11:14 AM]
I suppose the question is what is the advantage of using kerberos, and then deciding if it's worth spending time on

sfrost [11:14 AM]
I just wrote it
^^^ see above
also wrote the advantage...

hunleyd [11:14 AM]
maybe i'll try this as a 10% project some day

jsonb concatenation aggregates
===============================================================
```
CREATE OR REPLACE FUNCTION public.jsonb_concat(
    state jsonb,
    concat jsonb)
RETURNS jsonb AS
$BODY$
BEGIN
    --RAISE notice 'state is %', state;
    --RAISE notice 'concat is %', concat;
    RETURN state || concat;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;


CREATE OR REPLACE FUNCTION public.jsonb_concat_distinct_arr(
    state jsonb,
    concat jsonb)
RETURNS jsonb AS
$BODY$
BEGIN
    --RAISE notice 'state is %', state;
    --RAISE notice 'concat is %', concat;
    --concatenate the arrays, then keep one copy of each element
    RETURN (SELECT jsonb_agg(DISTINCT elem) FROM jsonb_array_elements(state || concat) AS elem);
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;


DROP AGGREGATE IF EXISTS public.jsonb_arr_aggc(jsonb);
CREATE AGGREGATE public.jsonb_arr_aggc(jsonb) (
    SFUNC=public.jsonb_concat,
    STYPE=jsonb,
    INITCOND='[]'
);

DROP AGGREGATE IF EXISTS public.jsonb_obj_aggc(jsonb);
CREATE AGGREGATE public.jsonb_obj_aggc(jsonb) (
    SFUNC=public.jsonb_concat,
    STYPE=jsonb,
    INITCOND='{}'
);
```
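Each aggregate folds every input value into the running state with SFUNC, starting from INITCOND. A Python simulation of `jsonb_arr_aggc`'s semantics (my sketch, using lists to stand in for jsonb arrays):

```python
from functools import reduce

def jsonb_concat(state, concat):
    """Mirror of public.jsonb_concat: jsonb '||' on two arrays is list concatenation."""
    return state + concat

rows = [[1, 2], [2, 3], [4]]

# jsonb_arr_aggc: fold the rows with SFUNC, starting from INITCOND='[]'
result = reduce(jsonb_concat, rows, [])
print(result)  # [1, 2, 2, 3, 4]

# the distinct variant keeps the first copy of each element
seen, distinct = set(), []
for elem in result:
    if elem not in seen:
        seen.add(elem)
        distinct.append(elem)
print(distinct)  # [1, 2, 3, 4]
```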

Table and column comments
===============================================================
```
SELECT
    c.relname table_name,
    td.description table_description,
    n.nspname schema_name,
    a.attname AS column_name,
    cd.description column_description
FROM
    pg_class AS c
    INNER JOIN pg_attribute AS a ON
        c.oid = a.attrelid
    LEFT JOIN pg_namespace n ON
        n.oid = c.relnamespace
    LEFT JOIN pg_tablespace t ON
        t.oid = c.reltablespace
    LEFT JOIN pg_description AS cd ON
        cd.objoid = c.oid
        AND cd.objsubid = a.attnum
    LEFT JOIN pg_description AS td ON
        td.objoid = c.oid
        AND td.objsubid = 0
WHERE
    c.relkind IN ('r', 'v')
    --AND a.attname = 'd07txn'
    AND cd.description LIKE '%Transaction Number%'
ORDER BY
    n.nspname,
    c.relname,
    a.attname
```

pg_hba.conf
===============================================================
# PostgreSQL Client Authentication Configuration File
# ===================================================
#
# Refer to the "Client Authentication" section in the PostgreSQL
# documentation for a complete description of this file.  A short
# synopsis follows.
#
# This file controls: which hosts are allowed to connect, how clients
# are authenticated, which PostgreSQL user names they can use, which
# databases they can access.  Records take one of these forms:
#
# local      DATABASE  USER  METHOD  [OPTIONS]
# host       DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostssl    DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostnossl  DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
#
# (The uppercase items must be replaced by actual values.)
#
# The first field is the connection type: "local" is a Unix-domain
# socket, "host" is either a plain or SSL-encrypted TCP/IP socket,
# "hostssl" is an SSL-encrypted TCP/IP socket, and "hostnossl" is a
# plain TCP/IP socket.
#
# DATABASE can be "all", "sameuser", "samerole", "replication", a
# database name, or a comma-separated list thereof. The "all"
# keyword does not match "replication". Access to replication
# must be enabled in a separate record (see example below).
#
# USER can be "all", a user name, a group name prefixed with "+", or a
# comma-separated list thereof.  In both the DATABASE and USER fields
# you can also write a file name prefixed with "@" to include names
# from a separate file.
#
# ADDRESS specifies the set of hosts the record matches.  It can be a
# host name, or it is made up of an IP address and a CIDR mask that is
# an integer (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that
# specifies the number of significant bits in the mask.  A host name
# that starts with a dot (.) matches a suffix of the actual host name.
# Alternatively, you can write an IP address and netmask in separate
# columns to specify the set of hosts.  Instead of a CIDR-address, you
# can write "samehost" to match any of the server's own IP addresses,
# or "samenet" to match any address in any subnet that the server is
# directly connected to.
#
# METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
# Note that "password" sends passwords in clear text; "md5" or
# "scram-sha-256" are preferred since they send encrypted passwords.
#
# OPTIONS are a set of options for the authentication in the format
# NAME=VALUE.  The available options depend on the different
# authentication methods -- refer to the "Client Authentication"
# section in the documentation for a list of which options are
# available for which authentication methods.
#
# Database and user names containing spaces, commas, quotes and other
# special characters must be quoted.  Quoting one of the keywords
# "all", "sameuser", "samerole" or "replication" makes the name lose
# its special character, and just match a database or username with
# that name.
#
# This file is read on server startup and when the server receives a
# SIGHUP signal.  If you edit the file on a running system, you have to
# SIGHUP the server for the changes to take effect, run "pg_ctl reload",
# or execute "SELECT pg_reload_conf()".
#
# Put your actual configuration here
# ----------------------------------
#
# If you want to allow non-local connections, you need to add more
# "host" records.  In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.


# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database superuser can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (custom daily cronjobs, replication, and similar tasks).
#
# Database administrative login by Unix domain socket
local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            md5
host    replication     all             ::1/128                 md5
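The ADDRESS matching described in the comments above is first-match-wins CIDR containment, top to bottom. A rough Python model using the stdlib `ipaddress` module (the entry list is abbreviated, not the full file, and real matching also involves database, user, and connection type):

```python
import ipaddress

# (database, user, cidr, method) rows, in file order
entries = [
    ("all", "all", "127.0.0.1/32", "md5"),
    ("all", "all", "::1/128", "md5"),
]

def auth_method(client_ip, entries):
    """Return the METHOD of the first entry whose ADDRESS contains the client."""
    addr = ipaddress.ip_address(client_ip)
    for db, user, cidr, method in entries:
        if addr in ipaddress.ip_network(cidr):
            return method
    return "implicit reject"  # no matching record: the connection is refused

print(auth_method("127.0.0.1", entries))  # md5
print(auth_method("10.1.2.3", entries))   # implicit reject
```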

pg_hba.conf (remote access)
===============================================================
# Database administrative login by Unix domain socket
#local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD
# IPv4 local & remote connections:
host    ubm             report          127.0.0.1/32            trust
host    ubm             powerbi         127.0.0.1/32            trust
host    ubm             api             127.0.0.1/32            md5
host    dev             api             127.0.0.1/32            md5
host    all             all             127.0.0.1/32            scram-sha-256
host    ubm             report          0.0.0.0/0               trust
host    ubm             api             0.0.0.0/0               md5
host    dev             api             0.0.0.0/0               md5
host    ubm             ptrowbridge_md5 0.0.0.0/0               md5
host    all             all             0.0.0.0/0               scram-sha-256
# IPv6 local connections:
host    ubm             report          fe80::/10               trust
host    ubm             powerbi         fe80::/10               trust
host    ubm             api             fe80::/10               md5
host    dev             api             fe80::/10               md5
host    ubm             ptrowbridge_md5 fe80::/10               md5
host    all             all             fe80::/10               scram-sha-256
host    all             all             ::/10                   scram-sha-256
host    all             all             ::/0                    scram-sha-256
setup for single sign on with [SSPI](https://wiki.postgresql.org/wiki/Configuring_for_single_sign-on_using_SSPI_on_Windows)

the md5 password hash is salted with the username: the stored value is `'md5' || md5(password || username)`
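A quick check of that format with Python's hashlib:

```python
import hashlib

def pg_md5_password(password, username):
    """What PostgreSQL stores for md5 auth: 'md5' plus the hex md5 of the
    password concatenated with the username (the username acts as the salt)."""
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

stored = pg_md5_password("api", "api")
print(stored)       # 'md5' followed by 32 hex digits
print(len(stored))  # 35
```

Because the salt is just the username, identical passwords for the same username always hash identically, which is one reason scram-sha-256 is preferred.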

Memory
=========================================================
see what's in the buffer cache with pg_buffercache

`CREATE EXTENSION pg_buffercache`

```
SELECT
    c.relname,
    COUNT(*) AS buffers
FROM
    pg_class c
    INNER JOIN pg_buffercache b ON
        b.relfilenode = c.relfilenode
    INNER JOIN pg_database d ON
        b.reldatabase = d.oid
        AND d.datname = current_database()
GROUP BY
    c.relname
ORDER BY
    2 DESC
LIMIT 100;
```
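Each buffer counted above is one page, 8 kB under the default build, so the counts convert directly to sizes:

```python
PAGE_KB = 8  # default PostgreSQL block size; nonstandard builds can differ

def buffers_to_mb(buffers):
    """Convert a pg_buffercache buffer count to megabytes."""
    return buffers * PAGE_KB / 1024

print(buffers_to_mb(16384))  # 128.0 -> an entire default shared_buffers of 128MB
```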
Version 10 Features
===================

Auto Logging [blog](http://databasedoings.blogspot.com/2017/07/cool-stuff-in-postgresql-10-auto-logging.html)

Transition Tables [blog](http://databasedoings.blogspot.com/2017/07/cool-stuff-in-postgresql-10-transition.html)

Correlated Columns Query Plan [blog](https://blog.2ndquadrant.com/pg-phriday-crazy-correlated-column-crusade/)

Native Partitioning

Logical Replication

Add a version of jsonb's delete operator that takes an array of keys to delete (Magnus Hagander)

Make json_populate_record() and related functions process JSON arrays and objects recursively (Nikita Glukhov)

Identity Columns [blog](https://blog.2ndquadrant.com/postgresql-10-identity-columns/)

Add view pg_hba_file_rules to display the contents of pg_hba.conf (Haribabu Kommi)

Add XMLTABLE function that converts XML-formatted data into a row set (Pavel Stehule, Álvaro Herrera)


Security
===================

LDAP & Active Directory [blog](https://www.openscg.com/2017/07/setting-up-ldap-with-active-directory-in-postgresql/)

Add SCRAM-SHA-256 support for password negotiation and storage (Michael Paquier, Heikki Linnakangas)
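The stored SCRAM verifier starts from SaltedPassword = PBKDF2-HMAC-SHA-256(password, salt, iterations), per RFC 5802/7677. A minimal sketch (the salt and iteration count here are placeholders; the server generates its own random salt and defaults to 4096 iterations):

```python
import hashlib

def scram_salted_password(password, salt, iterations):
    """SaltedPassword from RFC 5802: PBKDF2 with HMAC-SHA-256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

sp = scram_salted_password("api", b"0123456789abcdef", 4096)
print(sp.hex())
print(len(sp))  # 32 bytes, the SHA-256 digest size
```

Unlike the md5 scheme, the random salt and the iteration count make precomputed-hash attacks far more expensive.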

Monitoring
====================

file system info - [pg_stat_kcache](https://rjuju.github.io/postgresql/2018/07/17/pg_stat_kcache-2-1-is-out.html)

postgresql.conf
===============================================================
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The "=" is optional.)  Whitespace may be used.  Comments are introduced with
# "#" anywhere on a line.  The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()".  Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on".  Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
#                MB = megabytes                     s   = seconds
#                GB = gigabytes                     min = minutes
#                TB = terabytes                     h   = hours
#                                                   d   = days


#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------

# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'           # use data in another directory
                                        # (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf'     # host-based authentication file
                                        # (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
                                        # (change requires restart)

# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = ''                 # write an extra PID file
                                        # (change requires restart)

#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------

# - Connection Settings -

listen_addresses = '*'                  # what IP address(es) to listen on;
                                        # comma-separated list of addresses;
                                        # defaults to 'localhost'; use '*' for all
                                        # (change requires restart)
port = 5432                             # (change requires restart)
max_connections = 100                   # (change requires restart)
#superuser_reserved_connections = 3     # (change requires restart)
#unix_socket_directories = ''           # comma-separated list of directories
                                        # (change requires restart)
#unix_socket_group = ''                 # (change requires restart)
#unix_socket_permissions = 0777         # begin with 0 to use octal notation
                                        # (change requires restart)
#bonjour = off                          # advertise server via Bonjour
                                        # (change requires restart)
#bonjour_name = ''                      # defaults to the computer name
                                        # (change requires restart)

# - TCP Keepalives -
# see "man 7 tcp" for details

#tcp_keepalives_idle = 0                # TCP_KEEPIDLE, in seconds;
                                        # 0 selects the system default
#tcp_keepalives_interval = 0            # TCP_KEEPINTVL, in seconds;
                                        # 0 selects the system default
#tcp_keepalives_count = 0               # TCP_KEEPCNT;
                                        # 0 selects the system default

# - Authentication -

#authentication_timeout = 1min          # 1s-600s
password_encryption = scram-sha-256     # md5 or scram-sha-256
#db_user_namespace = off

# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off

# - SSL -

#ssl = off
#ssl_ca_file = ''
#ssl_cert_file = 'server.crt'
#ssl_crl_file = ''
#ssl_key_file = 'server.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off

#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------

# - Memory -

shared_buffers = 128MB                  # min 128kB
                                        # (change requires restart)
#huge_pages = try                       # on, off, or try
                                        # (change requires restart)
#temp_buffers = 8MB                     # min 800kB
#max_prepared_transactions = 0          # zero disables the feature
                                        # (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
work_mem = 250MB                        # min 64kB
maintenance_work_mem = 64MB
#autovacuum_work_mem = -1               # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB                  # min 100kB
dynamic_shared_memory_type = windows    # the default is the first option
                                        # supported by the operating system:
                                        #   posix
                                        #   sysv
                                        #   windows
                                        #   mmap
                                        # use none to disable dynamic shared memory
                                        # (change requires restart)

# - Disk -

#temp_file_limit = -1                   # limits per-process temp file space
                                        # in kB, or -1 for no limit

# - Kernel Resources -

#max_files_per_process = 1000           # min 25
                                        # (change requires restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0                  # 0-100 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
#vacuum_cost_limit = 200                # 1-10000 credits

# - Background Writer -

#bgwriter_delay = 200ms                 # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100            # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0          # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 0               # measured in pages, 0 disables

# - Asynchronous Behavior -

#effective_io_concurrency = 0           # 1-1000; 0 disables prefetching
#max_worker_processes = 8               # (change requires restart)
#max_parallel_maintenance_workers = 2   # taken from max_parallel_workers
#max_parallel_workers_per_gather = 2    # taken from max_parallel_workers
#parallel_leader_participation = on
#max_parallel_workers = 8               # maximum number of max_worker_processes that
                                        # can be used in parallel operations
#old_snapshot_threshold = -1            # 1min-60d; -1 disables; 0 is immediate
                                        # (change requires restart)
#backend_flush_after = 0                # measured in pages, 0 disables

#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------

# - Settings -

wal_level = hot_standby
                                        # (change requires restart)
#fsync = on                             # flush data to disk for crash safety
                                        # (turning this off can cause
                                        # unrecoverable data corruption)
#synchronous_commit = on                # synchronization level;
                                        # off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync                # the default is the first option
                                        # supported by the operating system:
                                        #   open_datasync
                                        #   fdatasync (default on Linux)
                                        #   fsync
                                        #   fsync_writethrough
                                        #   open_sync
#full_page_writes = on                  # recover from partial page writes
#wal_compression = off                  # enable compression of full-page writes
#wal_log_hints = off                    # also do full page writes of non-critical updates
                                        # (change requires restart)
#wal_buffers = -1                       # min 32kB, -1 sets based on shared_buffers
                                        # (change requires restart)
#wal_writer_delay = 200ms               # 1-10000 milliseconds
#wal_writer_flush_after = 1MB           # measured in pages, 0 disables

#commit_delay = 0                       # range 0-100000, in microseconds
#commit_siblings = 5                    # range 1-1000

# - Checkpoints -

#checkpoint_timeout = 5min              # range 30s-1d
max_wal_size = 1GB
min_wal_size = 80MB
#checkpoint_completion_target = 0.5     # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 0             # measured in pages, 0 disables
#checkpoint_warning = 30s               # 0 disables

# - Archiving -

#archive_mode = off             # enables archiving; off, on, or always
                                # (change requires restart)
#archive_command = ''           # command to use to archive a logfile segment
                                # placeholders: %p = path of file to archive
                                #               %f = file name only
                                # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0            # force a logfile segment switch after this
                                # number of seconds; 0 disables

#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------

# - Sending Servers -

# Set these on the master and on any standby that will send replication data.

max_wal_senders = 5
                                # (change requires restart)
wal_keep_segments = 32
#wal_sender_timeout = 60s       # in milliseconds; 0 disables

max_replication_slots = 5
                                # (change requires restart)
#track_commit_timestamp = off   # collect timestamp of transaction commit
                                # (change requires restart)

# - Master Server -

# These settings are ignored on a standby server.

#synchronous_standby_names = '' # standby servers that provide sync rep
                                # method to choose sync standbys, number of sync standbys,
                                # and comma-separated list of application_name
                                # from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed

# - Standby Servers -

# These settings are ignored on a master server.

#hot_standby = on                       # "off" disallows queries during recovery
                                        # (change requires restart)
#max_standby_archive_delay = 30s        # max delay before canceling queries
                                        # when reading WAL from archive;
                                        # -1 allows indefinite delay
#max_standby_streaming_delay = 30s      # max delay before canceling queries
                                        # when reading streaming WAL;
                                        # -1 allows indefinite delay
#wal_receiver_status_interval = 10s     # send replies at least this often
                                        # 0 disables
#hot_standby_feedback = off             # send info from standby to prevent
                                        # query conflicts
#wal_receiver_timeout = 60s             # time that receiver waits for
                                        # communication from master
                                        # in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s       # time to wait before retrying to
                                        # retrieve WAL after a failed attempt

# - Subscribers -

# These settings are ignored on a publisher.

#max_logical_replication_workers = 4    # taken from max_worker_processes
                                        # (change requires restart)
#max_sync_workers_per_subscription = 2  # taken from max_logical_replication_workers

#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------

# - Planner Method Configuration -

#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_parallel_hash = on
#enable_partition_pruning = on

# - Planner Cost Constants -

#seq_page_cost = 1.0                    # measured on an arbitrary scale
#random_page_cost = 4.0                 # same scale as above
#cpu_tuple_cost = 0.01                  # same scale as above
#cpu_index_tuple_cost = 0.005           # same scale as above
#cpu_operator_cost = 0.0025             # same scale as above
#parallel_tuple_cost = 0.1              # same scale as above
#parallel_setup_cost = 1000.0           # same scale as above

#jit_above_cost = 100000                # perform JIT compilation if available
                                        # and query more expensive than this;
                                        # -1 disables
#jit_inline_above_cost = 500000         # inline small functions if query is
                                        # more expensive than this; -1 disables
#jit_optimize_above_cost = 500000       # use expensive JIT optimizations if
                                        # query is more expensive than this;
                                        # -1 disables

#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB

# - Genetic Query Optimizer -

#geqo = on
#geqo_threshold = 12
#geqo_effort = 5                        # range 1-10
#geqo_pool_size = 0                     # selects default based on effort
#geqo_generations = 0                   # selects default based on effort
#geqo_selection_bias = 2.0              # range 1.5-2.0
#geqo_seed = 0.0                        # range 0.0-1.0

# - Other Planner Options -

#default_statistics_target = 100        # range 1-10000
#constraint_exclusion = partition       # on, off, or partition
#cursor_tuple_fraction = 0.1            # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8                # 1 disables collapsing of explicit
                                        # JOIN clauses
#force_parallel_mode = off
#jit = off                              # allow JIT compilation

#------------------------------------------------------------------------------
|
||||
# REPORTING AND LOGGING
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Where to Log -
|
||||
|
||||
#log_destination = 'stderr' # Valid values are combinations of
|
||||
# stderr, csvlog, syslog, and eventlog,
|
||||
# depending on platform. csvlog
|
||||
# requires logging_collector to be on.
|
||||
|
||||
# This is used when logging to stderr:
|
||||
logging_collector = on
|
||||
# into log files. Required to be on for
|
||||
# csvlogs.
|
||||
# (change requires restart)
|
||||
|
||||
# These are only used if logging_collector is on:
log_directory = 'C:/POSTGR~1/data/logs/pg11' # directory where log files are written,
        # can be absolute or relative to PGDATA
log_filename = 'pg11_%Y-%m-%d.log' # log file name pattern,
        # can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
        # begin with 0 to use octal notation
log_truncate_on_rotation = on # If on, an existing log file with the
        # same name as the new log file will be
        # truncated rather than appended to.
        # But such truncation only occurs on
        # time-driven rotation, not on restarts
        # or size-driven rotation. Default is
        # off, meaning append to existing files
        # in all cases.
#log_rotation_age = 1d # Automatic rotation of logfiles will
        # happen after that time. 0 disables.
log_rotation_size = 100MB # Automatic rotation of logfiles will
        # happen after that much log output.
        # 0 disables.
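The strftime() escapes in the log_filename pattern above can be previewed outside the server; a minimal Python sketch (the date is illustrative):

```python
from datetime import datetime

def expand_log_filename(pattern: str, when: datetime) -> str:
    """Expand the strftime() escapes in a log_filename pattern."""
    return when.strftime(pattern)

# The pattern above produces one file per day:
print(expand_log_filename('pg11_%Y-%m-%d.log', datetime(2019, 3, 7)))  # pg11_2019-03-07.log
```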

# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on

# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'

# - When to Log -

#client_min_messages = notice # values in order of decreasing detail:
        # debug5
        # debug4
        # debug3
        # debug2
        # debug1
        # log
        # notice
        # warning
        # error

#log_min_messages = warning # values in order of decreasing detail:
        # debug5
        # debug4
        # debug3
        # debug2
        # debug1
        # info
        # notice
        # warning
        # error
        # log
        # fatal
        # panic

#log_min_error_statement = error # values in order of decreasing detail:
        # debug5
        # debug4
        # debug3
        # debug2
        # debug1
        # info
        # notice
        # warning
        # error
        # log
        # fatal
        # panic (effectively off)

log_min_duration_statement = 0 # -1 is disabled, 0 logs all statements
        # and their durations, > 0 logs only
        # statements running at least this number
        # of milliseconds


# - What to Log -

#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
log_checkpoints = on
log_connections = on
log_disconnections = on
log_duration = on
#log_error_verbosity = default # terse, default, or verbose messages
log_hostname = on
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h,remote=%r ' # special values:
        # %a = application name
        # %u = user name
        # %d = database name
        # %r = remote host and port
        # %h = remote host
        # %p = process ID
        # %t = timestamp without milliseconds
        # %m = timestamp with milliseconds
        # %n = timestamp with milliseconds (as a Unix epoch)
        # %i = command tag
        # %e = SQL state
        # %c = session ID
        # %l = session line number
        # %s = session start timestamp
        # %v = virtual transaction ID
        # %x = transaction ID (0 if none)
        # %q = stop here in non-session
        #      processes
        # %% = '%'
        # e.g. '<%u%%%d> '
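As an illustration of the escape table above, a small Python sketch that expands a log_line_prefix the way the server would for one session; only the escapes used by this prefix are handled, and the sample session values are invented:

```python
def expand_prefix(prefix: str, values: dict) -> str:
    """Expand a subset of log_line_prefix escapes; %% yields a literal %."""
    out, i = [], 0
    while i < len(prefix):
        if prefix[i] == '%' and i + 1 < len(prefix):
            esc = prefix[i + 1]
            out.append('%' if esc == '%' else values.get(esc, '%' + esc))
            i += 2
        else:
            out.append(prefix[i])
            i += 1
    return ''.join(out)

# Hypothetical session values for each escape:
sample = {'t': '2019-03-07 10:15:00 EST', 'p': '4711', 'l': '1',
          'u': 'postgres', 'd': 'mydb', 'a': 'psql', 'h': '10.0.0.5',
          'r': '10.0.0.5(50123)'}
line = expand_prefix('%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h,remote=%r ', sample)
print(line)
```

Every log line then starts with this expanded prefix, which is what log-analysis tools such as pgBadger key on.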
log_lock_waits = on # log lock waits >= deadlock_timeout
log_statement = 'all' # none, ddl, mod, all
log_replication_commands = on
log_temp_files = 0 # log temporary files equal or larger
        # than the specified size in kilobytes;
        # -1 disables, 0 logs all temp files
log_timezone = 'US/Eastern'

#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------

#cluster_name = '' # added to process titles if nonempty
        # (change requires restart)
update_process_title = off


#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------

# - Query and Index Statistics Collector -

#track_activities = on
#track_counts = on
track_io_timing = on
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
#stats_temp_directory = 'pg_stat_tmp'


# - Monitoring -

#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off


#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------

#autovacuum = on # Enable autovacuum subprocess? 'on'
        # requires track_counts to also be on.
log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and
        # their durations, > 0 logs only
        # actions running at least this number
        # of milliseconds.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
        # (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
        # vacuum
#autovacuum_analyze_threshold = 50 # min number of row updates before
        # analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
        # (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
        # before forced vacuum
        # (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
        # autovacuum, in milliseconds;
        # -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
        # autovacuum, -1 means use
        # vacuum_cost_limit
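The threshold and scale-factor settings above combine as documented in the PostgreSQL manual: a table is autovacuumed once its dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples. A small sketch using the defaults shown:

```python
def vacuum_trigger_point(reltuples: int,
                         threshold: int = 50,
                         scale_factor: float = 0.2) -> float:
    """Dead-tuple count at which autovacuum kicks in for one table."""
    return threshold + scale_factor * reltuples

# With the defaults above, a 1,000,000-row table is not vacuumed
# until roughly 200,050 tuples are dead.
print(vacuum_trigger_point(1_000_000))  # 200050.0
```

This is why large tables often need a per-table scale-factor override: the default 20% threshold grows with the table.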


#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------

# - Statement Behavior -

#search_path = '"$user", public' # schema names
#row_security = on
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
        # only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_cleanup_index_scale_factor = 0.1 # fraction of total number of tuples
        # before index cleanup, 0 always performs
        # index cleanup
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_fuzzy_search_limit = 0
#gin_pending_list_limit = 4MB

# - Locale and Formatting -

datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'US/Eastern'
#timezone_abbreviations = 'Default' # Select the set of available time zone
        # abbreviations. Currently, there are
        #   Default
        #   Australia (historical usage)
        #   India
        # You can create your own file in
        # share/timezonesets/.
#extra_float_digits = 0 # min -15, max 3
#client_encoding = sql_ascii # actually, defaults to database
        # encoding

# These settings are initialized by initdb, but they can be changed.
lc_messages = 'C' # locale for system error message
        # strings
lc_monetary = 'C' # locale for monetary formatting
lc_numeric = 'C' # locale for number formatting
lc_time = 'C' # locale for time formatting

# default configuration for text search
default_text_search_config = 'pg_catalog.english'

# - Shared Library Preloading -

shared_preload_libraries = 'auto_explain' # (change requires restart)
#local_preload_libraries = ''
#session_preload_libraries = ''
#jit_provider = 'llvmjit' # JIT library to use

# - Other Defaults -

#dynamic_library_path = '$libdir'


#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------

#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
        # (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
        # (change requires restart)
#max_pred_locks_per_relation = -2 # negative values mean
        # (max_pred_locks_per_transaction
        # / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2 # min 0
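The negative-value rule for max_pred_locks_per_relation above works out as follows; with the defaults shown, -2 and max_pred_locks_per_transaction = 64 allow 31 page-level predicate locks on a relation before they are promoted to a single relation lock. A sketch of the arithmetic:

```python
def pred_locks_per_relation(max_per_xact: int = 64, setting: int = -2) -> int:
    """Effective per-relation predicate-lock threshold.

    Negative settings mean (max_pred_locks_per_transaction / -setting) - 1;
    positive settings are used as-is.
    """
    if setting < 0:
        return max_per_xact // -setting - 1
    return setting

print(pred_locks_per_relation())  # 31
```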


#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------

# - Previous PostgreSQL Versions -

#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#operator_precedence_warning = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on

# - Other Platforms and Clients -

#transform_null_equals = off


#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------

#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?


#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------

# These options allow settings to be loaded from files other than the
# default postgresql.conf.

#include_dir = 'conf.d' # include files ending in '.conf' from
        # directory 'conf.d'
#include_if_exists = 'exists.conf' # include file only if it exists
#include = 'special.conf' # include file


#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------

# Add settings for extensions here

#------------------------------------------------------------------------------
# AUTO EXPLAIN
#------------------------------------------------------------------------------

auto_explain.log_min_duration = '250ms'
auto_explain.log_analyze = on
auto_explain.log_buffers = on
auto_explain.log_nested_statements = on
@ -1 +0,0 @@
use -E to show definitions of SQL used for \d commands

@ -1,691 +0,0 @@
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: kB  = kilobytes     Time units: ms  = milliseconds
#               MB  = megabytes                 s   = seconds
#               GB  = gigabytes                 min = minutes
#               TB  = terabytes                 h   = hours
#                                               d   = days
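The name = value grammar and the memory-unit suffixes described above can be sketched in a few lines of Python (a simplified reader for illustration, not the server's actual parser; quoting rules and include directives are ignored):

```python
import re

UNITS = {'kB': 1024, 'MB': 1024**2, 'GB': 1024**3, 'TB': 1024**4}

def parse_conf(text: str) -> dict:
    """Read 'name = value' lines, skipping blanks and # comments."""
    settings = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # comments start anywhere
        if not line:
            continue
        m = re.match(r"(\w+)\s*=?\s*(.+)", line)  # the '=' is optional
        if m:
            settings[m.group(1)] = m.group(2).strip().strip("'")
    return settings

def size_to_bytes(value: str) -> int:
    """Convert a memory setting such as '100MB' to bytes."""
    m = re.match(r"(\d+)\s*(kB|MB|GB|TB)?$", value)
    return int(m.group(1)) * UNITS.get(m.group(2), 1)

conf = parse_conf("shared_buffers = 1000MB  # min 128kB\nwork_mem = 500MB\n")
print(size_to_bytes(conf['shared_buffers']))  # 1048576000
```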


#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------

# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.

data_directory = '/var/lib/postgresql/11/main' # use data in another directory
        # (change requires restart)
hba_file = '/etc/postgresql/11/main/pg_hba.conf' # host-based authentication file
        # (change requires restart)
ident_file = '/etc/postgresql/11/main/pg_ident.conf' # ident configuration file
        # (change requires restart)

# If external_pid_file is not explicitly set, no extra PID file is written.
external_pid_file = '/var/run/postgresql/11-main.pid' # write an extra PID file
        # (change requires restart)


#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------

# - Connection Settings -

listen_addresses = '*' # what IP address(es) to listen on;
        # comma-separated list of addresses;
        # defaults to 'localhost'; use '*' for all
        # (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
        # (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
        # (change requires restart)
#bonjour = off # advertise server via Bonjour
        # (change requires restart)
#bonjour_name = '' # defaults to the computer name
        # (change requires restart)

# - TCP Keepalives -
# see "man 7 tcp" for details

#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
        # 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
        # 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
        # 0 selects the system default

# - Authentication -

#authentication_timeout = 1min # 1s-600s
password_encryption = scram-sha-256 # md5 or scram-sha-256
#db_user_namespace = off

# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off

# - SSL -

ssl = off
#ssl_ca_file = ''
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
#ssl_crl_file = ''
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off


#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------

# - Memory -

shared_buffers = 1000MB # min 128kB
        # (change requires restart)
#huge_pages = try # on, off, or try
        # (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
        # (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
work_mem = 500MB # min 64kB
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB # min 100kB
dynamic_shared_memory_type = posix # the default is the first option
        # supported by the operating system:
        #   posix
        #   sysv
        #   windows
        #   mmap
        # use none to disable dynamic shared memory
        # (change requires restart)

# - Disk -

#temp_file_limit = -1 # limits per-process temp file space
        # in kB, or -1 for no limit

# - Kernel Resources -

#max_files_per_process = 1000 # min 25
        # (change requires restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0 # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
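The credit scheme above works as documented in the PostgreSQL manual: each page touched adds its cost, and once the running total reaches vacuum_cost_limit the process naps for vacuum_cost_delay. A sketch of the bookkeeping with the defaults shown (hit/miss/dirty counts are invented):

```python
def naps_during_vacuum(pages_hit: int, pages_missed: int, pages_dirtied: int,
                       cost_limit: int = 200, page_hit: int = 1,
                       page_miss: int = 10, page_dirty: int = 20) -> int:
    """How many cost-limit naps a vacuum of this footprint would take."""
    total = (pages_hit * page_hit
             + pages_missed * page_miss
             + pages_dirtied * page_dirty)
    return total // cost_limit

# 10,000 buffer hits + 1,000 misses + 500 dirtied = 30,000 credits,
# so the vacuum naps 150 times (each for vacuum_cost_delay ms).
print(naps_during_vacuum(10_000, 1_000, 500))  # 150
```

With vacuum_cost_delay = 0 as above, manual VACUUM never naps; the delays apply mainly to autovacuum via autovacuum_vacuum_cost_delay.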

# - Background Writer -

#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 512kB # measured in pages, 0 disables

# - Asynchronous Behavior -

#effective_io_concurrency = 1 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#parallel_leader_participation = on
#max_parallel_workers = 8 # maximum number of max_worker_processes that
        # can be used in parallel operations
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
        # (change requires restart)
#backend_flush_after = 0 # measured in pages, 0 disables


#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------

# - Settings -

#wal_level = replica # minimal, replica, or logical
        # (change requires restart)
#fsync = on # flush data to disk for crash safety
        # (turning this off can cause
        # unrecoverable data corruption)
#synchronous_commit = on # synchronization level;
        # off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
        # supported by the operating system:
        #   open_datasync
        #   fdatasync (default on Linux)
        #   fsync
        #   fsync_writethrough
        #   open_sync
#full_page_writes = on # recover from partial page writes
#wal_compression = off # enable compression of full-page writes
#wal_log_hints = off # also do full page writes of non-critical updates
        # (change requires restart)
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
        # (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables

#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000

# - Checkpoints -

#checkpoint_timeout = 5min # range 30s-1d
max_wal_size = 1GB
min_wal_size = 80MB
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 256kB # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables

# - Archiving -

#archive_mode = off # enables archiving; off, on, or always
        # (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
        # placeholders: %p = path of file to archive
        #               %f = file name only
        # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
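The %p/%f placeholders in archive_command above are substituted by the server before the shell runs the command; a Python sketch of that substitution, using the example command and an invented WAL segment name:

```python
def expand_archive_command(cmd: str, path: str, fname: str) -> str:
    """Substitute %p (full path), %f (file name), and %% (literal %)."""
    out, i = [], 0
    while i < len(cmd):
        if cmd[i] == '%' and i + 1 < len(cmd):
            out.append({'p': path, 'f': fname, '%': '%'}.get(cmd[i + 1], cmd[i:i + 2]))
            i += 2
        else:
            out.append(cmd[i])
            i += 1
    return ''.join(out)

cmd = expand_archive_command(
    'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f',
    'pg_wal/00000001000000A900000065',   # %p: path relative to the data directory
    '00000001000000A900000065')          # %f: segment file name only
print(cmd)
```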
#archive_timeout = 0 # force a logfile segment switch after this
        # number of seconds; 0 disables


#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------

# - Sending Servers -

# Set these on the master and on any standby that will send replication data.

#max_wal_senders = 10 # max number of walsender processes
        # (change requires restart)
#wal_keep_segments = 0 # in logfile segments; 0 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables

#max_replication_slots = 10 # max number of replication slots
        # (change requires restart)
#track_commit_timestamp = off # collect timestamp of transaction commit
        # (change requires restart)

# - Master Server -

# These settings are ignored on a standby server.

#synchronous_standby_names = '' # standby servers that provide sync rep
        # method to choose sync standbys, number of sync standbys,
        # and comma-separated list of application_name
        # from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed

# - Standby Servers -

# These settings are ignored on a master server.

#hot_standby = on # "off" disallows queries during recovery
        # (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
        # when reading WAL from archive;
        # -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
        # when reading streaming WAL;
        # -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
        # 0 disables
#hot_standby_feedback = off # send info from standby to prevent
        # query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
        # communication from master
        # in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
        # retrieve WAL after a failed attempt

# - Subscribers -

# These settings are ignored on a publisher.

#max_logical_replication_workers = 4 # taken from max_worker_processes
        # (change requires restart)
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers


#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------

# - Planner Method Configuration -

#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_parallel_hash = on
#enable_partition_pruning = on

# - Planner Cost Constants -

#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above

#jit_above_cost = 100000 # perform JIT compilation if available
        # and query more expensive than this;
        # -1 disables
#jit_inline_above_cost = 500000 # inline small functions if query is
        # more expensive than this; -1 disables
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
        # query is more expensive than this;
        # -1 disables

#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB

# - Genetic Query Optimizer -

#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0

# - Other Planner Options -

#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
        # JOIN clauses
#force_parallel_mode = off
#jit = off # allow JIT compilation


#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------

# - Where to Log -

#log_destination = 'stderr' # Valid values are combinations of
        # stderr, csvlog, syslog, and eventlog,
        # depending on platform. csvlog
        # requires logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
        # into log files. Required to be on for
        # csvlogs.
        # (change requires restart)

# These are only used if logging_collector is on:
#log_directory = 'log' # directory where log files are written,
        # can be absolute or relative to PGDATA
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
        # can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
        # begin with 0 to use octal notation
#log_truncate_on_rotation = off # If on, an existing log file with the
        # same name as the new log file will be
        # truncated rather than appended to.
        # But such truncation only occurs on
        # time-driven rotation, not on restarts
        # or size-driven rotation. Default is
        # off, meaning append to existing files
        # in all cases.
log_rotation_age = 1d # Automatic rotation of logfiles will
        # happen after that time. 0 disables.
log_rotation_size = 1000MB # Automatic rotation of logfiles will
        # happen after that much log output.
        # 0 disables.

# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on

# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'

# - When to Log -

#log_min_messages = warning # values in order of decreasing detail:
        # debug5
        # debug4
        # debug3
        # debug2
        # debug1
        # info
        # notice
        # warning
        # error
        # log
        # fatal
        # panic

#log_min_error_statement = error # values in order of decreasing detail:
        # debug5
        # debug4
        # debug3
        # debug2
        # debug1
        # info
        # notice
        # warning
        # error
        # log
        # fatal
        # panic (effectively off)

log_min_duration_statement = 0 # -1 is disabled, 0 logs all statements
        # and their durations, > 0 logs only
        # statements running at least this number
        # of milliseconds


# - What to Log -

#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
log_checkpoints = on
log_connections = on
log_disconnections = on
log_duration = on
#log_error_verbosity = default # terse, default, or verbose messages
log_hostname = on
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h ' # special values:
        # %a = application name
        # %u = user name
        # %d = database name
        # %r = remote host and port
        # %h = remote host
        # %p = process ID
        # %t = timestamp without milliseconds
        # %m = timestamp with milliseconds
        # %n = timestamp with milliseconds (as a Unix epoch)
        # %i = command tag
        # %e = SQL state
        # %c = session ID
        # %l = session line number
        # %s = session start timestamp
        # %v = virtual transaction ID
        # %x = transaction ID (0 if none)
        # %q = stop here in non-session
        #      processes
        # %% = '%'
        # e.g. '<%u%%%d> '
log_lock_waits = on # log lock waits >= deadlock_timeout
log_statement = 'all' # none, ddl, mod, all
log_replication_commands = on
log_temp_files = 0 # log temporary files equal or larger
        # than the specified size in kilobytes;
        # -1 disables, 0 logs all temp files
log_timezone = 'US/Eastern'

#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------

cluster_name = '11/main' # added to process titles if nonempty
        # (change requires restart)
#update_process_title = on


#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------

# - Query and Index Statistics Collector -

#track_activities = on
#track_counts = on
track_io_timing = on
track_functions = all # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
stats_temp_directory = '/var/run/postgresql/11-main.pg_stat_tmp'


# - Monitoring -

#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off


#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------

#autovacuum = on # Enable autovacuum subprocess? 'on'
        # requires track_counts to also be on.
log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and
        # their durations, > 0 logs only
        # actions running at least this number
        # of milliseconds.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
        # (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
        # vacuum
#autovacuum_analyze_threshold = 50 # min number of row updates before
        # analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
        # (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
        # before forced vacuum
        # (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
        # autovacuum, in milliseconds;
        # -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
        # autovacuum, -1 means use
        # vacuum_cost_limit


#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------

# - Statement Behavior -

#client_min_messages = notice # values in order of decreasing detail:
        # debug5
        # debug4
        # debug3
        # debug2
        # debug1
        # log
        # notice
        # warning
        # error
#search_path = '"$user", public' # schema names
#row_security = on
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
        # only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
|
||||
#lock_timeout = 0 # in milliseconds, 0 is disabled
|
||||
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
|
||||
#vacuum_freeze_min_age = 50000000
|
||||
#vacuum_freeze_table_age = 150000000
|
||||
#vacuum_multixact_freeze_min_age = 5000000
|
||||
#vacuum_multixact_freeze_table_age = 150000000
|
||||
#vacuum_cleanup_index_scale_factor = 0.1 # fraction of total number of tuples
|
||||
# before index cleanup, 0 always performs
|
||||
# index cleanup
|
||||
#bytea_output = 'hex' # hex, escape
|
||||
#xmlbinary = 'base64'
|
||||
#xmloption = 'content'
|
||||
#gin_fuzzy_search_limit = 0
|
||||
#gin_pending_list_limit = 4MB
|
||||
|
||||
# - Locale and Formatting -
|
||||
|
||||
datestyle = 'iso, mdy'
|
||||
#intervalstyle = 'postgres'
|
||||
timezone = 'US/Eastern'
|
||||
#timezone_abbreviations = 'Default' # Select the set of available time zone
|
||||
# abbreviations. Currently, there are
|
||||
# Default
|
||||
# Australia (historical usage)
|
||||
# India
|
||||
# You can create your own file in
|
||||
# share/timezonesets/.
|
||||
#extra_float_digits = 0 # min -15, max 3
|
||||
#client_encoding = sql_ascii # actually, defaults to database
|
||||
# encoding
|
||||
|
||||
# These settings are initialized by initdb, but they can be changed.
|
||||
lc_messages = 'en_US.UTF-8' # locale for system error message
|
||||
# strings
|
||||
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
|
||||
lc_numeric = 'en_US.UTF-8' # locale for number formatting
|
||||
lc_time = 'en_US.UTF-8' # locale for time formatting
|
||||
|
||||
# default configuration for text search
|
||||
default_text_search_config = 'pg_catalog.english'
|
||||
|
||||
# - Shared Library Preloading -
|
||||
|
||||
shared_preload_libraries = 'auto_explain' # (change requires restart)
|
||||
#local_preload_libraries = ''
|
||||
#session_preload_libraries = ''
|
||||
#jit_provider = 'llvmjit' # JIT library to use
|
||||
|
||||
# - Other Defaults -
|
||||
|
||||
#dynamic_library_path = '$libdir'
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# LOCK MANAGEMENT
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
#deadlock_timeout = 1s
|
||||
#max_locks_per_transaction = 64 # min 10
|
||||
# (change requires restart)
|
||||
#max_pred_locks_per_transaction = 64 # min 10
|
||||
# (change requires restart)
|
||||
#max_pred_locks_per_relation = -2 # negative values mean
|
||||
# (max_pred_locks_per_transaction
|
||||
# / -max_pred_locks_per_relation) - 1
|
||||
#max_pred_locks_per_page = 2 # min 0
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# VERSION AND PLATFORM COMPATIBILITY
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Previous PostgreSQL Versions -
|
||||
|
||||
#array_nulls = on
|
||||
#backslash_quote = safe_encoding # on, off, or safe_encoding
|
||||
#default_with_oids = off
|
||||
#escape_string_warning = on
|
||||
#lo_compat_privileges = off
|
||||
#operator_precedence_warning = off
|
||||
#quote_all_identifiers = off
|
||||
#standard_conforming_strings = on
|
||||
#synchronize_seqscans = on
|
||||
|
||||
# - Other Platforms and Clients -
|
||||
|
||||
#transform_null_equals = off
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# ERROR HANDLING
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
#exit_on_error = off # terminate session on any error?
|
||||
#restart_after_crash = on # reinitialize after backend crash?
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# CONFIG FILE INCLUDES
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# These options allow settings to be loaded from files other than the
|
||||
# default postgresql.conf.
|
||||
|
||||
include_dir = 'conf.d' # include files ending in '.conf' from
|
||||
# directory 'conf.d'
|
||||
#include_if_exists = 'exists.conf' # include file only if it exists
|
||||
#include = 'special.conf' # include file
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# CUSTOMIZED OPTIONS
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# Add settings for extensions here
|
||||
auto_explain.log_min_duration = 1000ms
|
||||
auto_explain.log_analyze = on
|
||||
auto_explain.log_buffers = on
|
||||
auto_explain.log_nested_statements = on
|
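The fragment above is plain `name = value` lines where commented-out lines are inactive defaults. As a quick way to see which settings a fragment actually activates, here is a small Python sketch (a hypothetical helper, not part of the original notes; it is naive about `#` inside quoted values):

```python
def parse_conf(text: str) -> dict:
    """Parse active `name = value` lines from a postgresql.conf fragment.

    Blank lines and commented-out settings (leading '#') are skipped,
    trailing comments are dropped, and single quotes are stripped.
    """
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank line or commented-out default
        name, _, value = line.partition("=")
        value = value.split("#", 1)[0].strip().strip("'")  # drop trailing comment
        settings[name.strip()] = value
    return settings

fragment = """
log_statement = 'all'                   # none, ddl, mod, all
#track_counts = on
auto_explain.log_min_duration = '1000ms'
"""
conf = parse_conf(fragment)
assert conf == {"log_statement": "all",
                "auto_explain.log_min_duration": "1000ms"}
```

Handy for diffing which non-default settings two servers actually set.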
@@ -1,39 +0,0 @@
DROP USER IF EXISTS api;

SET password_encryption = 'scram-sha-256';

CREATE ROLE api WITH
    LOGIN
    NOSUPERUSER
    NOCREATEDB
    NOCREATEROLE
    INHERIT
    NOREPLICATION
    CONNECTION LIMIT -1
    PASSWORD 'api';

--------------------grant--------------------------------------------------

GRANT USAGE ON SCHEMA lgdat TO api;

GRANT SELECT /*, UPDATE, INSERT, DELETE*/ ON ALL TABLES IN SCHEMA lgdat TO api;

GRANT USAGE ON ALL SEQUENCES IN SCHEMA lgdat TO api;

ALTER DEFAULT PRIVILEGES IN SCHEMA lgdat GRANT SELECT/*, UPDATE, INSERT, DELETE*/ ON TABLES TO api;

ALTER DEFAULT PRIVILEGES IN SCHEMA lgdat GRANT USAGE ON SEQUENCES TO api;

---------------------------revoke---------------------------------------

REVOKE USAGE ON SCHEMA lgdat FROM api;

REVOKE SELECT, UPDATE, INSERT, DELETE ON ALL TABLES IN SCHEMA lgdat FROM api;

REVOKE USAGE ON ALL SEQUENCES IN SCHEMA lgdat FROM api;

ALTER DEFAULT PRIVILEGES IN SCHEMA lgdat REVOKE SELECT, UPDATE, INSERT, DELETE ON TABLES FROM api;

ALTER DEFAULT PRIVILEGES IN SCHEMA lgdat REVOKE USAGE ON SEQUENCES FROM api;
pscp.md
@@ -1,4 +1,4 @@
pscp.exe is part of PuTTY and can be used to transfer files over ssh

example:
pscp.exe -pw ******** ptrowbridge@usmidlnx01:/home/ptrowbridge/pt_share/*.backup "C:\Users\PTrowbridge\OneDrive - The HC Companies, Inc\Backups"
python_data.md (new file)
@@ -0,0 +1,5 @@
tutorial on python for working with data:
https://www.youtube.com/watch?v=r-uOLxNrNk8&feature=youtu.be

The Littlest JupyterHub:
http://tljh.jupyter.org/en/latest/install/custom-server.html
r.md (new file)
@@ -0,0 +1,49 @@
installation
---------------------------------------
* to install R on ubuntu, go to the [R download page](https://cran.r-project.org/)
* there are instructions on what to add to sources.list.
* After doing apt-get update, you will probably need to add the public key, which is addressed [here](https://askubuntu.com/questions/13065/how-do-i-fix-the-gpg-error-no-pubkey#15272)
* then do `sudo apt-get install r-base`


using grid.arrange
https://cran.r-project.org/web/packages/gridExtra/vignettes/arrangeGrob.html

set and mirror axis limits:
```
scale_y_continuous(
    breaks = seq(glob$PriceMin, glob$PriceMax, round(glob$StdDev * .5, 2)),
    limits = c(glob$PriceMin, glob$PriceMax)
) +
```

how to loop through rows of a column
(note: `for (i in dim1)` iterates over the columns of the data frame; `j` then takes each value in that column)
```
for (i in dim1) {
    for (j in i) {
        print(j);
    }
}
```

build a list of plots and use grid.arrange
```
do.call(grid.arrange, plot_list)
```

re-sort a dataframe and print each row of a column
```
dim1 <- dim1[order(dim1$list),];
for (i in dim1) {
    for (j in i) {
        print(j);
    }
}
```

to run a script from the command line
`R --vanilla < scriptfile.R`

listing of R colors:
http://www.endmemo.com/r/color.php
regex.md (new file)
@@ -0,0 +1,53 @@
[https://cheatography.com/davechild/cheat-sheets/regular-expressions/](https://cheatography.com/davechild/cheat-sheets/regular-expressions/)

```
Anchors
^       Start of string, or start of line in multi-line pattern
\A      Start of string
$       End of string, or end of line in multi-line pattern
\Z      End of string
\b      Word boundary
\B      Not word boundary
\<      Start of word
\>      End of word

Quantifiers
*       0 or more
+       1 or more
?       0 or 1
{3}     Exactly 3
{3,}    3 or more
{3,5}   3, 4 or 5
Add a ? to a quantifier to make it ungreedy.

Groups and Ranges
.       Any character except new line (\n)
(a|b)   a or b
(...)   Group
(?:...) Passive (non-capturing) group
[abc]   Range (a or b or c)
[^abc]  Not (a or b or c)
[a-q]   Lower case letter from a to q
[A-Q]   Upper case letter from A to Q
[0-7]   Digit from 0 to 7
\x      Group/subpattern number "x"
Ranges are inclusive.

Character Classes
\c      Control character
\s      White space
\S      Not white space
\d      Digit
\D      Not digit
\w      Word
\W      Not word
\x      Hexadecimal digit
\O      Octal digit

Escape Sequences
\       Escape following character
\Q      Begin literal sequence
\E      End literal sequence
The escape character is usually \

Pattern Modifiers
g       Global match
i *     Case-insensitive
m *     Multiple lines
s *     Treat string as single line
x *     Allow comments and whitespace in pattern
e *     Evaluate replacement
U *     Ungreedy pattern
* PCRE modifier

Common Metacharacters
^ [ . $ { * ( \ + ) | ? < >

POSIX
[:upper:]   Upper case letters
[:lower:]   Lower case letters
[:alpha:]   All letters
[:alnum:]   Digits and letters
[:digit:]   Digits
[:xdigit:]  Hexadecimal digits
[:punct:]   Punctuation
[:blank:]   Space and tab
[:space:]   Blank characters
[:cntrl:]   Control characters
[:graph:]   Printed characters
[:print:]   Printed characters and spaces
[:word:]    Digits, letters and underscore

Special Characters
\n      New line
\r      Carriage return
\t      Tab
\v      Vertical tab
\f      Form feed
\xxx    Octal character xxx
\xhh    Hex character hh

String Replacement
$n      nth non-passive group
$2      "xyz" in /^(abc(xyz))$/
$1      "xyz" in /^(?:abc)(xyz)$/
$`      Before matched string
$'      After matched string
$+      Last matched string
$&      Entire matched string
Some regex implementations use \ instead of $.

Assertions
?=      Lookahead assertion
?!      Negative lookahead
?<=     Lookbehind assertion
?!= or ?<!  Negative lookbehind
?>      Once-only Subexpression
?()     Condition [if then]
?()|    Condition [if then else]
?#      Comment
```
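A few of the trickier entries on that sheet (assertions, non-capturing groups, ungreedy quantifiers) can be sanity-checked with Python's standard `re` module (a quick sketch, not from the cheat sheet itself):

```python
import re

# Lookahead: match "foo" only when followed by a digit
assert re.search(r"foo(?=\d)", "foo1") is not None
assert re.search(r"foo(?=\d)", "foox") is None

# Negative lookbehind: numbers not preceded by '$'
assert re.findall(r"(?<!\$)\b\d+\b", "$10 20") == ["20"]

# Non-capturing group: (?:...) groups without creating a backreference
m = re.match(r"(?:abc)(xyz)", "abcxyz")
assert m.group(1) == "xyz"

# Ungreedy quantifier: *? stops at the first closing bracket
assert re.findall(r"<.*?>", "<a><b>") == ["<a>", "<b>"]
```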
rsync.md (new file)
@@ -0,0 +1,15 @@
sync everything in //mnt/backup and push to hptrow:
```
rsync -azv -e ssh -r /mnt/backup/ pt@hptrow.me:/mnt/backup/hc
```

use the --delete flag to also remove files on the destination that no longer exist on the source (a true mirror instead of an additive incremental copy)
```
rsync -azv --delete -e ssh -r /mnt/backup/ pt@hptrow.me:/mnt/backup/hc
```

copy remote file to local machine
```
rsync -azv -e ssh ptrowbridge@usmidsap01://home/ptrowbridge/debug.sql //mnt/c/Users/PTrowbridge/Downloads/debug.sql
```
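The difference between a plain sync and `--delete` can be sketched in plain Python (a toy illustration of the semantics, not rsync itself and with none of its delta/permission handling):

```python
import os
import shutil
import tempfile

def copy_additive(src: str, dst: str) -> None:
    # like plain rsync: copy/update files, never delete extras at dst
    shutil.copytree(src, dst, dirs_exist_ok=True)

def copy_mirror(src: str, dst: str) -> None:
    # like rsync --delete: after copying, remove dst entries absent from src
    copy_additive(src, dst)
    for name in os.listdir(dst):
        if not os.path.exists(os.path.join(src, name)):
            path = os.path.join(dst, name)
            shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)

src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
open(os.path.join(src, "keep.txt"), "w").close()
open(os.path.join(dst, "stale.txt"), "w").close()

copy_additive(src, dst)
assert sorted(os.listdir(dst)) == ["keep.txt", "stale.txt"]  # extra file survives

copy_mirror(src, dst)
assert os.listdir(dst) == ["keep.txt"]  # --delete behavior: stale file removed
```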
sc.md (new file)
@@ -0,0 +1,4 @@
to open a csv file in sc
`cat file.csv | psc -k -d, | sc`

sc-im supposedly does a better job with csv, but have not been able to get it to compile
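When `sc`/`psc` aren't installed, a quick way to eyeball a CSV's header and first few rows is Python's stdlib `csv` module (an alternative to the note's method, not a replacement for sc):

```python
import csv
import io

def peek_csv(f, rows: int = 5):
    """Return the header row and up to `rows` data rows of a CSV stream."""
    reader = csv.reader(f)
    header = next(reader)
    sample = [row for _, row in zip(range(rows), reader)]
    return header, sample

data = io.StringIO("name,qty\nwidget,3\ngadget,7\n")
header, sample = peek_csv(data)
assert header == ["name", "qty"]
assert sample == [["widget", "3"], ["gadget", "7"]]
```

For a real file, pass `open("file.csv", newline="")` instead of the `StringIO`.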
@@ -1 +1,6 @@
* https://runyourown.social/

mail options:
iredmail
mailinabox
mailcow
@@ -1,33 +1,39 @@
Error Handling
===============================================================

```
CREATE PROC RLARP.TEST AS

BEGIN
    PRINT 'Hi';                         --non-erroring statement
    create table #temp(x varchar(255)); --create a permanent object to call outside block after error
    insert into #temp select 1/0;
    insert into #temp select 'hi';      --fill it after error
    --select * from #temp;              --select it after error
    PRINT ERROR_MESSAGE();              --error message is gone

END;

begin transaction x
declare @e int;
DECLARE @em varchar(max);
begin try
    EXEC RLARP.TEST;
end TRY
begin CATCH
    select @e = ERROR_NUMBER(), @em = ERROR_MESSAGE();
    if @e <> 0
    BEGIN
        rollback transaction x;
        print @em;
    END
    if @e = 0
    BEGIN
        commit transaction x;
        print 'ok';
    end
end catch

SELECT * FROM #temp
```
sql_server/mssql_csv.ps1 (new file)
@@ -0,0 +1,14 @@
# Check if the SqlServer module is installed
if (-not (Get-Module -Name SqlServer -ListAvailable)) {
    # If not installed, install the SqlServer module
    Install-Module -Name SqlServer -Force
}
# Import the module
Import-Module -Name SqlServer

# Define variables for SQL command and destination file path
$SqlQuery = "SELECT top 1000 * FROM rlarp.osm_stack WHERE version = 'Actual'"
$DestinationFilePath = "C:\Users\ptrowbridge\Downloads\osm_stack.csv"

# Execute the SQL query and export to CSV
Invoke-Sqlcmd -ServerInstance "usmidsql01" -Database "fanalysis" -Query $SqlQuery -TrustServerCertificate | Export-Csv -Path $DestinationFilePath -NoTypeInformation
sr.ht.md
@@ -1,5 +1,5 @@
invite link
https://meta.sr.ht/register/K8XW9Hyl86fdL0f925ertqEv

must have a public key (ssh-keygen) uploaded to your account for git pushing
ssh.md (new file)
@@ -0,0 +1,59 @@
SSH keys are generated by the OpenSSH program and are usually stored in the `.ssh` folder of the user's home directory.
* Windows: `C:\Users\PTrowbridge\.ssh`
* NIX: `//home/pt/.ssh`

If there is nothing there, you can create keys by doing `ssh-keygen` like so:
```
sshdemo@USHCC10107:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/sshdemo/.ssh/id_rsa):
Created directory '/home/sshdemo/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sshdemo/.ssh/id_rsa.
Your public key has been saved in /home/sshdemo/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:DpV1961Dec5/vmASwdu8eYPp1UXi4QOku6LJeVoSz3o sshdemo@USHCC10107
The key's randomart image is:
+---[RSA 2048]----+
| . o . |
| o.+ . o.|
| o .o. = =|
| . .== O |
| o S .o o* +|
| * .. =o+|
| . * .. B ++|
| . BE. + +.o|
| B+ . .o|
+----[SHA256]-----+
sshdemo@USHCC10107:~$
```

if you chose to use a passphrase you will have to enter the passphrase once whenever you (login/boot?)

Here's the folder and files it created:
```
sshdemo@USHCC10107:~$ cd .ssh/
sshdemo@USHCC10107:~/.ssh$ ll
total 4
drwx------ 1 sshdemo sshdemo  512 May 14 15:38 ./
drwxr-xr-x 1 sshdemo sshdemo  512 May 14 15:38 ../
-rw------- 1 sshdemo sshdemo 1766 May 14 15:38 id_rsa
-rw-r--r-- 1 sshdemo sshdemo  400 May 14 15:38 id_rsa.pub
sshdemo@USHCC10107:~/.ssh$
```

the `id_rsa.pub` file is the public key.
if you copy its contents to your profile on a git server, you can use ssh to connect instead of having to use user/pass with http.
usually you log into the website and go to the settings for your profile.

after loading the public key to your profile you can clone the repo using ssh.
to target an alternate port you have to spell out the full ssh URL:

`git clone ssh://git@gitea.hptrow.me:port_num_here/pt/notes`

or if the repo is already set up you can:

`git remote add hptrow ssh://git@gitea.hptrow.me:port_num_here/pt/notes`

now you can `git push` without any password prompt
tds.md (new file)
@@ -0,0 +1,55 @@
install tds on ubuntu to connect to mssql from pgsql

https://github.com/tds-fdw/tds_fdw/blob/master/InstallUbuntu.md

copy and build tds_fdw:

```
export TDS_FDW_VERSION="2.0.3"
sudo apt-get install wget
wget https://github.com/tds-fdw/tds_fdw/archive/v${TDS_FDW_VERSION}.tar.gz
tar -xvzf v${TDS_FDW_VERSION}.tar.gz
sudo chown ptrowbridge:ptrowbridge -R "tds_fdw-${TDS_FDW_VERSION}/"
cd tds_fdw-${TDS_FDW_VERSION}/
make USE_PGXS=1
sudo make USE_PGXS=1 install
```

create extension in postgres:
`CREATE EXTENSION tds_fdw;`

create foreign server:
```
CREATE SERVER usmidsql01 FOREIGN DATA WRAPPER tds_fdw OPTIONS (servername 'usmidsql01', port '1433', database 'fanalysis', tds_version '7.1');
```

create user mapping:
```
CREATE USER MAPPING FOR ptrowbridge SERVER usmidsql01 OPTIONS (username 'Pricing', password '');
```

to extract the schema into a single table that describes the schema do:
```
IMPORT FOREIGN SCHEMA dbo FROM SERVER usmidsql01 INTO pricequote_dbo;
```
and this will create a table called pricequote_dbo."UNCONTRAINED_COLUMNS"

create foreign table:
```
CREATE FOREIGN TABLE pricequote.pl (
    quote integer
    ,billto text
    ,shipto text
    ,cdate timestamp
    ,value numeric(18,9)
    ,title text
    ,descr text
    ,comment text
    ,url text
    ,srce text
)
SERVER usmidsql01 OPTIONS (table_name 'fanalysis.rlarp.pl')
```

to link in fanalysisp:
CREATE SERVER usmidsql01_fanalysisp FOREIGN DATA WRAPPER tds_fdw OPTIONS (servername 'usmidsql01', port '1433', database 'fanalysisp', tds_version '7.1');
telnet.md (new file)
@@ -0,0 +1,4 @@
to connect pass the IP `telnet 10.45.10.20`
or just do `telnet` then do `open 10.45.10.20`
there should be an escape character specified like ctrl+]
then to quit telnet do `quit`
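When telnet isn't installed, the same quick "is this port answering" check can be done from Python's standard library (a sketch added to these notes; the throwaway local listener is only there to demo it):

```python
import socket
import socketserver
import threading

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# demo against a throwaway local listener on an ephemeral port
server = socketserver.TCPServer(("127.0.0.1", 0), socketserver.BaseRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
_, port = server.server_address
assert port_open("127.0.0.1", port)
server.shutdown()
server.server_close()
```

`port_open("10.45.10.20", 23)` would mimic the telnet check above.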
tmux.md
@@ -1,25 +1,45 @@
`Ctrl+B` activates command entry (called the prefix)

panes
----------------------------------
prefix + % = split pane right
prefix + " = split pane below
prefix + <Up>/<Left> = switch panes
prefix + z = maximize/minimize pane
prefix + x = kill pane
prefix + <Arrow> = resize

windows
----------------------------------
prefix + c = create new window
prefix + w = create window selection prompt
prefix + , = rename window

sessions
----------------------------------
prefix + d = detach session
tmux ls = list sessions
tmux attach -t 0 = attach to session 0

colors
----------------------------------
setup a `.tmux.conf` file with this line `set -g default-terminal 'screen-256color'`
point tmux to it with `tmux source-file ~/.tmux.conf`

fonts
----------------------------------
powerline fonts
https://github.com/vim-airline/vim-airline
https://github.com/powerline/fonts
sudo apt-get install fonts-powerline

plugins
----------------------------------
using tmux plugin manager to install tmux-resurrect
plugin manager: https://github.com/tmux-plugins/tpm
resurrect: https://github.com/tmux-plugins/tmux-resurrect
use <prefix> + I to install plugins
ubuntu/apt.md (new file)
@@ -0,0 +1,40 @@
was getting this error:

```
pt@r710:~$ sudo apt update
[sudo] password for pt:
Hit:1 http://download.virtualbox.org/virtualbox/debian bionic InRelease
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:3 https://dl.yarnpkg.com/debian stable InRelease
Hit:4 https://deb.nodesource.com/node_13.x bionic InRelease
Hit:5 http://apt.postgresql.org/pub/repos/apt bionic-pgdg InRelease
Hit:6 http://ppa.launchpad.net/certbot/certbot/ubuntu bionic InRelease
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Hit:8 https://download.jitsi.org stable/ InRelease
Ign:9 https://dl.packager.io/srv/deb/pghero/pghero/master/ubuntu 18.04 InRelease
Get:10 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:11 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease [3,626 B]
Get:12 http://archive.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1,726 kB]
Err:11 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease
The following signatures were invalid: EXPKEYSIG 51716619E084DAB9 Michael Rutter <marutter@gmail.com>
Ign:14 https://download.webmin.com/download/repository sarge InRelease
Hit:15 https://download.webmin.com/download/repository sarge Release
Get:17 https://dl.packager.io/srv/deb/pghero/pghero/master/ubuntu 18.04 Release
Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1,680 kB]
Fetched 3,663 kB in 2s (1,627 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease: The following signatures were invalid: EXPKEYSIG 51716619E084DAB9 Michael Rutter <marutter@gmail.com>
W: Failed to fetch https://cloud.r-project.org/bin/linux/ubuntu/bionic-cran40/InRelease The following signatures were invalid: EXPKEYSIG 51716619E084DAB9 Michael Rutter <marutter@gmail.com>
W: Some index files failed to download. They have been ignored, or old ones used instead.
```

ran the following command using the key signature from above.
it is supposed to import the gpg key from the target repository into the local database.
[askubuntu](https://askubuntu.com/questions/131601/gpg-error-release-the-following-signatures-were-invalid-badsig)

```
pt@r710:~$ gpg --keyserver keyserver.ubuntu.com --recv-keys 51716619E084DAB9
gpg: key 51716619E084DAB9: public key "Michael Rutter <marutter@gmail.com>" imported
gpg: Total number processed: 1
gpg: imported: 1
```
@@ -1,6 +1,6 @@
for windows
------------------

* `apt install cifs-utils`
* create target folder `mkdir //mnt/onedrive`
* `sudo mount.cifs //192.168.1.89/Users/fleet/OneDrive onedrive/ -o user=fleet`
@@ -1,10 +1,25 @@
scanning services that are running:

sudo nmap -T Aggressive -A -v 127.0.0.1 -p 1-10000

sudo netstat --tcp --udp --listening --program

lists programs with port numbers: `sudo netstat -tup`

sudo lsof +M -i4 -i6

# list all established connections that are not internal-only
sudo sockstat | grep "ESTAB" | grep -v ".*192\.168\.1\.110.*192\.168\.1\.110.*" | grep -v ".*127\.0\.0\.1.*127\.0\.0\.1.*"

let's encrypt certbot instructions for apache:
https://certbot.eff.org/lets-encrypt/ubuntubionic-apache

ip setup:
https://help.ubuntu.com/lts/serverguide/network-configuration.html

## network interfaces
`ip link` lists all interfaces
multipass set up some dummy interfaces and left them there.
to delete, did `ip link delete mpqemubr0-dummy`
ubuntu/new_data_server/install_jupyterlab.sh (new executable file)
@@ -0,0 +1,66 @@
#!/bin/bash

# Function to check if a command is available
command_exists() {
    command -v "$1" >/dev/null 2>&1
}

# Check if the script is running with root privileges
if [ "$EUID" -ne 0 ]; then
    echo "This script must be run with root privileges."
    exit 1
fi

# Check if Python is installed
if ! command_exists python3; then
    echo "Python3 is not installed. Please install Python3 first."
    exit 1
fi

# Update system packages
apt update

# Install required packages
apt install -y python3-pip

# Upgrade pip
pip3 install --upgrade pip

# Install JupyterLab
pip3 install jupyterlab

# Create a JupyterLab configuration directory
mkdir -p /etc/jupyterlab

# Generate the JupyterLab configuration file in /etc/jupyterlab
# (JUPYTER_CONFIG_DIR redirects the output there instead of ~/.jupyter)
JUPYTER_CONFIG_DIR=/etc/jupyterlab jupyter lab --generate-config -y

# Modify JupyterLab configuration to listen on all available interfaces
echo "c.ServerApp.ip = '0.0.0.0'" >> /etc/jupyterlab/jupyter_lab_config.py

# Create a systemd service file
# (systemd unit files do not allow trailing comments on a line, so the
#  placeholders below must be edited before running this script)
cat <<EOF > /etc/systemd/system/jupyterlab.service
[Unit]
Description=JupyterLab

[Service]
Type=simple
PIDFile=/run/jupyterlab.pid
ExecStart=/usr/local/bin/jupyter lab --config=/etc/jupyterlab/jupyter_lab_config.py
User=YOUR_USERNAME
WorkingDirectory=/path/to/jupyterlab

[Install]
WantedBy=multi-user.target
EOF

# Replace "YOUR_USERNAME" and "/path/to/jupyterlab" with your desired values.
# Make sure to specify the correct path to the JupyterLab installation directory.

# Enable and start the JupyterLab service
systemctl enable jupyterlab
systemctl start jupyterlab

echo "JupyterLab has been installed and set up as a systemd service."
echo "You can access it from other computers on the network by opening your browser and navigating to http://YOUR_SERVER_IP_OR_DOMAIN:8888"
36
ubuntu/new_data_server/new_server.md
Normal file
36
ubuntu/new_data_server/new_server.md
Normal file
@ -0,0 +1,36 @@
# Set up a new Linux server

## User and Dot Files

### SSH Keys

`ssh-keygen`

add the key to gitea and github

`git clone https://gitea.hptrow.me/pt/dot_config`

`cd dot_config`

`./setup.sh`

`cp .bash_local_example .bash_local`

edit PG and DB2PW

install nvim

install nvchad

## Firewall

`sudo ufw enable`

`sudo ufw limit 22`

`sudo ufw allow 5432`

`sudo ufw allow 8083`

## Postgres

### Copy Backups

`rsync -azv -e ssh ptrowbridge@usmidsap01://mnt/backup //mnt/backup`

### Config Files

`git clone https://gitea.hptrow.me`

install python3

setup postgres config files

setup java jdk

clone and build jrunner

clone jrunner_conf

point dbeaver jobs to new server

point powerbi to new server

point price list functions to new server

coordinate with Dwight for cash if applicable

setup jupyterlab
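The firewall steps above can be sketched as a dry run that only prints the commands, since `ufw` itself needs root (the port list 5432/8083 is taken from this page):

```shell
# Echo the ufw commands from the checklist above instead of running them;
# pipe the output to `sudo sh` when ready.
ufw_cmds() {
  echo "sudo ufw enable"
  echo "sudo ufw limit 22"
  for p in 5432 8083; do
    echo "sudo ufw allow $p"
  done
}
ufw_cmds
```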
(deleted file, 43 lines)

apt update

```
sudo apt update
sudo apt upgrade
# sometimes the network-manager service is not running after an update and addresses cannot be resolved
sudo service network-manager start
sudo ln -sf /run/resolvconf/resolv.conf /etc/resolv.conf
```

also had to reference [this article](https://askubuntu.com/questions/368435/how-do-i-fix-dns-resolving-which-doesnt-work-after-upgrading-to-ubuntu-13-10-s)

version control /etc

```
cd //etc
sudo git init
sudo git add .
sudo git commit -m "initial setup"
```

pspg pager

```
sudo apt-get install pspg
```

postgres

```
sudo vim /etc/apt/sources.list.d/pgdg.list
deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-11
```

vundle

```
git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim
```

dotfiles (depends on vundle currently)

```
git clone "https://fleetside@bitbucket.com/fleetside/dotfiles.git"
cp -R ~/dotfiles/. ~/
sudo rm -r dotfiles/
```
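The version-control-/etc step can be rehearsed against a scratch directory, so no sudo is needed (assumes git is installed; the identity flags are only for the demo commit):

```shell
# Same idea as `cd //etc; sudo git init; sudo git add .; sudo git commit`,
# but against a throwaway directory.
d=$(mktemp -d)
echo "hosts content" > "$d/hosts"
git -C "$d" init -q
git -C "$d" add .
git -C "$d" -c user.name=demo -c user.email=demo@example.com commit -q -m "initial setup"
git -C "$d" log --oneline
```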
(deleted file, 21 lines)

`//etc/systemd/system/filename.service`

```
[Unit]
Description=forecast_api
After=network.target

[Service]
ExecStart=/usr/bin/node //opt/forecast_api/index.js
Restart=always
User=fc_api
Environment=NODE_ENV=production
WorkingDirectory=//opt/forecast_api

[Install]
WantedBy=multi-user.target
```

`systemctl enable forecast_api.service`

`systemctl start forecast_api.service`
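Hand-typed unit files fail quietly on misspelled keys (e.g. `Environemnt` instead of `Environment`). On a real host `systemd-analyze verify` is the proper check; a rough grep sketch catches the common cases (the known-keys list here is just the keys used on this page, not systemd's full set):

```shell
# Rough lint for hand-typed unit files: print Key= names that aren't in a
# small allow-list. Not a substitute for `systemd-analyze verify`.
check_unit() {
  grep -E '^[A-Za-z]+=' "$1" | cut -d= -f1 | sort -u |
    grep -vxE 'Description|After|ExecStart|Restart|User|Environment|WorkingDirectory|WantedBy|Type|PIDFile' || true
}
f=$(mktemp)
printf '[Service]\nEnvironemnt=NODE_ENV=production\nUser=fc_api\n' > "$f"
check_unit "$f"
```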
ubuntu/ufw.md (new file, 43 lines)
if you don't specify a protocol it allows either tcp/udp

**ports**
```
sudo ufw allow 22
sudo ufw allow 22/tcp
```

**ranges**
```
sudo ufw allow 6000:6007/tcp
sudo ufw allow 6000:6007/udp
```

**specific ip**
```
sudo ufw allow from 203.0.113.4
sudo ufw allow from 203.0.113.4 to any port 22
```

enable firewall `sudo ufw enable`

## inquiry

```
pt@r710:~$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere
[ 2] 5432                       ALLOW IN    Anywhere
[ 3] 5440                       ALLOW IN    Anywhere
[ 4] 10000                      ALLOW IN    Anywhere
[ 5] 443/tcp                    ALLOW IN    Anywhere
[ 6] 5433/tcp                   ALLOW IN    Anywhere
[ 7] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
[ 8] 5432 (v6)                  ALLOW IN    Anywhere (v6)
[ 9] 5440 (v6)                  ALLOW IN    Anywhere (v6)
[10] 10000 (v6)                 ALLOW IN    Anywhere (v6)
[11] 443/tcp (v6)               ALLOW IN    Anywhere (v6)
[12] 5433/tcp (v6)              ALLOW IN    Anywhere (v6)
```
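The numbered listing is what `sudo ufw delete <n>` keys off. A sketch that pulls the rule numbers for a given port out of saved `ufw status numbered` text (a sample is embedded so it runs anywhere; `rules_for_port` is my own helper name):

```shell
# Extract rule numbers matching a port from `ufw status numbered` output,
# e.g. to feed into `sudo ufw delete <n>`.
rules_for_port() {
  grep -E "^\[ ?[0-9]+\] $2(/| )" "$1" | sed -E 's/^\[ ?([0-9]+)\].*/\1/'
}
status=$(mktemp)
cat <<'EOF' > "$status"
[ 1] 22/tcp                     ALLOW IN    Anywhere
[ 2] 5432                       ALLOW IN    Anywhere
[ 7] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
EOF
rules_for_port "$status" 22
```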
@ -18,3 +18,6 @@ usermod -a -G sudo fc_api

chown user_name directory/
chgrp user_name directory/

attempted to change a password and got a "Read-only file system" error; pulling a git repository failed with the same error.
ran `fsck /dev/sda5` to fix a whole list of issues, and the problem was resolved
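A note on the chown/chgrp lines above: changing a file's owner requires root, but changing its group to a group you already belong to does not. A small runnable sketch on a scratch directory:

```shell
# chown to another user needs root, but chgrp to one of your own
# groups does not -- demonstrated on a throwaway directory.
d=$(mktemp -d)
g=$(id -gn)          # our primary group name
chgrp "$g" "$d"
ls -ld "$d"
```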
vagrant.md (new file, 17 lines)
vagrant has releases at https://releases.hashicorp.com/vagrant/2.2.7/

curl -O https://releases.hashicorp.com/vagrant/2.2.7/vagrant_2.2.7_x86_64.deb

then `sudo apt install ./file.deb`

although if you go to the downloads page, it looks like it gives you a zip of the binary if you want to drop that somewhere.

vagrant init ubuntu/bionic64

edit the Vagrantfile to uncomment the public network option and specify some additional info:

`config.vm.network "public_network", bridge: "eno1", ip: "192.168.1.115"`

vagrant up
vagrant ssh
vagrant halt
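The `config.vm.network` line lives inside the Vagrantfile's configure block. A minimal sketch of the whole file (the bridge name `eno1` and IP are the example values from these notes; adjust for your NIC):

```
# Minimal Vagrantfile: bridged public network as described in the notes above
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  # uncommented public_network option, bridged to the host NIC with a static LAN IP
  config.vm.network "public_network", bridge: "eno1", ip: "192.168.1.115"
end
```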
vim.md (163 lines)
:Ex - use built in explorer to explore at location
:colorscheme with autocomplete
:vs vertical split
:sp horizontal split
:edit open a file
:ls list buffers
:b pick a buffer

plugins
------------------------
Vundler
* install per below
* add to .vimrc `Plugin 'gmarik/Vundle.vim'` and run :PluginInstall

NERDtree
* add to .vimrc `Plugin 'scrooloose/nerdtree'` and run :PluginInstall
* call with :NERDtree

fugitive - git commands in a split
* add to .vimrc `Plugin 'tpope/vim-fugitive'` and run :PluginInstall
* :Gdiff, :Gstatus etc.

powerline
* vim status and git status info
* add to .vimrc `Plugin 'Lokaltog/powerline', {'rtp': 'powerline/bindings/vim/'}` and run :PluginInstall

Vundler
---------------
git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim

add the following to ~/.vimrc:
```
set nocompatible              " be iMproved, required
filetype off                  " required

" set the runtime path to include Vundle and initialize
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
" alternatively, pass a path where Vundle should install plugins
"call vundle#begin('~/some/path/here')

" let Vundle manage Vundle, required
Plugin 'VundleVim/Vundle.vim'

" The following are examples of different formats supported.
" Keep Plugin commands between vundle#begin/end.
" plugin on GitHub repo
Plugin 'tpope/vim-fugitive'
" plugin from http://vim-scripts.org/vim/scripts.html
" Plugin 'L9'
" Git plugin not hosted on GitHub
Plugin 'git://git.wincent.com/command-t.git'
" git repos on your local machine (i.e. when working on your own plugin)
Plugin 'file:///home/gmarik/path/to/plugin'
" The sparkup vim script is in a subdirectory of this repo called vim.
" Pass the path to set the runtimepath properly.
Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}
" Install L9 and avoid a Naming conflict if you've already installed a
" different version somewhere else.
" Plugin 'ascenator/L9', {'name': 'newL9'}

" All of your Plugins must be added before the following line
call vundle#end()            " required
filetype plugin indent on    " required
" To ignore plugin indent changes, instead use:
"filetype plugin on
"
" Brief help
" :PluginList       - lists configured plugins
" :PluginInstall    - installs plugins; append `!` to update or just :PluginUpdate
" :PluginSearch foo - searches for foo; append `!` to refresh local cache
" :PluginClean      - confirms removal of unused plugins; append `!` to auto-approve removal
"
" see :h vundle for more details or wiki for FAQ
" Put your non-Plugin stuff after this line
```

after a large apt update, something got messed up with characters and colors; simply doing `syntax on` fixed the problem

when using NERDtree:
* open `o`
* open with a horizontal split `i`
* open with a vertical split `s`
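A couple of optional `~/.vimrc` lines that go with these notes (the `<F2>` mapping is my own choice, not from the notes, and assumes NERDTree is installed):

```
" optional: toggle NERDTree with F2 (key choice is arbitrary)
map <F2> :NERDTreeToggle<CR>
" re-enable highlighting if characters/colors get garbled
syntax on
```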
virtualbox.md (new file, 16 lines)
install from Oracle's repo

https://itsfoss.com/install-virtualbox-ubuntu/

add the key for the repo

wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -

add virtualbox to the list of repositories

sudo add-apt-repository "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib"

`apt-get install virtualbox-6.1`

vboxmanage is the cli program
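A guarded sketch of the usual first `vboxmanage` query, `list vms`; the guard lets the snippet run even on a machine where VirtualBox isn't installed:

```shell
# List registered VMs if vboxmanage is present; `list vms` and
# `list runningvms` are the usual starting points.
vbox_or_note() {
  if command -v vboxmanage >/dev/null 2>&1; then
    vboxmanage list vms
  else
    echo "vboxmanage not installed"
  fi
}
vbox_or_note
```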
webmin.md (new file, 2 lines)
install on Ubuntu
https://doxfer.webmin.com/Webmin/Installation#apt_.28Debian.2FUbuntu.2FMint.29
wekan.md (new file, 14 lines)
https://github.com/wekan/wekan-snap/wiki/Install

`snap set wekan root-url='https://example.com/something'`

`snap set wekan port='3001'`

caddy files exist but are not understood: //var/snap/wekan/common

### Mail Setup
https://github.com/wekan/wekan/wiki/Troubleshooting-Mail

sudo snap set wekan mail-url='smtp://paul%40hptrow.me:password@mail.gandi.net:587/?ignoreTLS=true&tls={rejectUnauthorized:false}&secure=true'
sudo snap set wekan mail-from='Wekan Team Boards <paul@hptrow.me>'
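Note the `%40` in mail-url: the `@` in the SMTP username has to be percent-encoded, since a literal `@` would end the userinfo part of the URL. A tiny sketch of producing that value (the address is the example one from above):

```shell
# Percent-encode the @ in the SMTP username for use inside mail-url.
smtp_user=$(echo "paul@hptrow.me" | sed 's/@/%40/')
echo "$smtp_user"
```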
windows.md (new file, 11 lines)
Windows 11 quick settings and notification area not working
https://support.microsoft.com/en-us/windows/how-to-open-notification-center-and-quick-settings-f8dc196e-82db-5d67-f55e-ba5586fbb038#WindowsVersion=Windows_11

Windows 11 keyboard repeat rate is slow
Control Panel -> Search "Keyboard" -> click "change cursor blink rate" -> repeat speed setting

Windows 11 taskbar size
regedit: search for `TaskbarSi`

Windows 11 Action Center disabled
regedit: `Software -> Policies -> Microsoft -> Windows -> Explorer -> DisableNotificationCenter = 0`
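For the taskbar-size tweak, common guides point at a `TaskbarSi` DWORD under Explorer\Advanced (0 = small, 1 = medium, 2 = large). A `.reg` sketch; the path and value name are from those guides and behavior varies by Windows 11 build, so verify before importing:

```
Windows Registry Editor Version 5.00

; TaskbarSi: 0 = small, 1 = medium (default), 2 = large
; not verified on every Windows 11 build
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"TaskbarSi"=dword:00000000
```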