Table of Contents
Neo4j ETL
Overview
Features
License
Issues & Feedback & Contributions
Download & Run
Examples of command usage:
Neo4j-Desktop
JDBC Drivers
Introduction
Architecture Diagram
What it is
Plans for the Future
Who is it for
Open Questions
neo4j-etl Command Line Tool
Available commands
'generate-metadata-mapping' command
'export' command
Parameters Usage
Example Session: Basic
Generate Metadata Mapping
Offline Bulk Import via neo4j-import tool for initial load (Neo4j database must be empty)
Online Batch Import via java-bolt-driver for incremental load (Neo4j can already be populated)
Example Session: Docker + Northwind
MySQL
PostgreSQL
Oracle
Microsoft SQL
How to import World Wide Importers database into a MS SQL server Docker instance
Capabilities
Inferring Schema with Mapping Rules (generate-metadata-mapping)
Edit Mapping via UI
Exporting Data (export)
Neo4j ETL
Overview
Features
Neo4j-ETL UI in Neo4j Desktop
Manage multiple RDBMS connections
Automatically extract database metadata from a relational database
Derive a graph model
Visually edit labels, relationship-types, property-names and types
Visualize the current model as a graph
Persist the mapping as JSON
Retrieve relevant CSV data from relational databases
Run import via neo4j-import, bolt-connector, cypher-shell, neo4j-shell
Bundles MySQL and PostgreSQL drivers; allows custom JDBC drivers with Neo4j Enterprise
License
This tool is licensed under the NEO4J PRE-RELEASE LICENSE AGREEMENT.
Issues & Feedback & Contributions
Download & Run
Download & unzip the latest neo4j-etl.zip.
Examples of command usage:
Minimal command line
./bin/neo4j-etl export \
--rdbms:url <url> --rdbms:user <user> --rdbms:password <password> \
--destination $NEO4J_HOME/data/databases/graph.db/ --import-tool $NEO4J_HOME/bin \
--csv-directory $NEO4J_HOME/import
Full set of command line options
./bin/neo4j-etl export \
--rdbms:url <url> --rdbms:user <user> --rdbms:password <password> --rdbms:schema <schema> \
--using { bulk:neo4j-import | cypher:neo4j-shell | cypher:shell | cypher:direct } \
--neo4j:url <url> --neo4j:user <user> --neo4j:password <password> \
--destination $NEO4J_HOME/data/databases/graph.db/ --import-tool $NEO4J_HOME/bin \
--csv-directory $NEO4J_HOME/import --options-file import-tool-options.json --force --debug
For detailed usage see also the tool documentation.
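The file passed via --options-file contains neo4j-import options as JSON. The example sessions later in this guide create it with a single multiline-fields entry, for example:
echo '{ "multiline-fields" : "true" }' > import-tool-options.json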
Neo4j-Desktop
You can add Neo4j ETL to Neo4j Desktop by adding the appropriate application key. Please ask your Neo4j contact or send an email to [email protected]
Then the next time you start Neo4j Desktop you’ll see Neo4j ETL as a UI to be used interactively.
Configure Driver
Load Mapping
Edit Mapping
Import Data
JDBC Drivers
The drivers for MySQL and PostgreSQL are bundled with the Neo4j-ETL tool.
To use other JDBC drivers, use the download links and JDBC URLs in the table below. Provide the JDBC driver jar file to the command line tool or the Neo4j-ETL application, and use the JDBC URL with the --rdbms:url
parameter or in the JDBC-URL input field (an invocation sketch follows the table).
Database | JDBC-URL | Driver Source
Oracle | jdbc:oracle:thin:<user>/<password>@<host>:<port>/<service> | Oracle JDBC Driver
MS SQLServer | jdbc:sqlserver://<host>;servername=<server>;databaseName=<database>;user=<user>;password=<password> | SQLServer Driver
IBM DB2 | jdbc:db2://<host>:<port>/<database>:user=<user>;password=<password>; | DB2 Driver
Derby | jdbc:derby:derbyDB | Included since JDK6
Cassandra | jdbc:cassandra://<host>:<port>/<keyspace> | Cassandra JDBC Wrapper
SAP Hana | jdbc:sap://<host>:<port>/?user=<user>&password=<password> | SAP Hana ngdbc Driver
MySQL | jdbc:mysql://<host>:<port>/<database>?user=<user>&password=<password> | MySQL Driver
PostgreSQL | jdbc:postgresql://<host>/<database>?user=<user>&password=<password> | PostgreSQL JDBC Driver
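As referenced above, a sketch of pointing the command line tool at a custom driver jar and JDBC URL (the jar path is a placeholder; the connection values mirror the Oracle example session later in this guide):
./bin/neo4j-etl export \
 --driver /path/to/ojdbc6.jar \
 --rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
 --rdbms:user northwind --rdbms:password northwind --rdbms:schema northwind \
 --destination $NEO4J_HOME/data/databases/graph.db/ --import-tool $NEO4J_HOME/bin \
 --csv-directory $NEO4J_HOME/import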
Introduction
The Neo4j ETL, especially the neo4j-etl
command-line tool, can be used to import well-modeled (i.e. normalized) relational data into Neo4j. It applies some simple rules for transforming the relational model.
The process is outlined below (a minimal end-to-end sketch follows the list):
Read database metadata and generate mapping.json
Optionally edit mapping.json with the neo4j-etl-ui in Neo4j Desktop
Export relational data to CSV
Generate Mapping Headers
Import into Neo4j using:
the neo4j-import tool for initial offline bulk load
the neo4j-shell tool for incremental offline bulk load
the cypher-shell tool for incremental online single-transaction load
the java bolt driver for incremental online batch load
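A minimal end-to-end sketch of this flow, reusing the Oracle Northwind connection from the example sessions below (the mapping file path is just an example):
$NEO4J_HOME/bin/neo4j-etl generate-metadata-mapping \
  --rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
  --rdbms:user northwind --rdbms:password northwind --rdbms:schema northwind \
  --output-mapping-file /tmp/northwind/mapping.json
$NEO4J_HOME/bin/neo4j-etl export \
  --rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
  --rdbms:user northwind --rdbms:password northwind --rdbms:schema northwind \
  --mapping-file /tmp/northwind/mapping.json \
  --using bulk:neo4j-import --import-tool $NEO4J_HOME/bin \
  --csv-directory /tmp/northwind --destination $NEO4J_HOME/data/databases/graph.db/ --force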
Architecture Diagram
What it is
Command-Line tools
Java API/library
Infer Schema and save in mapping file
Filter and merge strategies
Read mapping file to export data from relational databases, then import into Neo4j via different tools (neo4j-import, neo4j-shell, cypher-shell, java bolt driver)
Work in offline and online mode
Import into both an empty (initial load) and a non-empty graph (incremental)
Build indexes and constraints
Supported on Unix-like and Microsoft operating systems
Support for most popular relational databases like MySQL, PostgreSQL, Oracle and Microsoft SQL
Support user specified JDBC drivers
UI tool to visually modify mappings
Plans for the Future
Custom Mapping Rules + Transformations for names, data, links
Exemplary integration into a 3rd party ETL pipeline
More data types (binary, datetime, geo)
Who is it for
Developers learning to work with Neo4j for initial data import
Partners providing data integration with Neo4j
Enterprise developers building applications based on well modeled relational data
Open Questions
Date and binary datatypes
Security (secure connections, handling of passwords, encrypting data)
neo4j-etl Command Line Tool
This is the command-line tool you use to retrieve and map the metadata from your relational database and to drive the export from the relational database and the import into the Neo4j database.
With the graphical user interface you can preview the resulting graph data model and, where needed, adapt it by changing labels, property names, relationship types and property types.
It supports all relational databases with a JDBC driver, like MySQL, PostgreSQL, Oracle and Microsoft SQL.
You can get the latest version of the import tool from GitHub.
Once you have downloaded and uncompressed the operating-system-specific zip / tar.gz, you also need to download the appropriate JDBC driver and add it to the lib folder (a copy sketch follows the table below).
Follow the link in the table below to download the driver jar.
Vendor | JDBC Driver URL
MySql | http://dev.mysql.com/downloads/connector/j/
PostgreSql | https://jdbc.postgresql.org/download.html
Oracle | http://www.oracle.com/technetwork/database/features/jdbc/default-2280470.html
Microsoft SQL Server | https://www.microsoft.com/en-us/download/details.aspx?id=55539
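As mentioned above, a sketch of copying a downloaded driver into the tool's lib folder (both the jar file name and the unzipped tool directory are placeholders):
cp ~/Downloads/mysql-connector-java-<version>.jar <neo4j-etl-dir>/lib/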
NOTE
For very large databases make sure to have enough disk-space for the CSV export and the Neo4j datastore and enough RAM and CPUs to finish the import quickly.
Available commands
NAME
neo4j-etl generate-metadata-mapping - Create RDBMS to Neo4j metadata
mapping Json.
SYNOPSIS
neo4j-etl generate-metadata-mapping
[ {--columns | --cols} ... ]
[ --config-file ]
[ {-d | --database} ] [ --debug ]
[ --delimiter ] [ {--driver | --jars} <driver.jar>... ]
[ {--exclusion-mode | --exc} ]
[ {--exclusion-mode-column-type | --exctype} ]
[ {--exclusion-mode-columns | --excc} ]
[ {--exclusion-mode-tables | --exct} ]
[ --options-file ] [ --output-mapping-file ]
[ {-p | --port} ] [ --quote ]
[ {--rdbms:fetch-size | --fs} ]
[ {--rdbms:password | --password} ]
[ {--rdbms:schema | -s | --schema} ]
[ {--rdbms:url | --url} ]
[ {--rdbms:user | -u | --user} ]
[ {--relationship-name | --rel-name} ]
[ --schemas ... ] [ {--tables | --tabs} ... ]
[ --tiny-int ] [ --types ... ] [--] [ ... ]
OPTIONS
--columns , --cols
Lists all columns to include/exclude by name or pattern
Use '-r' to filter by regex, ex. '-r .*\.orders\..*_id' or
'northwind\.orders\..*_id' ,
'-g' for grep syntax, ex. '-g .*\.orders\..*_id' or
'northwind\.orders\..*_id' ,
or '-l' to list all column names, ex. '-l
northwind.customers.id,northwind.purchase.id,northwind.orders.id'
--config-file
Specify the path to a file containing the configuration for the
selected command
-d , --database
RDBMS database.
This option is required if any of the following options are
specified: host
--debug
Print detailed diagnostic output.
--delimiter
Delimiter to separate fields in CSV.
--driver <driver.jar>..., --jars <driver.jar>...
List of additional JDBC driver jar files.
--exclusion-mode , --exc
Specifies how to handle table exclusion. Options are mutually
exclusive.
exclude: Excludes specified tables from the process. All other
tables will be included.
include: Includes specified tables only. All other tables will be
excluded.
none: All tables are included in the process.
--exclusion-mode-column-type , --exctype
Specifies how to handle column type exclusion. Options are mutually
exclusive.
exclude: Excludes specified column types from the process. All
other column types will be included.
include: Includes specified column types only. All other column
types will be excluded.
none: All column types are included in the process.
--exclusion-mode-columns , --excc
Specifies how to handle column exclusion. Options are mutually
exclusive.
exclude: Excludes specified columns from the process. All other
columns will be included.
include: Includes specified columns only. All other columns will be
excluded.
none: All columns are included in the process.
--exclusion-mode-tables , --exct
Specifies how to handle table exclusion. Options are mutually
exclusive.
exclude: Excludes specified tables from the process. All other
tables will be included.
include: Includes specified tables only. All other tables will be
excluded.
none: All tables are included in the process.
--options-file
Path to file containing Neo4j import tool options.
--output-mapping-file
Path to the output metadata mapping file.
-p , --port
Port number to use for connection to RDBMS.
--quote
Character to treat as quotation character for values in CSV data.
--rdbms:fetch-size , --fs
RDBMS Fetch size
--rdbms:password , --password
Password for login to RDBMS.
This option is required if any of the following options are
specified: --rdbms:url, --url
--rdbms:schema , -s , --schema
RDBMS schema.
--rdbms:url , --url
Url to use for connection to RDBMS.
--rdbms:user , -u , --user
User for login to RDBMS.
This option is required if any of the following options are
specified: --rdbms:url, --url
--relationship-name , --rel-name
Specifies whether to get the name for relationships from table names
or column names.
--schemas
Lists all schemas to include by name or pattern.
Use '-r' to filter by regex, ex. '-r .*\.north.*',
'-g' for grep syntax, ex. '-g .*\.north.*' ,
or '-l' to list all schema names, ex. '-l northwind,exc'
--tables , --tabs
Lists all tables to include/exclude by name or pattern.
Use '-r' to filter by regex, ex. '-r .*\.purchase.*' or
'northwind.purchase.*' ,
'-g' for grep syntax, ex. '-g .*\.purchase.*' or
'northwind.purchase.*' ,
or '-l' to list all table names, ex. '-l
customers,purchase,orders'
--tiny-int
Specifies whether to convert TinyInt to byte or boolean
--types
Lists all column types to include/exclude by name separated by
commas. Valid values:
unknown,
binary,
bit,
character,
id,
integer,
real,
reference,
temporal,
url,
xml,
large_object,
object;
--
This option can be used to separate command-line options from the
list of arguments (useful when arguments might be mistaken for
command-line options)
Tables to be excluded/included
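For example, a sketch of generating a mapping for only a few tables, combining the filtering options above (the table list reuses the examples from the option descriptions; the exact quoting of the '-l' list on the command line is an assumption):
$NEO4J_HOME/bin/neo4j-etl generate-metadata-mapping \
  --rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
  --rdbms:user northwind --rdbms:password northwind --rdbms:schema northwind \
  --exclusion-mode-tables INCLUDE \
  --tables "-l customers,purchase,orders" \
  --output-mapping-file /tmp/northwind/mapping.json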
'export' command
NAME
neo4j-etl export - Export from RDBMS and import into NEO4J via CSV
files.
SYNOPSIS
neo4j-etl export [ {--columns | --cols} ... ]
[ --config-file ]
[ --csv-directory ]
[ {-d | --database} ] [ --debug ]
[ --delimiter ] [ --destination ] [ {--driver | --jars} <driver.jar>... ]
[ {--exclusion-mode | --exc} ]
[ {--exclusion-mode-column-type | --exctype} ]
[ {--exclusion-mode-columns | --excc} ]
[ {--exclusion-mode-tables | --exct} ]
[ --force ] [ --import-tool ]
[ --mapping-file ] [ {--neo4j:password | --graph:password | --graph:neo4j:password} ]
[ {--neo4j:url | --graph:url | --graph:neo4j:url} ]
[ {--neo4j:user | --graph:user | --graph:neo4j:user} ]
[ --options-file ] [ --output-mapping-file ]
[ {-p | --port} ] [ --quote ]
[ {--rdbms:fetch-size | --fs} ]
[ {--rdbms:password | --password} ]
[ {--rdbms:schema | -s | --schema} ]
[ {--rdbms:url | --url} ]
[ {--rdbms:user | -u | --user} ]
[ {--relationship-name | --rel-name} ]
[ --schemas ... ] [ {--tables | --tabs} ... ]
[ --tiny-int ] [ --types ... ]
[ --using ] [--] [ ... ]
OPTIONS
--columns , --cols
Lists all columns to include/exclude by name or pattern
Use '-r' to filter by regex, ex. '-r .*\.orders\..*_id' or
'northwind\.orders\..*_id' ,
'-g' for grep syntax, ex. '-g .*\.orders\..*_id' or
'northwind\.orders\..*_id' ,
or '-l' to list all column names, ex. '-l
northwind.customers.id,northwind.purchase.id,northwind.orders.id'
--config-file
Specify the path to a file containing the configuration for the
selected command
--csv-directory
Path to directory for intermediate CSV files.
-d , --database
RDBMS database.
This option is required if any of the following options are
specified: host
--debug
Print detailed diagnostic output.
--delimiter
Delimiter to separate fields in CSV.
--destination
Path to destination store directory.
--driver <driver.jar>..., --jars <driver.jar>...
List of additional JDBC driver jar files.
--exclusion-mode , --exc
Specifies how to handle table exclusion. Options are mutually
exclusive.
exclude: Excludes specified tables from the process. All other
tables will be included.
include: Includes specified tables only. All other tables will be
excluded.
none: All tables are included in the process.
--exclusion-mode-column-type , --exctype
Specifies how to handle column type exclusion. Options are mutually
exclusive.
exclude: Excludes specified column types from the process. All
other column types will be included.
include: Includes specified column types only. All other column
types will be excluded.
none: All column types are included in the process.
--exclusion-mode-columns , --excc
Specifies how to handle column exclusion. Options are mutually
exclusive.
exclude: Excludes specified columns from the process. All other
columns will be included.
include: Includes specified columns only. All other columns will be
excluded.
none: All columns are included in the process.
--exclusion-mode-tables , --exct
Specifies how to handle table exclusion. Options are mutually
exclusive.
exclude: Excludes specified tables from the process. All other
tables will be included.
include: Includes specified tables only. All other tables will be
excluded.
none: All tables are included in the process.
--force
Force delete destination store directory if it already exists.
--import-tool
Path to directory containing Neo4j import tool.
--mapping-file
Path to an existing metadata mapping file. The name 'stdin' will
cause the CSV resource definitions to be read from standard input
(see the piping sketch after this options list).
--neo4j:password , --graph:password ,
--graph:neo4j:password
Password for login to Neo4j.
--neo4j:url , --graph:url , --graph:neo4j:url
Url to use for connection to Neo4j.
--neo4j:user , --graph:user , --graph:neo4j:user
User for login to Neo4j.
--options-file
Path to file containing Neo4j import tool options.
--output-mapping-file
Path to the output metadata mapping file.
-p , --port
Port number to use for connection to RDBMS.
--quote
Character to treat as quotation character for values in CSV data.
--rdbms:fetch-size , --fs
RDBMS Fetch size
--rdbms:password , --password
Password for login to RDBMS.
This option is required if any of the following options are
specified: --rdbms:url, --url
--rdbms:schema , -s , --schema
RDBMS schema.
--rdbms:url , --url
Url to use for connection to RDBMS.
--rdbms:user , -u , --user
User for login to RDBMS.
This option is required if any of the following options are
specified: --rdbms:url, --url
--relationship-name , --rel-name
Specifies whether to get the name for relationships from table names
or column names.
--schemas
Lists all schemas to include by name or pattern.
Use '-r' to filter by regex, ex. '-r .*\.north.*',
'-g' for grep syntax, ex. '-g .*\.north.*' ,
or '-l' to list all schema names, ex. '-l northwind,exc'
--tables , --tabs
Lists all tables to include/exclude by name or pattern.
Use '-r' to filter by regex, ex. '-r .*\.purchase.*' or
'northwind.purchase.*' ,
'-g' for grep syntax, ex. '-g .*\.purchase.*' or
'northwind.purchase.*' ,
or '-l' to list all table names, ex. '-l
customers,purchase,orders'
--tiny-int
Specifies whether to convert TinyInt to byte or boolean
--types
Lists all column types to include/exclude by name separated by
commas. Valid values:
unknown,
binary,
bit,
character,
id,
integer,
real,
reference,
temporal,
url,
xml,
large_object,
object;
--using
Import tool that will be used to load data into neo4j.
--
This option can be used to separate command-line options from the
list of arguments (useful when arguments might be mistaken for
command-line options)
Tables to be excluded/included
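As a sketch of the 'stdin' value for --mapping-file noted above, the two commands can be piped together, assuming generate-metadata-mapping writes the mapping to standard output when no --output-mapping-file is given (connection values reuse the Oracle example sessions below):
$NEO4J_HOME/bin/neo4j-etl generate-metadata-mapping \
  --rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
  --rdbms:user northwind --rdbms:password northwind --rdbms:schema northwind | \
$NEO4J_HOME/bin/neo4j-etl export \
  --rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
  --rdbms:user northwind --rdbms:password northwind --rdbms:schema northwind \
  --mapping-file stdin \
  --using bulk:neo4j-import --import-tool $NEO4J_HOME/bin \
  --csv-directory /tmp/northwind --destination $NEO4J_HOME/data/databases/graph.db/ --force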
Parameters Usage
There are two ways to supply ETL parameters:
1) Pass the parameters on the command line:
$NEO4J_HOME/bin/neo4j-etl export|generate-metadata-mapping
--rdbms:url jdbc:oracle:thin:@localhost:49161:XE
--rdbms:user northwind --rdbms:password northwind
--rdbms:schema northwind
--using bulk:neo4j-import
--import-tool $NEO4J_HOME/bin
--csv-directory /tmp/northwind
--options-file /tmp/northwind/options.json
--quote '"' --force
...
2) Use a config file:
$NEO4J_HOME/bin/neo4j-etl export|generate-metadata-mapping \
--config-file <path-to-config-file>
Below is an example of a config file.
#EXAMPLE - ETL CONFIG FILE
#RDBMS
rdbms-url=url
rdbms-schema=schema
rdbms-password=neo4j
rdbms-user=neo4j
rdbms-fetch-size=10000
#NEO4J
using=cypher:direct
neo4j-url=bolt://127.0.0.1:7687
neo4j-user=neo4j
neo4j-password=neo4j
#RULES
exclusion-mode-tables=INCLUDE
tables=-l table1,table2,...
exclusion-mode-columns=INCLUDE
columns=-l column1,column2,...
exclusion-mode-column-types=EXCLUDE
column-types=type1,type2,...
#MISC
output-mapping-file=path_to_output_mapping_file
import-tool=path_to_import_tool
csv-directory=path_to_directory
mapping-file=path_to_file
debug=false
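For instance, a sketch of running the export with such a file (the file path is just an example):
$NEO4J_HOME/bin/neo4j-etl export --config-file /tmp/northwind/etl.properties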
Example Session: Basic
export NEO4J_HOME=/path/to/neo4j-enterprise-3.4.0
mkdir -p /tmp/northwind
$NEO4J_HOME/bin/neo4j-etl generate-metadata-mapping \
--rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
--rdbms:user northwind --rdbms:password northwind \
--rdbms:schema northwind --output-mapping-file /tmp/northwind/mapping.json
echo '{ "multiline-fields" : "true" }' > /tmp/northwind/options.json
$NEO4J_HOME/bin/neo4j-etl export \
--rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
--rdbms:user northwind --rdbms:password northwind \
--rdbms:schema northwind \
--using bulk:neo4j-import \
--import-tool $NEO4J_HOME/bin \
--csv-directory /tmp/northwind \
--options-file /tmp/northwind/options.json \
--quote '"' --force
Test Offline Bulk Import result
$NEO4J_HOME/bin/neo4j-shell -path $NEO4J_HOME/data/databases/graph.db/ -c 'MATCH (n) RETURN labels(n), count(*);'
+--------------------------+
| labels(n) | count(*) |
+--------------------------+
| ["Shipper"] | 3 |
| ["Employee"] | 9 |
| ["Region"] | 4 |
| ["Customer"] | 93 |
| ["Territory"] | 53 |
| ["Product"] | 77 |
| ["Supplier"] | 29 |
| ["Order"] | 830 |
| ["Category"] | 8 |
+--------------------------+
9 rows
Online Batch Import via java-bolt-driver for incremental load (Neo4j can already be populated)
echo '{ "multiline-fields" : "true" }' > /tmp/northwind/options.json
$NEO4J_HOME/bin/neo4j-etl export \
--rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
--rdbms:user northwind --rdbms:password northwind \
--rdbms:schema northwind \
--using cypher:direct \
--neo4j:url bolt://localhost:7687 \
--neo4j:user neo4j --neo4j:password neo4j \
--import-tool $NEO4J_HOME/bin \
--csv-directory /tmp/northwind \
--options-file /tmp/northwind/options.json \
--quote '"' --force
Test Online Batch Incremental Import result
$NEO4J_HOME/bin/cypher-shell -a bolt://localhost:7687 -u neo4j -p neo4j 'MATCH (n) RETURN labels(n), count(*);'
+--------------------------+
| labels(n) | count(*) |
+--------------------------+
| ["Shipper"] | 3 |
| ["Employee"] | 9 |
| ["Region"] | 4 |
| ["Customer"] | 93 |
| ["Territory"] | 53 |
| ["Product"] | 77 |
| ["Supplier"] | 29 |
| ["Order"] | 830 |
| ["Category"] | 8 |
+--------------------------+
9 rows
Example Session: Docker + Northwind
This example session is based on the Northwind example dataset.
DDL scripts are available here:
MySQL
PostgreSQL
Oracle
Microsoft SQL
MySQL
Download, start and configure the docker container with MySQL:
docker pull mysql
docker run --name neo4j-etl-mysql -e MYSQL_ROOT_PASSWORD=admin -e MYSQL_DATABASE=northwind -e MYSQL_USER=neo4j -e MYSQL_PASSWORD=neo4j -d -p 3306:3306 mysql:latest
docker exec -it neo4j-etl-mysql bash
root@eb6f279fdb88:/# mysql -u root -p
Enter password: admin
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.18 MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> grant all privileges on *.* to 'neo4j'@'%' with grant option;
Query OK, 0 rows affected (0.00 sec)
mysql> quit;
Bye
root@bf99fbc0d31c:/# exit
exit
Load the database via the following sql script: https://raw.githubusercontent.com/neo4j-contrib/neo4j-etl/master/neo4j-etl-it/src/main/resources/scripts/mysql/northwind.sql
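A sketch of one way to load that script into the running container, assuming it was downloaded to the current directory:
wget https://raw.githubusercontent.com/neo4j-contrib/neo4j-etl/master/neo4j-etl-it/src/main/resources/scripts/mysql/northwind.sql
docker exec -i neo4j-etl-mysql mysql -u neo4j -pneo4j northwind < northwind.sql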
export NEO4J_HOME=/path/to/neo4j-enterprise-3.4.0
mkdir -p /tmp/northwind
echo '{ "multiline-fields" : "true" }' > /tmp/northwind/options.json
./bin/neo4j-etl export \
--rdbms:url "jdbc:mysql://localhost:3306/northwind?autoReconnect=true&useSSL=false" \
--rdbms:user neo4j --rdbms:password neo4j \
--import-tool $NEO4J_HOME/bin \
--options-file /tmp/northwind/options.json \
--csv-directory /tmp/northwind \
--destination $NEO4J_HOME/data/databases/graph.db/ \
--quote '"' --force
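To sanity-check the bulk import, start Neo4j and count nodes per label as in the basic example session above (the credentials are assumptions; adjust them to your local setup):
$NEO4J_HOME/bin/neo4j start
$NEO4J_HOME/bin/cypher-shell -a bolt://localhost:7687 -u neo4j -p neo4j 'MATCH (n) RETURN labels(n), count(*);'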
PostgreSQL
Download, start and configure the docker container with PostgreSQL 9.6.2:
docker pull postgres
docker run --name neo4j-etl-postgres -e POSTGRES_USER=neo4j -e POSTGRES_PASSWORD=neo4j -d -p 5433:5432 postgres
docker run -it --rm --link neo4j-etl-postgres:postgres postgres psql -h postgres -U neo4j
Password for user neo4j:
psql (9.6.2)
Type "help" for help.
neo4j=# DROP DATABASE IF EXISTS northwind;
neo4j=# CREATE DATABASE northwind WITH OWNER 'neo4j' ENCODING 'UTF8' LC_COLLATE = 'en_US.utf8' LC_CTYPE = 'en_US.utf8';
neo4j=# \q
Load the database via the following sql script: northwind.sql
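A sketch of one way to load it, reusing a temporary postgres container as a psql client and assuming the script was downloaded to /tmp/northwind.sql:
docker run -it --rm --link neo4j-etl-postgres:postgres -v /tmp:/tmp postgres psql -h postgres -U neo4j -d northwind -f /tmp/northwind.sql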
export NEO4J_HOME=/path/to/neo4j-enterprise-3.4.0
mkdir -p /tmp/northwind
echo '{"multiline-fields":"true"}' > /tmp/northwind/options.json
./bin/neo4j-etl export \
--rdbms:url jdbc:postgresql://localhost:5433/northwind?ssl=false \
--rdbms:user neo4j --rdbms:password neo4j \
--import-tool $NEO4J_HOME/bin \
--options-file /tmp/northwind/options.json \
--csv-directory /tmp/northwind \
--destination $NEO4J_HOME/data/databases/graph.db/ \
--quote '"' --force
Oracle
Download, start and configure the docker container with Oracle XE 11g:
docker pull wnameless/oracle-xe-11g
docker run --name neo4j-etl-oracle -d -p 49160:22 -p 49161:1521 wnameless/oracle-xe-11g
ssh root@localhost -p 49160
root@localhost's password: admin
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.9.13-moby x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Mon May 1 17:32:48 2017 from 172.17.0.1
root@692c446a274b:~# wget https://raw.githubusercontent.com/neo4j-contrib/neo4j-etl/master/neo4j-etl-it/src/main/resources/scripts/oracle/northwind.sql
root@692c446a274b:~# sqlplus system/oracle
SQL> CREATE USER northwind IDENTIFIED BY northwind;
SQL> GRANT DBA TO northwind;
SQL> CONN northwind/northwind;
SQL> SET sqlblanklines ON;
SQL> @northwind.sql
SQL> quit;
root@692c446a274b:~# exit
export NEO4J_HOME=/path/to/neo4j-enterprise-3.4.0
mkdir -p /tmp/northwind
echo '{"multiline-fields":"true"}' > /tmp/northwind/options.json
./bin/neo4j-etl export \
--rdbms:url jdbc:oracle:thin:@localhost:49161:XE \
--rdbms:user northwind --rdbms:password northwind \
--rdbms:schema northwind \
--import-tool $NEO4J_HOME/bin \
--options-file /tmp/northwind/options.json \
--csv-directory /tmp/northwind \
--destination $NEO4J_HOME/data/databases/graph.db/ \
--quote '"' --force \
--driver /tmp/ojdbc6-11.2.0.3.jar
Microsoft SQL
Download, start and configure the docker container with Microsoft SQL Server:
docker run --name neo4j-etl-mssql -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd!' -p 1433:1433 -d microsoft/mssql-server-linux
If you want to connect to Microsoft SQL client console then you can run the following command:
docker exec -it neo4j-etl-mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'Passw0rd!' -d <database-name>
export NEO4J_HOME=/path/to/neo4j-enterprise-3.4.0
mkdir -p /tmp/wideworldimporters
echo '{"multiline-fields":"true"}' > /tmp/wideworldimporters/options.json
./bin/neo4j-etl export \
--rdbms:password "Passw0rd!" \
--rdbms:user sa \
--rdbms:url "jdbc:sqlserver://localhost:1433;databaseName=WideWorldImporters" \
--import-tool $NEO4J_HOME/bin \
--options-file /tmp/wideworldimporters/options.json \
--csv-directory /tmp/wideworldimporters \
--destination $NEO4J_HOME/data/databases/graph.db/ \
--driver /tmp/mssql-jdbc-6.1.0.jre8.jar
How to import World Wide Importers database into a MS SQL server Docker instance
# Create docker instance for MS-SQL Server
docker run --name mssql-etl \
-e MSSQL_COLLATION=Latin1_General_100_CI_AS \
-e 'ACCEPT_EULA=Y' \
-e 'SA_PASSWORD=<your-strong-password>' \
-p 1433:1433 \
-v /tmp:/tmp \
-d microsoft/mssql-server-linux:2017-latest
# Download World Wide Importers backup file
wget https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Full.bak
# Create a backup directory
sudo docker exec -it mssql-etl mkdir /var/opt/mssql/backup
# Load backup file into the container
sudo docker cp WideWorldImporters-Full.bak mssql-etl:/var/opt/mssql/backup
# Restore Wide World Importers database
sudo docker exec -it mssql-etl /opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U SA \
-P '<your-strong-password>' \
-Q 'RESTORE FILELISTONLY FROM DISK = "/var/opt/mssql/backup/WideWorldImporters-Full.bak"' \
| tr -s ' ' \
| cut -d ' ' -f 1-2
sudo docker exec -it mssql-etl /opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U SA \
-P '<your-strong-password>' \
-Q 'RESTORE DATABASE WideWorldImporters FROM DISK = "/var/opt/mssql/backup/WideWorldImporters-Full.bak" WITH MOVE "WWI_Primary" TO "/var/opt/mssql/data/WideWorldImporters.mdf", MOVE "WWI_UserData" TO "/var/opt/mssql/data/WideWorldImporters_userdata.ndf", MOVE "WWI_Log" TO "/var/opt/mssql/data/WideWorldImporters.ldf", MOVE "WWI_InMemory_Data_1" TO "/var/opt/mssql/data/WideWorldImporters_InMemory_Data_1"'
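A quick way to confirm the restore succeeded (use the same SA password as above):
sudo docker exec -it mssql-etl /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U SA -P '<your-strong-password>' \
  -Q 'SELECT name FROM sys.databases'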
Capabilities
Generic relational database mapping based on the following rules
A table with a foreign key is treated as a Join and imported as a node with a relationship
Ex: Person -> Address
is imported as (Person)-[:ADDRESS_ID]->(Address)
A table that has two foreign keys is treated as a JoinTable and imported as a relationship
Ex: Student <- Student_Course -> Course
is imported as (Student)-[:STUDENT_COURSE]->(Course)
A table that has more than two foreign keys is treated as an intermediate node and imported as node with multiple relationships
Ex: Order_Detail -> Shipping_Address, Order_Detail -> Payment_Information, Order_Detail -> Shipment_Instructions
is imported as
(Shipping_Address) -[:SHIPPING]-> (Order_Detail)
(Payment_Information) -[:PAYMENT]-> (Order_Detail)
(Shipment_Instructions) -[:SHIPMENT]-> (Order_Detail)
Resolve relationships through composite keys.
Support most of the data types.
TinyInt can be imported either as a Byte or as a Boolean (this is to support boolean values being saved in MySQL as TinyInt)
Dates are imported as String
Blobs are skipped while importing until the import-tool supports binary array data.
Decimal to be confirmed.
Relationship names can be taken either from the column name or from the name of the referenced table
Filter tables that you want to include or exclude using --include and --exclude
TODO: Filter columns that you want to include or exclude using --include and --exclude
TODO: Retaining natural keys (marked as PrimaryKeys and ForeignKeys) as needed using a flag.
A Foreign Key is usually used to create a relationship between two nodes without being saved as a property.
With this flag, the node would keep that value as a property.
Ex: A loan has the SSN of the loan applicant which would normally be used to connect the Loan and Person nodes.
With this flag the Loan node will also keep the SSN as a property.
Edit Mapping via UI
A Neo4j-ETL graph application can be added to Neo4j Desktop, which allows visual editing of the mapping and interactive import.
The UI allows you to change and set your preferred label names, property names and types, and relationship types, with a preview of the resulting graph.
Exporting Data (export)
Last updated 2018-05-30 13:24:51 CEST