Transaction propagation
Transaction propagation comes up when one transactional method calls another. For example: I have 100 dollars and buy two books, one for 60 and one for 50; each purchase must also decrement inventory, and that decrement runs in a transaction of its own. How the inner transaction relates to the outer purchase transaction is transaction propagation.
Spring defines seven propagation behaviors; the two important ones are propagation = REQUIRED and REQUIRES_NEW.
REQUIRED joins the caller's existing transaction, so everything runs as one transaction and is consistent as a whole; REQUIRES_NEW suspends the current transaction and opens a new one, so consistency is local to each transaction.
The result in the example, when the second purchase fails (only 40 dollars left): with REQUIRED the first purchase rolls back too, so neither book is bought; with REQUIRES_NEW the first purchase has already committed in its own transaction, so the first book is still bought.
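The two behaviors can be sketched in plain Java. This is a toy simulation, not Spring itself (real code would use @Transactional(propagation = Propagation.REQUIRED / REQUIRES_NEW)); Wallet, buy, and the method names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class PropagationDemo {
    static class Wallet {
        int balance = 100;
        List<Integer> bought = new ArrayList<>();
    }

    // One purchase: debit the wallet or throw.
    static void buy(Wallet w, int price) {
        if (w.balance < price) throw new IllegalStateException("insufficient funds");
        w.balance -= price;
        w.bought.add(price);
    }

    // REQUIRED: both purchases join one transaction, so any failure rolls back everything.
    static Wallet buyWithRequired() {
        Wallet w = new Wallet();
        int savedBalance = w.balance;                      // "begin" the single transaction
        List<Integer> savedBought = new ArrayList<>(w.bought);
        try {
            buy(w, 60);
            buy(w, 50);                                    // fails: only 40 left
        } catch (RuntimeException e) {
            w.balance = savedBalance;                      // roll the whole thing back
            w.bought = savedBought;
        }
        return w;
    }

    // REQUIRES_NEW: each purchase runs (and commits) in its own transaction.
    static Wallet buyWithRequiresNew() {
        Wallet w = new Wallet();
        for (int price : new int[] {60, 50}) {
            int savedBalance = w.balance;                  // "begin" a fresh transaction
            List<Integer> savedBought = new ArrayList<>(w.bought);
            try {
                buy(w, price);                             // success = commit
            } catch (RuntimeException e) {
                w.balance = savedBalance;                  // roll back only this purchase
                w.bought = savedBought;
            }
        }
        return w;
    }

    public static void main(String[] args) {
        Wallet required = buyWithRequired();
        Wallet requiresNew = buyWithRequiresNew();
        // REQUIRED ends with balance 100 and 0 books; REQUIRES_NEW with balance 40 and 1 book.
        System.out.println("REQUIRED:     balance=" + required.balance + ", books=" + required.bought.size());
        System.out.println("REQUIRES_NEW: balance=" + requiresNew.balance + ", books=" + requiresNew.bought.size());
    }
}
```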
Transaction Isolation
Database transaction concurrency issues
Dirty read: the current transaction reads data that another transaction has updated but not yet committed.
Non-repeatable read: transaction 1 reads a row, transaction 2 changes it, and when transaction 1 reads again the data no longer matches the first read.
Phantom read: transaction 2 inserts a row, and when transaction 1 reads again it finds one more row than before.
Read uncommitted
Read committed
Repeatable read (roughly: the rows that were read are locked so they cannot be updated, but new rows can still be inserted, so phantom reads remain possible)
Serializable
Set with SET TRANSACTION ISOLATION LEVEL <level>
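From Java the same four levels are set through JDBC instead of raw SQL. Connection and setTransactionIsolation are the real JDBC API; the connection URL in the comment is only a placeholder, not something from this document:

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        System.out.println("READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println("READ_COMMITTED   = " + Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println("REPEATABLE_READ  = " + Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println("SERIALIZABLE     = " + Connection.TRANSACTION_SERIALIZABLE);     // 8
        // With a live connection (placeholder URL):
        // Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test");
        // conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
    }
}
```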
Redis persistence comes in two forms.
RDB (full): a snapshot of all in-memory data. A forked child process writes it, so the main process is not blocked; efficient and fast, and recovery from the snapshot is also fast.
Cons: not great when losing data matters. A snapshot is only taken when the configured save conditions are met, so writes made after the last snapshot are lost on a crash.
AOF (incremental, complementary to RDB):
works as a log of write commands, so it is finer-grained;
recovery re-runs all the logged writes.
With RDB, just loading the snapshot file restores the data.
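The two forms correspond to a few redis.conf directives; a minimal fragment (the save thresholds shown are the common defaults):

```conf
# RDB: snapshot if >=1 key changed in 900s, >=10 in 300s, or >=10000 in 60s
save 900 1
save 300 10
save 60 10000
dbfilename dump.rdb

# AOF: append every write command, fsync once per second
appendonly yes
appendfsync everysec
```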
When is it appropriate to create an index:
The drawback of an index: queries get faster, but everything else (writes) gets slower, because the sorted index structure must be maintained on every insert/update/delete; an index also takes up disk space. Good candidates:
Frequently queried columns
Foreign key columns
Composite indexes for columns queried together
Columns used for grouping and sorting (GROUP BY / ORDER BY)
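A composite index can be pictured as one sorted structure keyed by the concatenated columns. This toy TreeMap sketch (column names and data are invented) shows why the leftmost column alone can still use the index, via a range scan on the prefix, while the second column alone forces a full scan:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class CompositeIndexSketch {
    // key = last_name + "|" + first_name, value = row id
    static TreeMap<String, Integer> buildIndex() {
        TreeMap<String, Integer> index = new TreeMap<>();
        index.put("smith|alice", 1);
        index.put("smith|bob", 2);
        index.put("young|carol", 3);
        return index;
    }

    // WHERE last_name = ? : a range scan over the leftmost prefix uses the ordering.
    static int byLastName(TreeMap<String, Integer> index, String last) {
        SortedMap<String, Integer> hits = index.subMap(last + "|", last + "|\uffff");
        return hits.size();
    }

    // WHERE first_name = ? alone cannot use the ordering: full scan of every key.
    static long byFirstName(TreeMap<String, Integer> index, String first) {
        return index.keySet().stream().filter(k -> k.endsWith("|" + first)).count();
    }

    public static void main(String[] args) {
        TreeMap<String, Integer> index = buildIndex();
        System.out.println(byLastName(index, "smith"));  // 2
        System.out.println(byFirstName(index, "bob"));   // 1
    }
}
```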
JVM's garbage collection mechanism
GC
GC mainly works on the heap.
Two ways to decide whether an object is garbage:
Reference counting (no way to deal with circular references)
Reachability analysis: objects not reachable from the GC roots are garbage
Copying algorithm (young generation, minor GC): two spaces, live objects are copied from one to the other. Efficient, direct copy, no memory fragmentation, but it needs double the space.
Mark-sweep algorithm (old generation): mark first, then sweep the unmarked objects; leaves memory fragmentation.
Advantage: no extra space needed.
Also used in the old generation (full GC): mark-compact,
whose disadvantage is the cost of moving objects.
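The circular-reference weakness can be shown with a toy reference counter (the JVM's real GC uses reachability from the roots precisely to avoid this; Obj and makeCycle are invented for the sketch):

```java
import java.util.ArrayList;
import java.util.List;

public class RefCountCycle {
    static class Obj {
        int refCount = 0;
        List<Obj> fields = new ArrayList<>();
        void pointTo(Obj other) {
            fields.add(other);
            other.refCount++;      // counted reference from one object to another
        }
    }

    // Build two objects that only reference each other.
    static Obj[] makeCycle() {
        Obj a = new Obj();
        Obj b = new Obj();
        a.pointTo(b);
        b.pointTo(a);
        return new Obj[] {a, b};
    }

    public static void main(String[] args) {
        Obj[] pair = makeCycle();
        // No counted reference from outside the cycle exists, yet each count stays at 1,
        // so a pure reference counter would never reclaim either object.
        System.out.println("a.refCount=" + pair[0].refCount + ", b.refCount=" + pair[1].refCount);
    }
}
```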
Redis usage scenarios in the project:
Storing user information: a plain string is not recommended for this kind of data, because the whole user object (id, name, age) has to be serialized into one string, and every get deserializes all of it; that serialization and deserialization is wasted work when only one field is needed. So a hash is used, with one field per attribute.
Sets for friend lists: a friend cannot be added twice, and set operations give mutual friends via intersection, etc.
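On the Redis side these patterns map to HSET/HGET for the user hash and SADD/SINTER for friend sets; this plain-Java analogue (names invented, no Redis server needed) shows the two access patterns:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RedisPatterns {
    // Like HGET user:<id> <field>: read one attribute without deserializing the rest.
    static String userField(Map<String, String> userHash, String field) {
        return userHash.get(field);
    }

    // Like SINTER friends:a friends:b: mutual friends via set intersection.
    static Set<String> mutualFriends(Set<String> a, Set<String> b) {
        Set<String> mutual = new HashSet<>(a);
        mutual.retainAll(b);
        return mutual;
    }

    public static void main(String[] args) {
        Map<String, String> user1 = new HashMap<>();   // user:1 stored as a hash
        user1.put("id", "1");
        user1.put("name", "alice");
        user1.put("age", "30");
        System.out.println(userField(user1, "name"));  // alice

        Set<String> friendsOfA = new HashSet<>(Arrays.asList("bob", "carol", "dave"));
        Set<String> friendsOfB = new HashSet<>(Arrays.asList("carol", "dave", "erin"));
        System.out.println(mutualFriends(friendsOfA, friendsOfB).size()); // 2
    }
}
```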