Reposted from: http://wiki.eclipse.org/EclipseLink/Release/2.5/JPA21
This page contains a summary of the major JPA 2.1 (JSR 338) features implemented in EclipseLink. The features and examples on this page do not represent a complete list. For more information, please see the JSR 338 page.
Until JPA 2.1, deletes and updates could not be performed through the Criteria API. With the addition of the CriteriaUpdate and CriteriaDelete classes, support for bulk update and delete queries has now been added.
The following example updates the salary and status of all Employees who make less than $10,000, giving them a raise.
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaUpdate<Employee> q = cb.createCriteriaUpdate(Employee.class);
Root<Employee> e = q.from(Employee.class);
q.set(e.get(Employee_.salary), cb.prod(e.get(Employee_.salary), 1.1f))
 .set(e.get(Employee_.status), "full_time")
 .where(cb.lt(e.get(Employee_.salary), 10000));
The following Java Persistence query language update statement is equivalent.
UPDATE Employee e SET e.salary = e.salary * 1.1, e.status = 'full_time' WHERE e.salary < 10000
The following example deletes all the PhoneNumbers that are no longer in service.
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaDelete<PhoneNumber> q = cb.createCriteriaDelete(PhoneNumber.class);
Root<PhoneNumber> p = q.from(PhoneNumber.class);
q.where(cb.equal(p.get(PhoneNumber_.status), "out_of_service"));
The following Java Persistence query language delete statement is equivalent.
DELETE FROM PhoneNumber p WHERE p.status = 'out_of_service'
The JPA 2.1 specification introduces support for executing stored procedure calls. This includes a new StoredProcedureQuery API and named stored procedure queries (pre-existing stored procedures defined on the database).
All the stored procedure examples below assume the stored procedures already exist on the database. Stored procedure creation differs between databases; the following stored procedure definitions use MySQL syntax (unless otherwise specified).
Stored procedure definition on MySQL:
CREATE PROCEDURE getIds()
BEGIN
    SELECT ID FROM CUSTOMER ORDER BY ID ASC;
END
Build the query:
StoredProcedureQuery spq = em.createStoredProcedureQuery("getIds", Customer.class);
Execute the query:
List customers = spq.getResultList();
Alternatively, users can call spq.execute() directly (which is what getResultList() will call behind the scenes). The execute method will return a boolean indicating true if a result set is returned and false otherwise.
boolean result = spq.execute();

if (result) {
    customers = spq.getResultList();
} else {
    // Handle the case where no result set is returned, e.g.
    throw new RuntimeException("No result set(s) returned from the stored procedure");
}
Stored procedure definition on MySQL:
CREATE PROCEDURE Update_Address_Postal_Code (new_p_code_v VARCHAR(255), old_p_code_v VARCHAR(255))
BEGIN
    UPDATE ADDRESS SET P_CODE = new_p_code_v WHERE P_CODE = old_p_code_v;
END
Build the query:
StoredProcedureQuery spq = em.createStoredProcedureQuery("Update_Address_Postal_Code");
spq.registerStoredProcedureParameter("new_p_code_v", String.class, ParameterMode.IN);
spq.registerStoredProcedureParameter("old_p_code_v", String.class, ParameterMode.IN);
Execute the query:
spq.setParameter("new_p_code_v", "123 NEW"); spq.setParameter("old_p_code_v", "321 OLD"); int updateCount = spq.executeUpdate();
Alternatively, the user could call the execute method directly (also note the parameters can be chained):
spq.setParameter("new_p_code_v", "123 NEW").setParameter("old_p_code_v", "321 OLD").execute(); int updateCount = spq.getUpdateCount();
Stored procedure definition on MySQL:
CREATE PROCEDURE Read_Address_City (address_id_v INTEGER, OUT city_v VARCHAR(255))
BEGIN
    SELECT CITY INTO city_v FROM ADDRESS WHERE (ADDRESS_ID = address_id_v);
END
Build the query:
StoredProcedureQuery query = em.createStoredProcedureQuery("Read_Address_City");
query.registerStoredProcedureParameter("address_id_v", Integer.class, ParameterMode.IN);
query.registerStoredProcedureParameter("city_v", String.class, ParameterMode.OUT);
Execute the query:
boolean resultSet = query.setParameter("address_id_v", 1).execute();

if (resultSet) {
    // Result sets must be processed first through getResultList() calls.
}

// Once the result sets and update counts have been processed,
// output parameters are available for processing.
String city = (String) query.getOutputParameterValue("city_v");
Stored procedure definition on Oracle:
CREATE PROCEDURE Read_Using_Sys_Cursor (f_name_v VARCHAR2, p_recordset OUT SYS_REFCURSOR)
AS
BEGIN
    OPEN p_recordset FOR
        SELECT EMP_ID, F_NAME FROM EMPLOYEE WHERE F_NAME = f_name_v ORDER BY EMP_ID;
END;
Build the query:
StoredProcedureQuery query = em.createStoredProcedureQuery("Read_Using_Sys_Cursor", Employee.class);
query.registerStoredProcedureParameter("f_name_v", String.class, ParameterMode.IN);
query.registerStoredProcedureParameter("p_recordset", void.class, ParameterMode.REF_CURSOR);
Execute the query:
query.setParameter("f_name_v", "Fred"); boolean execute = query.execute(); Listemployees = (List ) query.getOutputParameterValue("p_recordset");
Named stored procedures are those that are specified through metadata and uniquely identified by name.
Stored procedure definition on MySQL:
CREATE PROCEDURE Read_Address (address_id_v INTEGER)
BEGIN
    SELECT ADDRESS_ID, STREET, CITY, COUNTRY, PROVINCE, P_CODE FROM ADDRESS WHERE (ADDRESS_ID = address_id_v);
END
Annotation Example
@NamedStoredProcedureQuery(
    name = "ReadAddressByID",
    resultClasses = Address.class,
    procedureName = "Read_Address",
    parameters = {
        @StoredProcedureParameter(mode = ParameterMode.IN, name = "address_id_v", type = Integer.class)
    }
)
public class Address {
    ....
}
XML Example
<!-- orm.xml equivalent of the annotation above -->
<named-stored-procedure-query name="ReadAddressByID" procedure-name="Read_Address">
    <parameter name="address_id_v" mode="IN" class="java.lang.Integer"/>
    <result-class>Address</result-class>
</named-stored-procedure-query>
Execution
EntityManager em = createEntityManager();
em.createNamedStoredProcedureQuery("ReadAddressByID").setParameter("address_id_v", 1).getSingleResult();
Stored procedure definition on MySQL:
CREATE PROCEDURE Read_Multiple_Result_Sets ()
BEGIN
    SELECT E.*, S.* FROM EMPLOYEE E, SALARY S WHERE E.EMP_ID = S.EMP_ID;

    SELECT A.* FROM ADDRESS A;

    SELECT (t1.BUDGET/t0.PROJ_ID) AS BUDGET_SUM, t0.PROJ_ID, t0.PROJ_TYPE, t0.PROJ_NAME, t0.DESCRIP,
           t0.LEADER_ID, t0.VERSION, t1.BUDGET, t2.PROJ_ID AS SMALL_ID, t2.PROJ_TYPE AS SMALL_DESCRIM,
           t2.PROJ_NAME AS SMALL_NAME, t2.DESCRIP AS SMALL_DESCRIPTION, t2.LEADER_ID AS SMALL_TEAMLEAD,
           t2.VERSION AS SMALL_VERSION
    FROM PROJECT t0, PROJECT t2, LPROJECT t1
    WHERE t1.PROJ_ID = t0.PROJ_ID AND t2.PROJ_TYPE = 'S';

    SELECT t0.EMP_ID, t0.F_NAME, t0.L_NAME, COUNT(t2.DESCRIPTION) AS R_COUNT
    FROM EMPLOYEE t0, RESPONS t2, SALARY t1
    WHERE ((t1.EMP_ID = t0.EMP_ID) AND (t2.EMP_ID = t0.EMP_ID))
    GROUP BY t0.EMP_ID, t0.F_NAME, t0.L_NAME;
END
Build the query:
This is one example (of many) of how to configure such a query. Queries and result set mappings can be defined solely in annotations or XML, or a mix of both. All the metadata can be defined on a single class or split up across many.
@NamedStoredProcedureQuery(
    name = "ReadUsingMultipleResultSetMappings",
    procedureName = "Read_Multiple_Result_Sets",
    resultSetMappings = {"EmployeeResultSetMapping", "AddressResultSetMapping", "ProjectResultSetMapping", "EmployeeConstructorResultSetMapping"}
)
@SqlResultSetMappings({
    @SqlResultSetMapping(
        name = "EmployeeResultSetMapping",
        entities = {
            @EntityResult(entityClass = Employee.class)
        }
    ),
    @SqlResultSetMapping(
        name = "EmployeeConstructorResultSetMapping",
        classes = {
            @ConstructorResult(
                targetClass = EmployeeDetails.class,
                columns = {
                    @ColumnResult(name = "EMP_ID", type = Integer.class),
                    @ColumnResult(name = "F_NAME", type = String.class),
                    @ColumnResult(name = "L_NAME", type = String.class),
                    @ColumnResult(name = "R_COUNT", type = Integer.class)
                }
            )
        }
    )
})
public class Employee {
    ....
}
@SqlResultSetMapping( name = "ProjectResultSetMapping", columns = { @ColumnResult(name = "BUDGET_SUM") }, entities = { @EntityResult( entityClass = Project.class ), @EntityResult( entityClass = SmallProject.class, fields = { @FieldResult(name = "id", column = "SMALL_ID"), @FieldResult(name = "name", column = "SMALL_NAME"), @FieldResult(name = "description", column = "SMALL_DESCRIPTION"), @FieldResult(name = "teamLeader", column = "SMALL_TEAMLEAD"), @FieldResult(name = "version", column = "SMALL_VERSION") }, discriminatorColumn="SMALL_DESCRIM" ) } ) public Project() { .... }
@SqlResultSetMapping( name = "AddressResultSetMapping", entities = { @EntityResult(entityClass = Address.class) } ) public Address() { .... }
Execute the query:
StoredProcedureQuery spq = createEntityManager().createNamedStoredProcedureQuery("ReadUsingMultipleResultSetMappings");

// Read the first result set mapping --> Employee
List employeeResults = spq.getResultList();

// Read the second result set mapping --> Address
assertTrue("Address results not available", spq.hasMoreResults());
List addressResults = spq.getResultList();

// Read the third result set mapping --> Project
assertTrue("Projects results not available", spq.hasMoreResults());
List projectResults = spq.getResultList();

// Read the fourth result set mapping --> Employee Constructor Result
assertTrue("Employee constructor results not available", spq.hasMoreResults());
List employeeConstructorResults = spq.getResultList();

// Verify there are no more results available
assertFalse("More results available", spq.hasMoreResults());
The SQL specification and many databases provide SQL functions that are not covered by the JPA specification. With JPA 2.1, the ability to call such functions was added to the JPQL syntax. The FUNCTION keyword may be used to invoke predefined or user-defined database functions.
SELECT e FROM Employee e WHERE FUNCTION('isLongTermEmployee', e.startDate)
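The same call can also be expressed through the Criteria API using CriteriaBuilder.function(). A minimal sketch, assuming the isLongTermEmployee database function returns a boolean:

CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Employee> cq = cb.createQuery(Employee.class);
Root<Employee> e = cq.from(Employee.class);
// FUNCTION('isLongTermEmployee', e.startDate) expressed through the Criteria API
cq.where(cb.isTrue(cb.function("isLongTermEmployee", Boolean.class, e.get("startDate"))));
List<Employee> longTermEmployees = em.createQuery(cq).getResultList();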
Entity Listeners now support the Contexts and Dependency Injection API (CDI) when used inside a Java EE container. This support allows entity listeners to use CDI to inject objects and also provides support for @PostConstruct and @PreDestroy method calls.
The following example shows how a session bean can be injected into an EntityListener:
public class LoggerEntityListener {

    @EJB
    protected LoggerBean logger;

    @PrePersist
    public void prePersist(Object object) {
        logger.log("prepersist", object);
    }

    @PostPersist
    public void postPersist(Object object) {
        logger.log("postpersist", object);
    }

    @PreDestroy
    public void preDestroy() {
        logger.close();
    }

    @PostConstruct
    public void postConstruct() {
        logger.initialize();
    }
}
@Entity
@EntityListeners({LoggerEntityListener.class})
public class MyLoggedEntity {
    ...
}
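Since the listener is instantiated through CDI, plain CDI field injection should also work in it. A minimal sketch, assuming a hypothetical CDI-managed AuditService bean (not part of the original example):

import javax.inject.Inject;
import javax.persistence.PrePersist;

public class AuditEntityListener {

    // CDI injection into the listener; AuditService is an assumed CDI-managed bean
    @Inject
    private AuditService auditService;

    @PrePersist
    public void prePersist(Object entity) {
        auditService.record("prepersist", entity);
    }
}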
Allows path expressions to be treated as a subclass, giving access to subclass-specific state.
The following Java Persistence query language statement returns all Persons with a car that is a SportsCar with a maxSpeed of 200:
"Select p from Person p join fetch p.car join treat(p.car as SportsCar) s where s.maxSpeed = 200"
The following statement returns all maxSpeed values from the referenced SportsCar instances:
Select s.maxSpeed from Person p join treat(p.car as SportsCar) s
The following is equivalent to the first JPQL example above, returning Persons with a SportsCar having a maxSpeed of 200:
CriteriaBuilder qb = em.getCriteriaBuilder();
CriteriaQuery<Person> cq = qb.createQuery(Person.class);
Root<Person> root = cq.from(Person.class);
root.fetch("car");
Join<Person, SportsCar> s = qb.treat(root.join("car"), SportsCar.class);
cq.where(qb.equal(s.get("maxSpeed"), 200));
The following is equivalent to the second JPQL example, returning all maxSpeed values from the referenced SportsCar instances:
CriteriaBuilder qb = em.getCriteriaBuilder();
CriteriaQuery<Integer> cq = qb.createQuery(Integer.class);
Root<Person> root = cq.from(Person.class);
Join<Person, SportsCar> s = qb.treat(root.join("car"), SportsCar.class);
cq.select(s.<Integer>get("maxSpeed"));
Provides control over the conversion between an attribute type and value and the corresponding database type and value.
Users must first define a class to act as the converter. To do so, the class must implement the javax.persistence.AttributeConverter<X, Y> interface.
public interface AttributeConverter<X, Y> {

    /**
     * Converts the value stored in the entity attribute into the
     * data representation to be stored in the database.
     *
     * @param attribute the entity attribute value to be converted
     * @return the converted data to be stored in the database column
     */
    public Y convertToDatabaseColumn(X attribute);

    /**
     * Converts the data stored in the database column into the
     * value to be stored in the entity attribute.
     * Note that it is the responsibility of the converter writer to
     * specify the correct dbData type for the corresponding column
     * for use by the JDBC driver: i.e., persistence providers are
     * not expected to do such type conversion.
     *
     * @param dbData the data from the database column to be converted
     * @return the converted value to be stored in the entity attribute
     */
    public X convertToEntityAttribute(Y dbData);
}
The class must then be marked as a converter class through an annotation or XML declaration.
@Converter(autoApply = true)
public class LongToStringConverter implements AttributeConverter<Long, String> {

    @Override
    public String convertToDatabaseColumn(Long attribute) {
        return (attribute == null) ? null : attribute.toString();
    }

    @Override
    public Long convertToEntityAttribute(String dbData) {
        return (dbData == null) ? null : Long.valueOf(dbData);
    }
}
Note the autoApply flag set to true. This flag indicates that the converter should be applied to every attribute of type Long within the persistence unit. Without this setting, converters must be set explicitly with a Convert specification, e.g.:
@Convert(converter = LongToStringConverter.class)
protected Long salary;
Alternatively, when auto-apply is on, conversion can be turned off using Convert metadata for individual attributes that should not use the converter.
Annotation example:
@Convert(disableConversion = true)
protected Long salary;
XML example:

<!-- equivalent orm.xml mapping for the annotation above -->
<basic name="salary">
    <convert disable-conversion="true"/>
</basic>
Converters can be applied at several levels. The simplest, described above, applies a single converter to a single attribute. Multiple converters can be applied through the use of Converts metadata, which is of value for embedded attributes and when converters are needed on a mapped superclass on a per-entity basis.
@Embedded
@Converts({
    @Convert(attributeName = "level", converter = LevelConverter.class),
    @Convert(attributeName = "health", converter = HealthConverter.class),
    @Convert(attributeName = "status.runningStatus", converter = RunningStatusConverter.class)
})
protected RunnerInfo info;
Notice the multiple Convert annotations for different attributes of the embedded object. The dot notation also allows you to traverse nested embedded mappings as needed.
Converters can also be applied to Embeddable keys of "to-many" mappings. For example:
@OneToMany(mappedBy="race") @Converts({ // Add this convert to avoid the auto apply setting to a Long. @Convert(attributeName="key.uniqueIdentifier", disableConversion=true), @Convert(attributeName="key.description", converter=ResponsibilityConverter.class) }) protected Maporganizers; @Embeddable public class Responsibility { public Long uniqueIdentifier; public String description; ... }
Converters may also be applied to element collections (key and value) as needed. For example:
@ElementCollection
@Converts({
    @Convert(attributeName = "key", converter = DistanceConverter.class),
    @Convert(converter = TimeConverter.class)
})
protected Map<String, String> personalBests; // key/value types assumed; generics were lost in the original formatting
Here the DistanceConverter is applied to the keys of the personalBests map and the TimeConverter to the values of the map.
Converters may also be applied at the class level to connect converters to attributes of a mapped superclass from a subclass Entity. For example:
@Entity
@Converts({
    @Convert(attributeName = "accomplishments.key", converter = AccomplishmentConverter.class),
    @Convert(attributeName = "accomplishments", converter = DateConverter.class),
    @Convert(attributeName = "age", converter = AgeConverter.class)
})
public class Runner extends Athlete {
    ...
}

@MappedSuperclass
public class Athlete {

    protected Integer age;

    @ElementCollection
    // The subclass (Runner) adds converts to both the key and the value
    protected Map<String, Date> accomplishments; // key/value types assumed; generics were lost in the original formatting
    ...
}
In previous versions of JPA, DDL generation was available from providers but was not standardized or required. JPA 2.1 has added standardized provider DDL generation and made DDL generation a requirement.
A summary of the enabling DDL properties is as follows:
- javax.persistence.schema-generation.database.action – the action (none, create, drop-and-create, drop) to take against the database.
- javax.persistence.schema-generation.scripts.action – the action for which DDL scripts should be generated.
- javax.persistence.schema-generation.scripts.create-target / javax.persistence.schema-generation.scripts.drop-target – the targets for the generated create and drop scripts.
- javax.persistence.schema-generation.create-source / javax.persistence.schema-generation.drop-source – whether the object/relational metadata, a script, or a combination of both is used as the source of generation.
- javax.persistence.schema-generation.create-script-source / javax.persistence.schema-generation.drop-script-source – the scripts to use when a script source is specified.
Any combination of those properties can be passed through a Persistence.generateSchema() call, through a Persistence.createEntityManagerFactory() call or directly in the persistence unit definition in the persistence.xml.
<persistence-unit name="default">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <mapping-file>META-INF/advanced-ddl-orm.xml</mapping-file>
    <exclude-unlisted-classes>true</exclude-unlisted-classes>
    <properties>
        <!-- javax.persistence.schema-generation.* properties go here, for example: -->
        <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
    </properties>
</persistence-unit>
Map<String, Object> properties = new HashMap<String, Object>();
properties.put("javax.persistence.database-product-name", "Oracle");
properties.put("javax.persistence.database-major-version", 12);
properties.put("javax.persistence.database-minor-version", 1);
properties.put("javax.persistence.schema-generation.scripts.action", "drop-and-create");
properties.put("javax.persistence.schema-generation.scripts.drop-target", "jpa21-generate-schema-no-connection-drop.jdbc");
properties.put("javax.persistence.schema-generation.scripts.create-target", "jpa21-generate-schema-no-connection-create.jdbc");

Persistence.generateSchema("default", properties);
NOTE: All the JPA DDL persistence unit properties are available statically from org.eclipse.persistence.config.PersistenceUnitProperties.SCHEMA_GENERATION*
Map<String, Object> properties = new HashMap<String, Object>();
properties.put(PersistenceUnitProperties.SCHEMA_GENERATION_DATABASE_ACTION, PersistenceUnitProperties.SCHEMA_GENERATION_DROP_AND_CREATE_ACTION);
properties.put(PersistenceUnitProperties.SCHEMA_GENERATION_CREATE_SOURCE, PersistenceUnitProperties.SCHEMA_GENERATION_SCRIPT_SOURCE);
properties.put(PersistenceUnitProperties.SCHEMA_GENERATION_CREATE_SCRIPT_SOURCE, new FileReader(new File(createSource)));
properties.put(PersistenceUnitProperties.SCHEMA_GENERATION_DROP_SOURCE, PersistenceUnitProperties.SCHEMA_GENERATION_SCRIPT_SOURCE);
properties.put(PersistenceUnitProperties.SCHEMA_GENERATION_DROP_SCRIPT_SOURCE, new FileReader(new File(dropSource)));

Persistence.createEntityManagerFactory("default", properties);
Dynamic queries can now be added to the persistence unit as named queries through EntityManagerFactory.addNamedQuery(String name, Query query) and later retrieved through EntityManager.createNamedQuery(...). Configuration elements of the query, such as query hints, flush mode and lock mode, are retained in the named query as configured at the point the named query was added, but parameter values are not retained. If a named query with the same name is already registered, it is replaced by the newly added query. Once retrieved, any configuration changes to a named query will not be reflected in subsequent retrievals unless the named query is updated through addNamedQuery(...).
Query query = em.createQuery("Select e from Employee e where e.firstName = :p1 order by e.id");
query.setParameter("p1", name);
query.setMaxResults(15);
factory.addNamedQuery("Select_Employee_by_first_name", query);

em = factory.createEntityManager();
Query namedQuery = em.createNamedQuery("Select_Employee_by_first_name");
assertFalse(namedQuery.isBound(namedQuery.getParameter("p1"))); // <- parameter value not retained
assertTrue(namedQuery.getMaxResults() == 15); // <- max results configuration retained

// Configuration updates to retrieved queries are not reflected in the named query
// unless the new configuration is re-added through addNamedQuery(...)
namedQuery.setMaxResults(10);
namedQuery = em.createNamedQuery("Select_Employee_by_first_name");
assertTrue(namedQuery.getMaxResults() == 15);
Entity graphs are a means to specify the structure of a graph of entities using entity model metadata. An entity graph consists of representations of attributes and, in the case of multi-node entity graphs, additional subgraphs that represent the related entities. An entity graph can be specified through annotations:
@NamedEntityGraph(
    name = "ExecutiveProjects",
    attributeNodes = {
        @NamedAttributeNode("address"),
        @NamedAttributeNode(value = "projects", subgraph = "projects")
    },
    subgraphs = {
        @NamedSubgraph(
            name = "projects",
            attributeNodes = {@NamedAttributeNode("properties")}
        ),
        @NamedSubgraph(
            name = "projects",
            type = LargeProject.class,
            attributeNodes = {@NamedAttributeNode("executive")}
        )
    }
)
and later retrieved by name:
EntityGraph employeeGraph = em.getEntityGraph("ExecutiveProjects");
Entity graphs can be created dynamically from scratch:
EntityGraph<Employee> employeeGraph = em.createEntityGraph(Employee.class);
employeeGraph.addAttributeNodes("address");
employeeGraph.addSubgraph("projects").addAttributeNodes("properties");
employeeGraph.addSubgraph("projects", LargeProject.class).addAttributeNodes("executive");
or created from an existing named entity graph:
EntityGraph employeeGraph = em.createEntityGraph("ExecutiveProjects"); employeeGraph.addSubgraph("period").addAttributeNodes("startDate");
Once constructed or retrieved, entity graphs can be used as templates for certain EntityManager operations such as load and fetch. For instance, applying the entity graph as a fetch graph through a query hint will cause EclipseLink to load only those attributes present in the entity graph; unlisted attributes are treated as fetchType=LAZY.
EntityGraph employeeGraph = em.getEntityGraph("ExecutiveProjects");
Employee result = (Employee) em.createQuery("Select e from Employee e").setHint("javax.persistence.fetchgraph", employeeGraph).getResultList().get(0);

PersistenceUnitUtil util = em.getEntityManagerFactory().getPersistenceUnitUtil();
assertFalse(util.isLoaded(result, "firstName"));
assertFalse(util.isLoaded(result, "department"));
assertTrue(util.isLoaded(result, "projects"));
The entity graph can also be used to force an entity subgraph to be loaded at query time with the query hint "javax.persistence.loadgraph". When a load graph is applied, all listed attributes will be loaded by the query and any unlisted attributes will be loaded according to their mapping fetchType settings.
EntityGraph employeeGraph = em.getEntityGraph("ExecutiveProjects");
Employee result = (Employee) em.createQuery("Select e from Employee e").setHint("javax.persistence.loadgraph", employeeGraph).getResultList().get(0);

PersistenceUnitUtil util = em.getEntityManagerFactory().getPersistenceUnitUtil();
assertTrue(util.isLoaded(result, "firstName"));
assertFalse(util.isLoaded(result, "department"));
assertTrue(util.isLoaded(result, "projects"));
When working with JTA entity managers in previous versions of the Java Persistence API, the persistence context was automatically synchronized with the transaction, and any changes to entities managed by the persistence context were written to the database when the transaction committed. JPA 2.1 adds a new synchronization type, @PersistenceContext(synchronization=SynchronizationType.UNSYNCHRONIZED), which allows the persistence context to propagate along with the active JTA transaction without being synchronized to it. For an unsynchronized persistence context, changes to managed entities will not be written to the database unless the persistence context is explicitly joined to the transaction by the application through the EntityManager.joinTransaction() call. This can be useful when an application needs access to transactional resources but does not wish to write to the database until some later point, perhaps after multiple transactions have completed.
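A minimal sketch of how this could look in a stateful session bean; the OrderBuilderBean name and method bodies are illustrative and not from the original page:

import javax.ejb.Stateful;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceContextType;
import javax.persistence.SynchronizationType;

@Stateful
public class OrderBuilderBean {

    // Propagates with the active JTA transaction but is not synchronized to it
    @PersistenceContext(type = PersistenceContextType.EXTENDED,
                        synchronization = SynchronizationType.UNSYNCHRONIZED)
    private EntityManager em;

    public void addItem(Object entity) {
        // Changes are tracked by the persistence context, but are not flushed
        // when the surrounding JTA transaction commits
        em.persist(entity);
    }

    public void checkout() {
        // Explicitly join the current JTA transaction; the pending changes
        // are written to the database when this transaction commits
        em.joinTransaction();
    }
}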