A Map container stores key-value pairs. Because a Map is indexed by its keys, a key cannot appear twice, but the same value may be stored more than once. This article gives a brief tour of the Map implementations, including HashMap, LinkedHashMap, WeakHashMap, Hashtable, IdentityHashMap and TreeMap. Note that the Map interface does not extend the Collection interface!
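For example (a minimal sketch using only the standard java.util API; the class name MapKeyValueDemo is just for illustration), putting a second value under an existing key replaces the previous mapping, while the same value may happily live under several different keys:

import java.util.HashMap;
import java.util.Map;

public class MapKeyValueDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<String, String>();
        map.put("a", "first");
        map.put("a", "second");   // same key: the old value is replaced
        map.put("b", "second");   // the same value under another key is fine
        System.out.println(map.size());     // 2, not 3
        System.out.println(map.get("a"));   // second
    }
}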
1. HashMap
HashMap extends AbstractMap and implements the Cloneable and java.io.Serializable interfaces, while AbstractMap in turn implements Map. HashMap is not synchronized, which means it is not thread-safe. Both its keys and its values may be null. In addition, HashMap depends on the hash values of its keys, so it is unordered; internally it stores its data in an array whose elements are linked lists, with the array index for an entry derived from the key's hash code.
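Before diving into the source, here is a short sketch of those properties (standard API only; HashMapPropertiesDemo is an illustrative name): one null key and any number of null values are accepted, and the iteration order follows the keys' hash values rather than the insertion order.

import java.util.HashMap;
import java.util.Map;

public class HashMapPropertiesDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<String, Integer>();
        m.put(null, 0);        // a single null key is allowed
        m.put("x", null);      // null values are allowed
        m.put("orange", 1);
        m.put("apple", 2);
        m.put("banana", 3);
        // The order of this loop depends on the hash values of the keys,
        // not on the order in which the entries were inserted.
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}

The relevant declarations from the JDK source are shown below.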
public class HashMap<K,V>
extends AbstractMap<K,V>
implements Map<K,V>, Cloneable, Serializable
{
public abstract class AbstractMap<K,V> implements Map<K,V> {
/**
* Sole constructor. (For invocation by subclass constructors, typically
* implicit.)
*/
protected AbstractMap() {
}
public interface Map<K,V> {
// Query Operations
/**
* Returns the number of key-value mappings in this map. If the
* map contains more than Integer.MAX_VALUE elements, returns
* Integer.MAX_VALUE.
*
* @return the number of key-value mappings in this map
*/
int size();
public class HashMap<K,V>
extends AbstractMap<K,V>
implements Map<K,V>, Cloneable, Serializable
{
/**
* The default initial capacity - MUST be a power of two.
*/
static final int DEFAULT_INITIAL_CAPACITY = 16;
/**
* The maximum capacity, used if a higher value is implicitly specified
* by either of the constructors with arguments.
* MUST be a power of two <= 1<<30.
*/
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* The load factor used when none specified in constructor.
*/
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
* The table, resized as necessary. Length MUST Always be a power of two.
*/
transient Entry[] table;
/**
* The number of key-value mappings contained in this map.
*/
transient int size;
/**
* The next size value at which to resize (capacity * load factor).
* @serial
*/
int threshold;
/**
* The load factor for the hash table.
*
* @serial
*/
final float loadFactor;
/**
* The number of times this HashMap has been structurally modified
* Structural modifications are those that change the number of mappings in
* the HashMap or otherwise modify its internal structure (e.g.,
* rehash). This field is used to make iterators on Collection-views of
* the HashMap fail-fast. (See ConcurrentModificationException).
*/
transient volatile int modCount;
/**
* Constructs an empty HashMap with the specified initial
* capacity and load factor.
*
* @param initialCapacity the initial capacity
* @param loadFactor the load factor
* @throws IllegalArgumentException if the initial capacity is negative
* or the load factor is nonpositive
*/
public HashMap(int initialCapacity, float loadFactor) {
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal initial capacity: " +
initialCapacity);
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal load factor: " +
loadFactor);
// Find a power of 2 >= initialCapacity
int capacity = 1;
while (capacity < initialCapacity)
capacity <<= 1;
this.loadFactor = loadFactor;
threshold = (int)(capacity * loadFactor);
table = new Entry[capacity];
init();
}
static class Entry<K,V> implements Map.Entry<K,V> {
final K key;
V value;
Entry<K,V> next;
final int hash;
/**
* Creates new entry.
*/
Entry(int h, K k, V v, Entry<K,V> n) {
value = v;
next = n;
key = k;
hash = h;
}
An instance of HashMap has two parameters that affect its performance: the initial capacity and the load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the moment the table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries exceeds the product of the load factor and the current capacity, the table is resized (that is, its internal data structure is rebuilt) so that it has roughly twice as many buckets. The default load factor of 0.75 is a compromise between time and space costs.
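To make the formula concrete (a minimal sketch; CapacityDemo is an illustrative name, and the numbers follow directly from threshold = capacity * loadFactor): with the defaults the table has 16 buckets and a threshold of 16 * 0.75 = 12, so inserting the 13th mapping triggers a resize to 32 buckets, and pre-sizing the map avoids most of those rehash passes.

import java.util.HashMap;
import java.util.Map;

public class CapacityDemo {
    public static void main(String[] args) {
        // Defaults: 16 buckets, load factor 0.75f, threshold 16 * 0.75 = 12.
        Map<Integer, String> grown = new HashMap<Integer, String>();
        for (int i = 0; i < 13; i++) {
            grown.put(i, "v" + i);   // the 13th put resizes the table to 32 buckets
        }

        // Requesting capacity 100 rounds up to the next power of two (128),
        // giving a threshold of 128 * 0.75 = 96 and far fewer rehashes
        // than growing step by step from 16.
        Map<Integer, String> presized = new HashMap<Integer, String>(100, 0.75f);
        for (int i = 0; i < 100; i++) {
            presized.put(i, "v" + i);
        }
        System.out.println(grown.size() + " / " + presized.size()); // 13 / 100
    }
}

The put path that drives this growth is shown next.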
public V put(K key, V value) {
if (key == null)
return putForNullKey(value);
int hash = hash(key.hashCode());
int i = indexFor(hash, table.length);
for (Entry<K,V> e = table[i]; e != null; e = e.next) {
Object k;
if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
V oldValue = e.value;
e.value = value;
e.recordAccess(this);
return oldValue;
}
}
modCount++;
addEntry(hash, key, value, i);
return null;
}
void addEntry(int hash, K key, V value, int bucketIndex) {
Entry<K,V> e = table[bucketIndex];
table[bucketIndex] = new Entry<K,V>(hash, key, value, e); // link the new Entry in at the head of this bucket's list
if (size++ >= threshold)
resize(2 * table.length);
}
void resize(int newCapacity) {
Entry[] oldTable = table;
int oldCapacity = oldTable.length;
if (oldCapacity == MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return;
}
Entry[] newTable = new Entry[newCapacity];
transfer(newTable);
table = newTable;
threshold = (int)(newCapacity * loadFactor);
}
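This is also where the "MUST be a power of two" requirement on the capacity pays off. In put above, the key's hash code is first spread out by hash(key.hashCode()) and then mapped to a bucket by indexFor(hash, table.length), which simply masks the hash with length - 1; that mask is equivalent to a modulo operation only when the length is a power of two. A minimal sketch of the idea (IndexForDemo is an illustrative name; the indexFor body mirrors the JDK's one-liner):

public class IndexForDemo {
    // With a power-of-two length, (length - 1) is an all-ones bit mask,
    // so h & (length - 1) keeps the low bits of the hash and always
    // produces an index in the range [0, length).
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int capacity = 16;                            // power of two, mask = 0b1111
        int h = "hello".hashCode();
        System.out.println(indexFor(h, capacity));    // a bucket index in [0, 16)
        // If the capacity were not a power of two, the mask would contain
        // zero bits and some buckets could never be selected at all.
    }
}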
2. LinkedHashMap
LinkedHashMap extends HashMap. Compared with HashMap, a LinkedHashMap additionally maintains a doubly linked list running through all of its entries, so it has a predictable iteration order: entries are returned in the order in which they were inserted (every traversal walks this doubly linked list), whereas a plain HashMap returns its entries in an effectively arbitrary order. When a predictable iteration order is needed with performance close to that of HashMap, LinkedHashMap is usually the better choice. Like HashMap, LinkedHashMap is not thread-safe.
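A quick sketch of the difference (standard API only; IterationOrderDemo is an illustrative name): the LinkedHashMap returns its keys in insertion order, while the HashMap's order depends on the hash values.

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class IterationOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hash = new HashMap<String, Integer>();
        Map<String, Integer> linked = new LinkedHashMap<String, Integer>();
        String[] keys = {"one", "two", "three", "four"};
        for (int i = 0; i < keys.length; i++) {
            hash.put(keys[i], i);
            linked.put(keys[i], i);
        }
        System.out.println(hash.keySet());   // order depends on the hash values
        System.out.println(linked.keySet()); // [one, two, three, four]
    }
}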
As a friendly tip, LinkedHashMap makes it very easy to implement an LRU cache; the key step is overriding the boolean removeEldestEntry(Map.Entry eldest) method, which should return true if the eldest entry ought to be removed from the map and false if it should be kept (a sketch follows the addEntry listing below).
public class LinkedHashMap<K,V>
extends HashMap<K,V>
implements Map<K,V>
{
private static final long serialVersionUID = 3801124242820219131L;
/**
* The head of the doubly linked list.
*/
private transient Entry<K,V> header; // head of the extra doubly linked list that LinkedHashMap maintains on top of HashMap
/**
* This override alters behavior of superclass put method. It causes newly
* allocated entry to get inserted at the end of the linked list and
* removes the eldest entry if appropriate.
*/
void addEntry(int hash, K key, V value, int bucketIndex) {
createEntry(hash, key, value, bucketIndex);
// Remove eldest entry if instructed, else grow capacity if appropriate
Entry<K,V> eldest = header.after;
if (removeEldestEntry(eldest)) {
removeEntryForKey(eldest.key);
} else {
if (size >= threshold)
resize(2 * table.length);
}
}
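Building on the removeEldestEntry hook mentioned earlier, here is a minimal LRU cache sketch (LruCache and maxEntries are illustrative names, not part of the JDK): the three-argument constructor's accessOrder = true flag makes every get() move the touched entry to the tail of the linked list, and the removeEldestEntry override evicts the head, i.e. the least recently used entry, once the size limit is exceeded.

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder = true switches the linked list from insertion order
        // to access order, so the least recently used entry sits at the head.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true tells LinkedHashMap to drop the eldest entry
        // right after each new insertion once the cache is over capacity.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<Integer, String>(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);          // touch key 1 so it becomes most recently used
        cache.put(3, "three"); // evicts key 2, the least recently used entry
        System.out.println(cache.keySet()); // [1, 3]
    }
}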