
What the hash_search_with_hash_value Function Does in PostgreSQL


This article explains what the hash_search_with_hash_value function does in PostgreSQL: the data structures it works with, a walkthrough of its source code, and a GDB tracing session you can reproduce yourself.

I. Data Structures

BufferDesc
Shared descriptor (state) data for a single shared buffer.

/*
 * Flags for buffer descriptors
 *
 * Note: TAG_VALID essentially means that there is a buffer hashtable
 * entry associated with the buffer's tag.
 */
#define BM_LOCKED               (1U << 22)  /* buffer header is locked */
#define BM_DIRTY                (1U << 23)  /* data needs writing */
#define BM_VALID                (1U << 24)  /* data is valid */
#define BM_TAG_VALID            (1U << 25)  /* tag is assigned */
#define BM_IO_IN_PROGRESS       (1U << 26)  /* read or write in progress */
#define BM_IO_ERROR             (1U << 27)  /* previous I/O failed */
#define BM_JUST_DIRTIED         (1U << 28)  /* dirtied since write started */
#define BM_PIN_COUNT_WAITER     (1U << 29)  /* have waiter for sole pin */
#define BM_CHECKPOINT_NEEDED    (1U << 30)  /* must write for checkpoint */
#define BM_PERMANENT            (1U << 31)  /* permanent buffer (not unlogged,
                                             * or init fork) */

/*
 *  BufferDesc -- shared descriptor/state data for a single shared buffer.
 *
 * Note: Buffer header lock (BM_LOCKED flag) must be held to examine or change
 * the tag, state or wait_backend_pid fields.  In general, buffer header lock
 * is a spinlock which is combined with flags, refcount and usagecount into
 * single atomic variable.  This layout allow us to do some operations in a
 * single atomic operation, without actually acquiring and releasing spinlock;
 * for instance, increase or decrease refcount.  buf_id field never changes
 * after initialization, so does not need locking.  freeNext is protected by
 * the buffer_strategy_lock not buffer header lock.  The LWLock can take care
 * of itself.  The buffer header lock is *not* used to control access to the
 * data in the buffer!
 *
 * It's assumed that nobody changes the state field while buffer header lock
 * is held.  Thus buffer header lock holder can do complex updates of the
 * state variable in single write, simultaneously with lock release (cleaning
 * BM_LOCKED flag).  On the other hand, updating of state without holding
 * buffer header lock is restricted to CAS, which insure that BM_LOCKED flag
 * is not set.  Atomic increment/decrement, OR/AND etc. are not allowed.
 *
 * An exception is that if we have the buffer pinned, its tag can't change
 * underneath us, so we can examine the tag without locking the buffer header.
 * Also, in places we do one-time reads of the flags without bothering to
 * lock the buffer header; this is generally for situations where we don't
 * expect the flag bit being tested to be changing.
 *
 * We can't physically remove items from a disk page if another backend has
 * the buffer pinned.  Hence, a backend may need to wait for all other pins
 * to go away.  This is signaled by storing its own PID into
 * wait_backend_pid and setting flag bit BM_PIN_COUNT_WAITER.  At present,
 * there can be only one such waiter per buffer.
 *
 * We use this same struct for local buffer headers, but the locks are not
 * used and not all of the flag bits are useful either. To avoid unnecessary
 * overhead, manipulations of the state field should be done without actual
 * atomic operations (i.e. only pg_atomic_read_u32() and
 * pg_atomic_unlocked_write_u32()).
 *
 * Be careful to avoid increasing the size of the struct when adding or
 * reordering members.  Keeping it below 64 bytes (the most common CPU
 * cache line size) is fairly important for performance.
 */
typedef struct BufferDesc
{
    BufferTag   tag;            /* ID of page contained in buffer */
    int         buf_id;         /* buffer's index number (from 0); identifies
                                 * the corresponding buffer pool slot */

    /* state of the tag, containing flags, refcount and usagecount */
    pg_atomic_uint32 state;

    int         wait_backend_pid;   /* backend PID of pin-count waiter */
    int         freeNext;       /* link in freelist chain */

    LWLock      content_lock;   /* to lock access to buffer contents */
} BufferDesc;
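For reference, this is how the single atomic state word is carved up in the PostgreSQL 11-era buf_internals.h (check your version's header for the authoritative definitions): the refcount lives in bits 0-17, the usage count in bits 18-21, and the BM_* flags above in bits 22-31, which is why BM_LOCKED starts at bit 22.

/* From buf_internals.h: layout of the packed buffer state word */
#define BUF_REFCOUNT_MASK    ((1U << 18) - 1)   /* bits 0-17: refcount */
#define BUF_USAGECOUNT_MASK  0x003C0000U        /* bits 18-21: usage count */
#define BUF_USAGECOUNT_SHIFT 18
#define BUF_FLAG_MASK        0xFFC00000U        /* bits 22-31: BM_* flags */

/* Get refcount and usagecount from buffer state */
#define BUF_STATE_GET_REFCOUNT(state)   ((state) & BUF_REFCOUNT_MASK)
#define BUF_STATE_GET_USAGECOUNT(state) \
    (((state) & BUF_USAGECOUNT_MASK) >> BUF_USAGECOUNT_SHIFT)

Because all three live in one uint32, a pin (refcount increment) can be done with a single compare-and-swap instead of taking and releasing the header spinlock, exactly as the comment above describes.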

BufferTag
A buffer tag identifies which disk block the buffer contains.

/*
 * Buffer tag identifies which disk block the buffer contains.
 *
 * Note: the BufferTag data must be sufficient to determine where to write the
 * block, without reference to pg_class or pg_tablespace entries.  It's
 * possible that the backend flushing the buffer doesn't even believe the
 * relation is visible yet (its xact may have started before the xact that
 * created the rel).  The storage manager must be able to cope anyway.
 *
 * Note: if there's any pad bytes in the struct, INIT_BUFFERTAG will have
 * to be fixed to zero them, since this struct is used as a hash key.
 */
typedef struct buftag
{
    RelFileNode rnode;          /* physical relation identifier */
    ForkNumber  forkNum;
    BlockNumber blockNum;       /* blknum relative to begin of reln */
} BufferTag;
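Because the tag doubles as the hash key, it must be fully deterministic (padding bytes included) before being hashed. A minimal sketch of how a tag is built and hashed before probing the buffer mapping table; BufTableHashCode() and BufTableLookup() are the real wrappers in buf_table.c, while demo_probe, reln and blockNum are illustrative stand-ins:

#include "postgres.h"
#include "storage/buf_internals.h"
#include "utils/rel.h"

/* Sketch: build a buffer tag and look it up in the buffer table.
 * In the real buffer manager the caller holds the buffer mapping
 * partition lock around the lookup. */
static int
demo_probe(Relation reln, BlockNumber blockNum)
{
    BufferTag   newTag;         /* identity of the requested block */
    uint32      newHash;        /* hash value for newTag */

    /* zero first so any padding bytes are deterministic: the whole
     * struct is used as the hash key (the real code relies on
     * INIT_BUFFERTAG and a padding-free struct instead) */
    MemSet(&newTag, 0, sizeof(BufferTag));
    newTag.rnode = reln->rd_node;
    newTag.forkNum = MAIN_FORKNUM;
    newTag.blockNum = blockNum;

    /* hash with the shared buffer table's hash function; the result
     * is what gets passed to hash_search_with_hash_value() */
    newHash = BufTableHashCode(&newTag);

    /* returns the buffer ID, or -1 if the page is not cached */
    return BufTableLookup(&newTag, newHash);
}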

HTAB
The top-level control structure for a hash table.

/*
 * Top control structure for a hashtable --- in a shared table, each backend
 * has its own copy (OK since no fields change at runtime)
 */
struct HTAB
{
    HASHHDR    *hctl;           /* => shared control information */
    HASHSEGMENT *dir;           /* directory of segment starts */
    HashValueFunc hash;         /* hash function */
    HashCompareFunc match;      /* key comparison function */
    HashCopyFunc keycopy;       /* key copying function */
    HashAllocFunc alloc;        /* memory allocator */
    MemoryContext hcxt;         /* memory context if default allocator used */
    char       *tabname;        /* table name (for error messages) */
    bool        isshared;       /* true if table is in shared memory */
    bool        isfixed;        /* if true, don't enlarge */

    /* freezing a shared table isn't allowed, so we can keep state here */
    bool        frozen;         /* true = no more inserts allowed */

    /* We keep local copies of these fixed values to reduce contention */
    Size        keysize;        /* hash key length in bytes */
    long        ssize;          /* segment size --- must be power of 2 */
    int         sshift;         /* segment shift = log2(ssize) */
};

/*
 * Header structure for a hash table --- contains all changeable info
 *
 * In a shared-memory hash table, the HASHHDR is in shared memory, while
 * each backend has a local HTAB struct.  For a non-shared table, there isn't
 * any functional difference between HASHHDR and HTAB, but we separate them
 * anyway to share code between shared and non-shared tables.
 */
struct HASHHDR
{
    /*
     * The freelist can become a point of contention in high-concurrency hash
     * tables, so we use an array of freelists, each with its own mutex and
     * nentries count, instead of just a single one.  Although the freelists
     * normally operate independently, we will scavenge entries from freelists
     * other than a hashcode's default freelist when necessary.
     *
     * If the hash table is not partitioned, only freeList[0] is used and its
     * spinlock is not used at all; callers' locking is assumed sufficient.
     */
    FreeListData freeList[NUM_FREELISTS];

    /* These fields can change, but not in a partitioned table */
    /* Also, dsize can't change in a shared table, even if unpartitioned */
    long        dsize;          /* directory size */
    long        nsegs;          /* number of allocated segments (<= dsize) */
    uint32      max_bucket;     /* ID of maximum bucket in use */
    uint32      high_mask;      /* mask to modulo into entire table */
    uint32      low_mask;       /* mask to modulo into lower half of table */

    /* These fields are fixed at hashtable creation */
    Size        keysize;        /* hash key length in bytes */
    Size        entrysize;      /* total user element size in bytes */
    long        num_partitions; /* # partitions (must be power of 2), or 0 */
    long        ffactor;        /* target fill factor */
    long        max_dsize;      /* 'dsize' limit if directory is fixed size */
    long        ssize;          /* segment size --- must be power of 2 */
    int         sshift;         /* segment shift = log2(ssize) */
    int         nelem_alloc;    /* number of entries to allocate at once */

#ifdef HASH_STATISTICS
    /*
     * Count statistics here.  NB: stats code doesn't bother with mutex, so
     * counts could be corrupted a bit in a partitioned table.
     */
    long        accesses;
    long        collisions;
#endif
};

/*
 * HASHELEMENT is the private part of a hashtable entry.  The caller's data
 * follows the HASHELEMENT structure (on a MAXALIGN'd boundary).  The hash key
 * is expected to be at the start of the caller's hash entry data structure.
 */
typedef struct HASHELEMENT
{
    struct HASHELEMENT *link;   /* link to next entry in same bucket */
    uint32      hashvalue;      /* hash function result for this entry */
} HASHELEMENT;

/* Hash table header struct is an opaque type known only within dynahash.c */
typedef struct HASHHDR HASHHDR;

/* Hash table control struct is an opaque type known only within dynahash.c */
typedef struct HTAB HTAB;

/*
 * Parameter data structure for hash_create.
 * Only those fields indicated by hash_flags need be set.
 */
typedef struct HASHCTL
{
    long        num_partitions; /* # partitions (must be power of 2) */
    long        ssize;          /* segment size */
    long        dsize;          /* (initial) directory size */
    long        max_dsize;      /* limit to dsize if dir size is limited */
    long        ffactor;        /* fill factor */
    Size        keysize;        /* hash key length in bytes */
    Size        entrysize;      /* total user element size in bytes */
    HashValueFunc hash;         /* hash function */
    HashCompareFunc match;      /* key comparison function */
    HashCopyFunc keycopy;       /* key copying function */
    HashAllocFunc alloc;        /* memory allocator */
    MemoryContext hcxt;         /* memory context to use for allocations */
    HASHHDR    *hctl;           /* location of header in shared mem */
} HASHCTL;

/* A hash bucket is a linked list of HASHELEMENTs */
typedef HASHELEMENT *HASHBUCKET;

/* A hash segment is an array of bucket headers */
typedef HASHBUCKET *HASHSEGMENT;

/*
 * Hash functions must have this signature.
 */
typedef uint32 (*HashValueFunc) (const void *key, Size keysize);

/*
 * Key comparison functions must have this signature.  Comparison functions
 * return zero for match, nonzero for no match.  (The comparison function
 * definition is designed to allow memcmp() and strncmp() to be used directly
 * as key comparison functions.)
 */
typedef int (*HashCompareFunc) (const void *key1, const void *key2,
                                Size keysize);

/*
 * Key copying functions must have this signature.  The return value is not
 * used.  (The definition is set up to allow memcpy() and strlcpy() to be
 * used directly.)
 */
typedef void *(*HashCopyFunc) (void *dest, const void *src, Size keysize);

/*
 * Space allocation function for a hashtable --- designed to match malloc().
 * Note: there is no free function API; can't destroy a hashtable unless you
 * use the default allocator.
 */
typedef void *(*HashAllocFunc) (Size request);
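To see how these pieces fit together, here is a hedged sketch of creating and probing a dynahash table. hash_create, get_hash_value, HASH_ELEM and HASH_BLOBS are the real dynahash API; DemoEnt, demo_tab and the sizes are illustrative only:

#include "postgres.h"
#include "utils/hsearch.h"

typedef struct DemoEnt
{
    int32       key;            /* hash key must come first in the entry */
    int64       value;
} DemoEnt;

static HTAB *demo_tab;

static void
demo_init(void)
{
    HASHCTL     info;

    MemSet(&info, 0, sizeof(info));
    info.keysize = sizeof(int32);
    info.entrysize = sizeof(DemoEnt);

    /* HASH_BLOBS: hash/compare the raw key bytes (memcmp-style) */
    demo_tab = hash_create("demo table", 128, &info,
                           HASH_ELEM | HASH_BLOBS);
}

static int64
demo_lookup(int32 key)
{
    bool        found;
    uint32      hashcode = get_hash_value(demo_tab, &key);
    DemoEnt    *ent;

    /* same entry point the buffer manager uses for its mapping table */
    ent = (DemoEnt *) hash_search_with_hash_value(demo_tab, &key, hashcode,
                                                  HASH_FIND, &found);
    return found ? ent->value : -1;
}

The plain hash_search() entry point is just this function plus a call to the table's own hash function, so the two behave identically apart from who computes the hash value.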

FreeListData
In a partitioned hash table, each freelist is associated with a specific set of hashcodes, as determined by the FREELIST_IDX() macro. nentries tracks the number of live hashtable entries having those hashcodes (not the number of entries on the freelist itself).

/*
 * Per-freelist data.
 *
 * In a partitioned hash table, each freelist is associated with a specific
 * set of hashcodes, as determined by the FREELIST_IDX() macro below.
 * nentries tracks the number of live hashtable entries having those hashcodes
 * (NOT the number of entries in the freelist, as you might expect).
 *
 * The coverage of a freelist might be more or less than one partition, so it
 * needs its own lock rather than relying on caller locking.  Relying on that
 * wouldn't work even if the coverage was the same, because of the occasional
 * need to "borrow" entries from another freelist; see get_hash_entry().
 *
 * Using an array of FreeListData instead of separate arrays of mutexes,
 * nentries and freeLists helps to reduce sharing of cache lines between
 * different mutexes.
 */
typedef struct
{
    slock_t     mutex;          /* spinlock for this freelist */
    long        nentries;       /* number of entries in associated buckets */
    HASHELEMENT *freeList;      /* chain of free elements */
} FreeListData;
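The macro in question is small. As defined in the PostgreSQL 11-era dynahash.c, a non-partitioned table always maps to freelist 0 (which is why freelist_idx comes out 0 in the trace in section III), while a partitioned table spreads hashcodes across NUM_FREELISTS lists:

#define NUM_FREELISTS           32

/* A table is partitioned iff it was created with num_partitions > 0 */
#define IS_PARTITIONED(hctl)    ((hctl)->num_partitions != 0)

/* Map a hash code to its freelist: always 0 for non-partitioned tables */
#define FREELIST_IDX(hctl, hashcode) \
    (IS_PARTITIONED(hctl) ? (hashcode) % NUM_FREELISTS : 0)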

BufferLookupEnt
The entry type stored in the buffer lookup (buffer mapping) hash table, associating a BufferTag with a buffer ID.

/* entry for buffer lookup hashtable */
typedef struct
{
    BufferTag   key;            /* Tag of a disk page */
    int         id;             /* Associated buffer ID */
} BufferLookupEnt;
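For example, buf_table.c's lookup wrapper probes the shared buffer mapping table with exactly this entry type. The sketch below is reproduced from memory, so treat details as approximate:

int
BufTableLookup(BufferTag *tagPtr, uint32 hashcode)
{
    BufferLookupEnt *result;

    /* HASH_FIND with a precomputed hash; foundPtr may be NULL */
    result = (BufferLookupEnt *)
        hash_search_with_hash_value(SharedBufHash,
                                    (void *) tagPtr,
                                    hashcode,
                                    HASH_FIND,
                                    NULL);

    if (!result)
        return -1;              /* page is not in the buffer pool */

    return result->id;
}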

II. Source Code Walkthrough

hash_search_with_hash_value() is dynahash's core primitive for searching, inserting into, or deleting from a hash table when the caller has already computed the key's hash value; for an insert, if an entry with the given key already exists, it is returned untouched.
Its main logic:
1. Initialize local variables, e.g. derive the freelist index from the hash value.
2. If the action is an insert, check whether it is time to split a bucket (expand the table).
3. Do the initial lookup: compute the bucket number, the segment number, and the index within the segment.
4. Follow the bucket's collision chain looking for a matching key, and set *foundPtr from the result.
5. Dispatch on the requested action:
5.1 HASH_FIND: search only.
5.2 HASH_REMOVE: unlink the entry and return it to the freelist.
5.3 HASH_ENTER_NULL: assert the allocator permits returning NULL, then fall through to HASH_ENTER.
5.4 HASH_ENTER: return the existing element if found, otherwise create one (the buffer table's insert wrapper, sketched after this list, is a typical caller of this pattern).
6. Any unrecognized action is reported with elog(ERROR), and NULL is returned to keep the compiler quiet.
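For context, a sketch of BufTableInsert() from buf_table.c, reproduced from memory (treat details as approximate): given a tag and a buffer ID it inserts into the buffer mapping table, and if an entry for that tag already exists, it returns the existing buffer's ID without touching the entry.

int
BufTableInsert(BufferTag *tagPtr, uint32 hashcode, int buf_id)
{
    BufferLookupEnt *result;
    bool        found;

    Assert(buf_id >= 0);        /* -1 is reserved for not-in-table */

    result = (BufferLookupEnt *)
        hash_search_with_hash_value(SharedBufHash,
                                    (void *) tagPtr,
                                    hashcode,
                                    HASH_ENTER,
                                    &found);

    if (found)                  /* found something already in the table */
        return result->id;

    result->id = buf_id;        /* caller fills the data field on return */

    return -1;
}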

void *
hash_search_with_hash_value(HTAB *hashp,
                            const void *keyPtr,
                            uint32 hashvalue,
                            HASHACTION action,
                            bool *foundPtr)
{
    HASHHDR    *hctl = hashp->hctl; /* table header */
    int         freelist_idx = FREELIST_IDX(hctl, hashvalue);   /* which freelist */
    Size        keysize;        /* hash key length */
    uint32      bucket;         /* bucket number */
    long        segment_num;    /* segment number */
    long        segment_ndx;    /* index within the segment */
    HASHSEGMENT segp;           /* segment */
    HASHBUCKET  currBucket;     /* current entry in the chain */
    HASHBUCKET *prevBucketPtr;  /* pointer to the link we came from */
    HashCompareFunc match;      /* key comparison function */

#if HASH_STATISTICS
    hash_accesses++;
    hctl->accesses++;
#endif

    /*
     * If inserting, check if it is time to split a bucket.
     *
     * NOTE: failure to expand table is not a fatal error, it just means we
     * have to run at higher fill factor than we wanted.  However, if we're
     * using the palloc allocator then it will throw error anyway on
     * out-of-memory, so we must do this before modifying the table.
     */
    if (action == HASH_ENTER || action == HASH_ENTER_NULL)
    {
        /*
         * Can't split if running in partitioned mode, nor if frozen, nor if
         * table is the subject of any active hash_seq_search scans.  Strange
         * order of these tests is to try to check cheaper conditions first.
         */
        if (!IS_PARTITIONED(hctl) && !hashp->frozen &&
            hctl->freeList[0].nentries / (long) (hctl->max_bucket + 1) >= hctl->ffactor &&
            !has_seq_scans(hashp))
            (void) expand_table(hashp);
    }

    /*
     * Do the initial lookup
     */
    bucket = calc_bucket(hctl, hashvalue);

    segment_num = bucket >> hashp->sshift;
    segment_ndx = MOD(bucket, hashp->ssize);

    segp = hashp->dir[segment_num];

    if (segp == NULL)
        hash_corrupted(hashp);

    prevBucketPtr = &segp[segment_ndx];
    currBucket = *prevBucketPtr;

    /*
     * Follow collision chain looking for matching key
     */
    match = hashp->match;       /* save one fetch in inner loop */
    keysize = hashp->keysize;   /* ditto */

    while (currBucket != NULL)
    {
        if (currBucket->hashvalue == hashvalue &&
            match(ELEMENTKEY(currBucket), keyPtr, keysize) == 0)
            break;
        prevBucketPtr = &(currBucket->link);
        currBucket = *prevBucketPtr;
#if HASH_STATISTICS
        hash_collisions++;
        hctl->collisions++;
#endif
    }

    if (foundPtr)
        *foundPtr = (bool) (currBucket != NULL);

    /*
     * OK, now what?
     */
    switch (action)
    {
        case HASH_FIND:
            if (currBucket != NULL)
                return (void *) ELEMENTKEY(currBucket);
            return NULL;

        case HASH_REMOVE:
            if (currBucket != NULL)
            {
                /* if partitioned, must lock to touch nentries and freeList */
                if (IS_PARTITIONED(hctl))
                    SpinLockAcquire(&(hctl->freeList[freelist_idx].mutex));

                /* delete the record from the appropriate nentries counter. */
                Assert(hctl->freeList[freelist_idx].nentries > 0);
                hctl->freeList[freelist_idx].nentries--;

                /* remove record from hash bucket's chain. */
                *prevBucketPtr = currBucket->link;

                /* add the record to the appropriate freelist. */
                currBucket->link = hctl->freeList[freelist_idx].freeList;
                hctl->freeList[freelist_idx].freeList = currBucket;

                if (IS_PARTITIONED(hctl))
                    SpinLockRelease(&hctl->freeList[freelist_idx].mutex);

                /*
                 * better hope the caller is synchronizing access to this
                 * element, because someone else is going to reuse it the next
                 * time something is added to the table
                 */
                return (void *) ELEMENTKEY(currBucket);
            }
            return NULL;

        case HASH_ENTER_NULL:
            /* ENTER_NULL does not work with palloc-based allocator */
            Assert(hashp->alloc != DynaHashAlloc);
            /* FALL THRU */

        case HASH_ENTER:
            /* Return existing element if found, else create one */
            if (currBucket != NULL)
                return (void *) ELEMENTKEY(currBucket);

            /* disallow inserts if frozen */
            if (hashp->frozen)
                elog(ERROR, "cannot insert into frozen hashtable \"%s\"",
                     hashp->tabname);

            currBucket = get_hash_entry(hashp, freelist_idx);
            if (currBucket == NULL)
            {
                /* out of memory */
                if (action == HASH_ENTER_NULL)
                    return NULL;
                /* report a generic message */
                if (hashp->isshared)
                    ereport(ERROR,
                            (errcode(ERRCODE_OUT_OF_MEMORY),
                             errmsg("out of shared memory")));
                else
                    ereport(ERROR,
                            (errcode(ERRCODE_OUT_OF_MEMORY),
                             errmsg("out of memory")));
            }

            /* link into hashbucket chain */
            *prevBucketPtr = currBucket;
            currBucket->link = NULL;

            /* copy key into record */
            currBucket->hashvalue = hashvalue;
            hashp->keycopy(ELEMENTKEY(currBucket), keyPtr, keysize);

            /*
             * Caller is expected to fill the data field on return.  DO NOT
             * insert any code that could possibly throw error here, as doing
             * so would leave the table entry incomplete and hence corrupt the
             * caller's data structure.
             */
            return (void *) ELEMENTKEY(currBucket);
    }

    elog(ERROR, "unrecognized hash action code: %d", (int) action);

    return NULL;                /* keep compiler quiet */
}

/* Convert a hash value to a bucket number */
static inline uint32
calc_bucket(HASHHDR *hctl, uint32 hash_val)
{
    uint32      bucket;

    bucket = hash_val & hctl->high_mask;
    if (bucket > hctl->max_bucket)
        bucket = bucket & hctl->low_mask;

    return bucket;
}

/*
 * Allocate a new hashtable entry if possible; return NULL if out of memory.
 * (Or, if the underlying space allocator throws error for out-of-memory,
 * we won't return at all.)
 */
static HASHBUCKET
get_hash_entry(HTAB *hashp, int freelist_idx)
{
    HASHHDR    *hctl = hashp->hctl;
    HASHBUCKET  newElement;

    for (;;)
    {
        /* if partitioned, must lock to touch nentries and freeList */
        if (IS_PARTITIONED(hctl))
            SpinLockAcquire(&hctl->freeList[freelist_idx].mutex);

        /* try to get an entry from the freelist */
        newElement = hctl->freeList[freelist_idx].freeList;

        if (newElement != NULL)
            break;

        if (IS_PARTITIONED(hctl))
            SpinLockRelease(&hctl->freeList[freelist_idx].mutex);

        /*
         * No free elements in this freelist.  In a partitioned table, there
         * might be entries in other freelists, but to reduce contention we
         * prefer to first try to get another chunk of buckets from the main
         * shmem allocator.  If that fails, though, we *MUST* root through all
         * the other freelists before giving up.  There are multiple callers
         * that assume that they can allocate every element in the initially
         * requested table size, or that deleting an element guarantees they
         * can insert a new element, even if shared memory is entirely full.
         * Failing because the needed element is in a different freelist is
         * not acceptable.
         */
        if (!element_alloc(hashp, hctl->nelem_alloc, freelist_idx))
        {
            int         borrow_from_idx;

            if (!IS_PARTITIONED(hctl))
                return NULL;    /* out of memory */

            /* try to borrow element from another freelist */
            borrow_from_idx = freelist_idx;
            for (;;)
            {
                borrow_from_idx = (borrow_from_idx + 1) % NUM_FREELISTS;
                if (borrow_from_idx == freelist_idx)
                    break;      /* examined all freelists, fail */

                SpinLockAcquire(&(hctl->freeList[borrow_from_idx].mutex));
                newElement = hctl->freeList[borrow_from_idx].freeList;

                if (newElement != NULL)
                {
                    hctl->freeList[borrow_from_idx].freeList = newElement->link;
                    SpinLockRelease(&(hctl->freeList[borrow_from_idx].mutex));

                    /* careful: count the new element in its proper freelist */
                    SpinLockAcquire(&hctl->freeList[freelist_idx].mutex);
                    hctl->freeList[freelist_idx].nentries++;
                    SpinLockRelease(&hctl->freeList[freelist_idx].mutex);

                    return newElement;
                }

                SpinLockRelease(&(hctl->freeList[borrow_from_idx].mutex));
            }

            /* no elements available to borrow either, so out of memory */
            return NULL;
        }
    }

    /* remove entry from freelist, bump nentries */
    hctl->freeList[freelist_idx].freeList = newElement->link;
    hctl->freeList[freelist_idx].nentries++;

    if (IS_PARTITIONED(hctl))
        SpinLockRelease(&hctl->freeList[freelist_idx].mutex);

    return newElement;
}
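As a worked example, here is a minimal standalone program re-running calc_bucket()'s arithmetic with the concrete values GDB prints in section III (high_mask = 31, low_mask = 15, max_bucket = 15, ssize = 256, sshift = 8):

#include <stdio.h>
#include <stdint.h>

/* Standalone re-run of calc_bucket() with the traced table's masks */
int main(void)
{
    uint32_t hashvalue = 3920871586U;   /* value seen in the GDB trace */
    uint32_t high_mask = 31, low_mask = 15, max_bucket = 15;

    uint32_t bucket = hashvalue & high_mask;    /* low 5 bits: = 2 */
    if (bucket > max_bucket)
        bucket &= low_mask;                     /* not taken here */

    /* ssize = 256 and sshift = 8 in the traced table */
    printf("bucket=%u segment_num=%u segment_ndx=%u\n",
           bucket, bucket >> 8, bucket % 256);  /* prints 2, 0, 2 */
    return 0;
}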

III. Tracing and Analysis

Test script

10:44:12 (xdb@[local]:5432)testdb=# select * from t1 limit 10;

Set a breakpoint and trace:

(gdb) b hash_search_with_hash_value
Breakpoint 1 at 0xa3a4d7: file dynahash.c, line 925.
(gdb) c
Continuing.

Breakpoint 2, hash_search_with_hash_value (hashp=0x13af8f8, keyPtr=0x7ffdb7d63e40, hashvalue=3920871586, action=HASH_ENTER, 
    foundPtr=0x7ffdb7d63e3f) at dynahash.c:925
925     HASHHDR    *hctl = hashp->hctl;
(gdb)

Input parameters

(gdb) p *hashp
$3 = {hctl = 0x13af990, dir = 0x13afda8, hash = 0xa3bf74 <tag_hash>, match = 0x4791a0 <memcmp@plt>, 
  keycopy = 0x479690 <memcpy@plt>, alloc = 0xa39589 <DynaHashAlloc>, hcxt = 0x13af7e0, 
  tabname = 0x13af958 "LOCALLOCK hash", isshared = false, isfixed = false, frozen = false, keysize = 20, ssize = 256, 
  sshift = 8}
(gdb) p *hashp->hctl
$4 = {freeList = {{mutex = 0 '\000', nentries = 0, freeList = 0x13b1378}, {mutex = 0 '\000', nentries = 0, 
      freeList = 0x0} <repeats 31 times>}, dsize = 256, nsegs = 1, max_bucket = 15, high_mask = 31, low_mask = 15, 
  keysize = 20, entrysize = 80, num_partitions = 0, ffactor = 1, max_dsize = -1, ssize = 256, sshift = 8, nelem_alloc = 42}
(gdb) p *(int *)keyPtr
$5 = 16402

1. Initialize local variables, e.g. compute the freelist index from the hash value. This LOCALLOCK table has num_partitions = 0, so FREELIST_IDX() returns 0:

(gdb) n
926     int         freelist_idx = FREELIST_IDX(hctl, hashvalue);
(gdb) 
949     if (action == HASH_ENTER || action == HASH_ENTER_NULL)
(gdb) p freelist_idx
$6 = 0
(gdb) p *hctl->freeList[0].freeList
$7 = {link = 0x13b1378, hashvalue = 3920871586}
(gdb) 
(gdb) p hctl->freeList[0]
$8 = {mutex = 0 '\000', nentries = 0, freeList = 0x13b1378}

2. Since the action is an insert, check whether it is time to split a bucket. Here nentries / (max_bucket + 1) = 0 / 16 = 0, which is below ffactor (1), so expand_table() is not called:

(gdb) n
956         if (!IS_PARTITIONED(hctl) && !hashp->frozen &&
(gdb) 
957             hctl->freeList[0].nentries / (long) (hctl->max_bucket + 1) >= hctl->ffactor &&
(gdb) 
956         if (!IS_PARTITIONED(hctl) && !hashp->frozen &&
(gdb)

3. Do the initial lookup: compute the bucket number, segment number, and index within the segment (bucket = 3920871586 & 31 = 2, segment_num = 2 >> 8 = 0, segment_ndx = 2 % 256 = 2, matching the worked example in section II):

(gdb) p bucket
$9 = 2
(gdb) p segment_num
$10 = 0
(gdb) p segment_ndx
$11 = 2
(gdb) p segp
$12 = (HASHSEGMENT) 0x13b05c0
(gdb) p *segp
$13 = (HASHBUCKET) 0x0
(gdb) 
(gdb) n
975     prevBucketPtr = &segp[segment_ndx];
(gdb) 
976     currBucket = *prevBucketPtr;
(gdb) p
$14 = (HASHBUCKET) 0x0
(gdb) n
981     match = hashp->match;       /* save one fetch in inner loop */
(gdb) p currBucket
$15 = (HASHBUCKET) 0x0
(gdb)

4. Follow the collision chain looking for a matching key and set *foundPtr from the result. The chain at bucket 2 is empty (currBucket is NULL), so *foundPtr becomes false:

(gdb) n
982     keysize = hashp->keysize;   /* ditto */
(gdb) 
984     while (currBucket != NULL)
(gdb) p keysize
$16 = 20
(gdb) p match
$17 = (HashCompareFunc) 0x4791a0 <memcmp@plt>
(gdb) p currBucket
$18 = (HASHBUCKET) 0x0
(gdb) n
997     if (foundPtr)
(gdb) 
998         *foundPtr = (bool) (currBucket != NULL);
(gdb)

5. Dispatch on the requested action:
5.1 HASH_FIND: search only.
5.2 HASH_REMOVE: unlink the entry and return it to the freelist.
5.3 HASH_ENTER_NULL: assert the allocator permits returning NULL, then fall through to HASH_ENTER.
5.4 HASH_ENTER: return the existing element if found, otherwise create one. Here the action is HASH_ENTER and nothing was found, so get_hash_entry() is called:

(gdb) 
1003        switch (action)
(gdb) p action
$19 = HASH_ENTER
(gdb) n
1047                if (currBucket != NULL)
(gdb) 
1051                if (hashp->frozen)
(gdb) 
1055                currBucket = get_hash_entry(hashp, freelist_idx);
(gdb)

Step into get_hash_entry(), which allocates a new hashtable entry if possible and returns NULL on out-of-memory. Since the table is not partitioned, no spinlock is taken and the entry is popped straight off freeList[0]:

1055                currBucket = get_hash_entry(hashp, freelist_idx);
(gdb) step
get_hash_entry (hashp=0x13af8f8, freelist_idx=0) at dynahash.c:1252
1252        HASHHDR    *hctl = hashp->hctl;
(gdb) 
(gdb) n
1258            if (IS_PARTITIONED(hctl))
(gdb) p *hctl
$20 = {freeList = {{mutex = 0 '\000', nentries = 0, freeList = 0x13b1378}, {mutex = 0 '\000', nentries = 0, 
      freeList = 0x0} <repeats 31 times>}, dsize = 256, nsegs = 1, max_bucket = 15, high_mask = 31, low_mask = 15, 
  keysize = 20, entrysize = 80, num_partitions = 0, ffactor = 1, max_dsize = -1, ssize = 256, sshift = 8, nelem_alloc = 42}
(gdb) n
1262            newElement = hctl->freeList[freelist_idx].freeList;
(gdb) 
1264            if (newElement != NULL)
(gdb) p newElement
$21 = (HASHBUCKET) 0x13b1378
(gdb) p *newElement
$22 = {link = 0x13b1318, hashvalue = 3920871586}
(gdb) n
1265                break;
(gdb) 
1322        hctl->freeList[freelist_idx].freeList = newElement->link;
(gdb) 
1323        hctl->freeList[freelist_idx].nentries++;
(gdb) 
1325        if (IS_PARTITIONED(hctl))
(gdb) p *newElement->link
$23 = {link = 0x13b12b8, hashvalue = 2593617408}
(gdb) n
1328        return newElement;
(gdb) 
1329    }
(gdb)
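What the trace shows is a plain linked-list head removal, restated here with the observed addresses (this is dynahash.c lines 1262 and 1322-1323, not new logic):

/* Freelist pop as observed in the trace:
 *   before: freeList[0].freeList -> 0x13b1378 -> 0x13b1318 -> 0x13b12b8 ...
 *   after:  freeList[0].freeList -> 0x13b1318, nentries: 0 -> 1
 */
newElement = hctl->freeList[freelist_idx].freeList;         /* 0x13b1378 */
hctl->freeList[freelist_idx].freeList = newElement->link;   /* now 0x13b1318 */
hctl->freeList[freelist_idx].nentries++;                    /* 0 -> 1 */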

Back in hash_search_with_hash_value(), the new element is linked into the bucket chain, its link pointer is cleared, and the hash value and key are copied in:

hash_search_with_hash_value (hashp=0x13af8f8, keyPtr=0x7ffdb7d63e40, hashvalue=3920871586, action=HASH_ENTER, 
    foundPtr=0x7ffdb7d63e3f) at dynahash.c:1056
1056                if (currBucket == NULL)
(gdb) n
1073                *prevBucketPtr = currBucket;
(gdb) p *currBucket
$24 = {link = 0x13b1318, hashvalue = 3920871586}
(gdb) n
1074                currBucket->link = NULL;
(gdb) 
1077                currBucket->hashvalue = hashvalue;
(gdb) 
1078                hashp->keycopy(ELEMENTKEY(currBucket), keyPtr, keysize);
(gdb) p *currBucket
$25 = {link = 0x0, hashvalue = 3920871586}
(gdb) p *prevBucketPtr
$26 = (HASHBUCKET) 0x13b1378
(gdb) p **prevBucketPtr
$27 = {link = 0x0, hashvalue = 3920871586}
(gdb) n
1087                return (void *) ELEMENTKEY(currBucket);
(gdb) p (void *) ELEMENTKEY(currBucket)
$28 = (void *) 0x13b1388

The newly created entry is returned; the plain hash_search() wrapper then returns to its caller:

(gdb) n
1093    }
(gdb) 
hash_search (hashp=0x13af8f8, keyPtr=0x7ffdb7d63e40, action=HASH_ENTER, foundPtr=0x7ffdb7d63e3f) at dynahash.c:916
916 }
(gdb)

That completes the tour of what hash_search_with_hash_value does in PostgreSQL. The quickest way to consolidate it is to set the breakpoint above and step through the function against a test table of your own.
