This article looks at how to understand PostgreSQL's partitioned tables, walking through the relevant data structures and source code and then verifying the behaviour with a gdb session.
In PostgreSQL, partitioned tables are implemented through "inheritance". That raises a question: when data is inserted, how does PostgreSQL decide which target partition a row belongs in? The function ExecPrepareTupleRouting prepares for routing the tuple to be inserted; its main job is to determine the partition the tuple belongs to.
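The inheritance link itself can be observed from SQL before diving into the internals. A minimal sketch, assuming the t_hash_partition table created by the test script later in this article already exists:

-- List the pg_inherits entries tying each partition to its parent;
-- declarative partitions show up here just like inheritance children.
select inhrelid::regclass as partition, inhparent::regclass as parent
  from pg_inherits
 where inhparent = 't_hash_partition'::regclass;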
ModifyTable
ModifyTable Node
Apply rows produced by subplan(s) to result table(s) by inserting, updating, or deleting.
/* ----------------
 *   ModifyTable node -
 *      Apply rows produced by subplan(s) to result table(s),
 *      by inserting, updating, or deleting.
 *
 * If the originally named target table is a partitioned table, both
 * nominalRelation and rootRelation contain the RT index of the partition
 * root, which is not otherwise mentioned in the plan.  Otherwise rootRelation
 * is zero.  However, nominalRelation will always be set, as it's the rel that
 * EXPLAIN should claim is the INSERT/UPDATE/DELETE target.
 *
 * Note that rowMarks and epqParam are presumed to be valid for all the
 * subplan(s); they can't contain any info that varies across subplans.
 * ----------------
 */
typedef struct ModifyTable
{
    Plan        plan;
    CmdType     operation;          /* INSERT, UPDATE, or DELETE */
    bool        canSetTag;          /* do we set the command tag/es_processed? */
    Index       nominalRelation;    /* Parent RT index for use of EXPLAIN */
    Index       rootRelation;       /* Root RT index, if target is partitioned */
    bool        partColsUpdated;    /* some part key in hierarchy updated */
    List       *resultRelations;    /* integer list of RT indexes */
    int         resultRelIndex;     /* index of first resultRel in plan's list */
    int         rootResultRelIndex; /* index of the partitioned table root */
    List       *plans;              /* plan(s) producing source data */
    List       *withCheckOptionLists;   /* per-target-table WCO lists */
    List       *returningLists;     /* per-target-table RETURNING tlists */
    List       *fdwPrivLists;       /* per-target-table FDW private data lists */
    Bitmapset  *fdwDirectModifyPlans;   /* indices of FDW DM plans */
    List       *rowMarks;           /* PlanRowMarks (non-locking only) */
    int         epqParam;           /* ID of Param for EvalPlanQual re-eval */
    OnConflictAction onConflictAction;  /* ON CONFLICT action */
    List       *arbiterIndexes;     /* List of ON CONFLICT arbiter index OIDs */
    List       *onConflictSet;      /* SET for INSERT ON CONFLICT DO UPDATE */
    Node       *onConflictWhere;    /* WHERE for ON CONFLICT UPDATE */
    Index       exclRelRTI;         /* RTI of the EXCLUDED pseudo relation */
    List       *exclRelTlist;       /* tlist of the EXCLUDED pseudo relation */
} ModifyTable;
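The ModifyTable node sits at the top of every INSERT/UPDATE/DELETE plan. As a quick illustration (a sketch using the test table defined later in this article; EXPLAIN output wording varies slightly across versions):

explain (verbose, costs off)
insert into t_hash_partition(c1,c2,c3) values (0,'HASH0','HAHS0');
--  Insert on public.t_hash_partition
--    ->  Result
--          Output: 0, 'HASH0'::character varying(40), 'HAHS0'::character varying(40)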
ResultRelInfo
The ResultRelInfo struct
Whenever an existing relation is updated, the indexes on the relation must be updated too, and triggers may need to be fired. ResultRelInfo holds all the information needed about a result relation, including indexes.
/*
 * ResultRelInfo
 *
 * Whenever we update an existing relation, we have to update indexes on the
 * relation, and perhaps also fire triggers.  ResultRelInfo holds all the
 * information needed about a result relation, including indexes.
 *
 * Normally, a ResultRelInfo refers to a table that is in the query's
 * range table; then ri_RangeTableIndex is the RT index and ri_RelationDesc
 * is just a copy of the relevant es_relations[] entry.  But sometimes,
 * in ResultRelInfos used only for triggers, ri_RangeTableIndex is zero
 * and ri_RelationDesc is a separately-opened relcache pointer that needs
 * to be separately closed.  See ExecGetTriggerResultRel.
 */
typedef struct ResultRelInfo
{
    NodeTag     type;

    /* result relation's range table index, or 0 if not in range table */
    Index       ri_RangeTableIndex;

    /* relation descriptor for result relation */
    Relation    ri_RelationDesc;

    /* # of indices existing on result relation */
    int         ri_NumIndices;

    /* array of relation descriptors for indices */
    RelationPtr ri_IndexRelationDescs;

    /* array of key/attr info for indices */
    IndexInfo **ri_IndexRelationInfo;

    /* triggers to be fired, if any */
    TriggerDesc *ri_TrigDesc;

    /* cached lookup info for trigger functions */
    FmgrInfo   *ri_TrigFunctions;

    /* array of trigger WHEN expr states */
    ExprState **ri_TrigWhenExprs;

    /* optional runtime measurements for triggers */
    Instrumentation *ri_TrigInstrument;

    /* FDW callback functions, if foreign table */
    struct FdwRoutine *ri_FdwRoutine;

    /* available to save private state of FDW */
    void       *ri_FdwState;

    /* true when modifying foreign table directly */
    bool        ri_usesFdwDirectModify;

    /* list of WithCheckOption's to be checked */
    List       *ri_WithCheckOptions;

    /* list of WithCheckOption expr states */
    List       *ri_WithCheckOptionExprs;

    /* array of constraint-checking expr states */
    ExprState **ri_ConstraintExprs;

    /* for removing junk attributes from tuples */
    JunkFilter *ri_junkFilter;

    /* list of RETURNING expressions */
    List       *ri_returningList;

    /* for computing a RETURNING list */
    ProjectionInfo *ri_projectReturning;

    /* list of arbiter indexes to use to check conflicts */
    List       *ri_onConflictArbiterIndexes;

    /* ON CONFLICT evaluation state */
    OnConflictSetState *ri_onConflict;

    /* partition check expression */
    List       *ri_PartitionCheck;

    /* partition check expression state */
    ExprState  *ri_PartitionCheckExpr;

    /* relation descriptor for root partitioned table */
    Relation    ri_PartitionRoot;

    /* Additional information specific to partition tuple routing */
    struct PartitionRoutingInfo *ri_PartitionInfo;
} ResultRelInfo;
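The ri_PartitionCheck list holds the partition constraint that the gdb session below shows being filled in for the chosen partition. From SQL, the same constraint can be inspected with pg_get_partition_constraintdef() (PostgreSQL 11+); a sketch against the test table defined later, with the output rendered only approximately:

select pg_get_partition_constraintdef('t_hash_partition_3'::regclass);
-- satisfies_hash_partition('<parent oid>'::oid, 6, 2, c1)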
PartitionRoutingInfo
The PartitionRoutingInfo struct
Additional result relation information specific to routing tuples to a table partition.
/*
 * PartitionRoutingInfo
 *
 * Additional result relation information specific to routing tuples to a
 * table partition.
 */
typedef struct PartitionRoutingInfo
{
    /*
     * Map for converting tuples in root partitioned table format into
     * partition format, or NULL if no conversion is required.
     */
    TupleConversionMap *pi_RootToPartitionMap;

    /*
     * Map for converting tuples in partition format into the root
     * partitioned table format, or NULL if no conversion is required.
     */
    TupleConversionMap *pi_PartitionToRootMap;

    /*
     * Slot to store tuples in partition format, or NULL when no translation
     * is required between root and partition.
     */
    TupleTableSlot *pi_PartitionTupleSlot;
} PartitionRoutingInfo;
TupleConversionMap
The TupleConversionMap struct stores the tuple conversion map information.
typedef struct TupleConversionMap
{
    TupleDesc   indesc;         /* tupdesc for source rowtype */
    TupleDesc   outdesc;        /* tupdesc for result rowtype */
    AttrNumber *attrMap;        /* indexes of input fields, or 0 for null */
    Datum      *invalues;       /* workspace for deconstructing source */
    bool       *inisnull;       /* null flags for source values */
    Datum      *outvalues;      /* workspace for constructing result */
    bool       *outisnull;      /* null flags for result values */
} TupleConversionMap;
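A conversion map is only built when a partition's physical row layout differs from the root's, for example when a table that once had a (since dropped) column is attached as a partition. A minimal sketch, assuming a scratch database (t_range and t_range_1 are hypothetical names):

-- t_range_1 has a hole in its attribute numbers because of the dropped
-- column, so its tuple descriptor differs from the root's and routed
-- tuples must go through a TupleConversionMap.
drop table if exists t_range, t_range_1;
create table t_range (c1 int, c2 text) partition by range (c1);
create table t_range_1 (c1 int, junk int, c2 text);
alter table t_range_1 drop column junk;
alter table t_range attach partition t_range_1 for values from (0) to (100);
insert into t_range values (1, 'converted during routing');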
The ExecPrepareTupleRouting function determines the partition that the tuple in the slot belongs to, and modifies mtstate, estate, and other related state to prepare for the actual insertion that follows.
/*
 * ExecPrepareTupleRouting --- prepare for routing one tuple
 *
 * Determine the partition in which the tuple in slot is to be inserted,
 * and modify mtstate and estate to prepare for it.
 *
 * Caller must revert the estate changes after executing the insertion!
 * In mtstate, transition capture changes may also need to be reverted.
 *
 * Returns a slot holding the tuple of the partition rowtype.
 */
static TupleTableSlot *
ExecPrepareTupleRouting(ModifyTableState *mtstate,
                        EState *estate,
                        PartitionTupleRouting *proute,
                        ResultRelInfo *targetRelInfo,
                        TupleTableSlot *slot)
{
    ModifyTable *node;
    int         partidx;
    ResultRelInfo *partrel;
    HeapTuple   tuple;

    /*
     * Determine the target partition.  If ExecFindPartition does not find a
     * partition after all, it doesn't return here; otherwise, the returned
     * value is to be used as an index into the arrays for the ResultRelInfo
     * and TupleConversionMap for the partition.
     */
    partidx = ExecFindPartition(targetRelInfo,
                                proute->partition_dispatch_info,
                                slot,
                                estate);
    Assert(partidx >= 0 && partidx < proute->num_partitions);

    /*
     * Get the ResultRelInfo corresponding to the selected partition; if not
     * yet there, initialize it.
     */
    partrel = proute->partitions[partidx];
    if (partrel == NULL)
        partrel = ExecInitPartitionInfo(mtstate, targetRelInfo,
                                        proute, estate,
                                        partidx);

    /*
     * Check whether the partition is routable if we didn't yet
     *
     * Note: an UPDATE of a partition key invokes an INSERT that moves the
     * tuple to a new partition.  This check would be applied to a subplan
     * partition of such an UPDATE that is chosen as the partition to route
     * the tuple to.  The reason we do this check here rather than in
     * ExecSetupPartitionTupleRouting is to avoid aborting such an UPDATE
     * unnecessarily due to non-routable subplan partitions that may not be
     * chosen for update tuple movement after all.
     */
    if (!partrel->ri_PartitionReadyForRouting)
    {
        /* Verify the partition is a valid target for INSERT. */
        CheckValidResultRel(partrel, CMD_INSERT);

        /* Set up information needed for routing tuples to the partition. */
        ExecInitRoutingInfo(mtstate, estate, proute, partrel, partidx);
    }

    /*
     * Make it look like we are inserting into the partition.
     */
    estate->es_result_relation_info = partrel;

    /* Get the heap tuple out of the given slot. */
    tuple = ExecMaterializeSlot(slot);

    /*
     * If we're capturing transition tuples, we might need to convert from
     * the partition rowtype to parent rowtype.
     */
    if (mtstate->mt_transition_capture != NULL)
    {
        if (partrel->ri_TrigDesc &&
            partrel->ri_TrigDesc->trig_insert_before_row)
        {
            /*
             * If there are any BEFORE triggers on the partition, we'll have
             * to be ready to convert their result back to tuplestore format.
             */
            mtstate->mt_transition_capture->tcs_original_insert_tuple = NULL;
            mtstate->mt_transition_capture->tcs_map =
                TupConvMapForLeaf(proute, targetRelInfo, partidx);
        }
        else
        {
            /*
             * Otherwise, just remember the original unconverted tuple, to
             * avoid a needless round trip conversion.
             */
            mtstate->mt_transition_capture->tcs_original_insert_tuple = tuple;
            mtstate->mt_transition_capture->tcs_map = NULL;
        }
    }
    if (mtstate->mt_oc_transition_capture != NULL)
    {
        mtstate->mt_oc_transition_capture->tcs_map =
            TupConvMapForLeaf(proute, targetRelInfo, partidx);
    }

    /*
     * Convert the tuple, if necessary.
     */
    ConvertPartitionTupleSlot(proute->parent_child_tupconv_maps[partidx],
                              tuple,
                              proute->partition_tuple_slot,
                              &slot);

    /* Initialize information needed to handle ON CONFLICT DO UPDATE. */
    Assert(mtstate != NULL);
    node = (ModifyTable *) mtstate->ps.plan;
    if (node->onConflictAction == ONCONFLICT_UPDATE)
    {
        Assert(mtstate->mt_existing != NULL);
        ExecSetSlotDescriptor(mtstate->mt_existing,
                              RelationGetDescr(partrel->ri_RelationDesc));
        Assert(mtstate->mt_conflproj != NULL);
        ExecSetSlotDescriptor(mtstate->mt_conflproj,
                              partrel->ri_onConflict->oc_ProjTupdesc);
    }

    return slot;
}

/*
 * ExecFetchSlotHeapTuple - fetch HeapTuple representing the slot's content
 *
 * The returned HeapTuple represents the slot's content as closely as
 * possible.
 *
 * If materialize is true, the contents of the slots will be made independent
 * from the underlying storage (i.e. all buffer pins are released, memory is
 * allocated in the slot's context).
 *
 * If shouldFree is not-NULL it'll be set to true if the returned tuple has
 * been allocated in the calling memory context, and must be freed by the
 * caller (via explicit pfree() or a memory context reset).
 *
 * NB: If materialize is true, modifications of the returned tuple are
 * allowed.  But it depends on the type of the slot whether such
 * modifications will also affect the slot's contents.  While that is not
 * the nicest behaviour, all such modifications are in the process of being
 * removed.
 */
HeapTuple
ExecFetchSlotHeapTuple(TupleTableSlot *slot, bool materialize, bool *shouldFree)
{
    /* sanity checks */
    Assert(slot != NULL);
    Assert(!TTS_EMPTY(slot));

    /* Materialize the tuple so that the slot "owns" it, if requested. */
    if (materialize)
        slot->tts_ops->materialize(slot);

    if (slot->tts_ops->get_heap_tuple == NULL)
    {
        if (shouldFree)
            *shouldFree = true;
        return slot->tts_ops->copy_heap_tuple(slot);
    }
    else
    {
        if (shouldFree)
            *shouldFree = false;
        return slot->tts_ops->get_heap_tuple(slot);
    }
}
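As the comment above the ExecFindPartition call notes, the function does not return when no partition fits; it raises an error instead. A small sketch of that failure path (t_range2 is a hypothetical table; exact message wording may differ by version):

drop table if exists t_range2;
create table t_range2 (c1 int) partition by range (c1);
create table t_range2_1 partition of t_range2 for values from (0) to (10);
insert into t_range2 values (100);
-- ERROR:  no partition of relation "t_range2" found for row
-- DETAIL:  Partition key of the failing row contains (c1) = (100).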
The test script is as follows:
-- Hash Partition
drop table if exists t_hash_partition;
create table t_hash_partition (c1 int not null, c2 varchar(40), c3 varchar(40)) partition by hash(c1);
create table t_hash_partition_1 partition of t_hash_partition for values with (modulus 6, remainder 0);
create table t_hash_partition_2 partition of t_hash_partition for values with (modulus 6, remainder 1);
create table t_hash_partition_3 partition of t_hash_partition for values with (modulus 6, remainder 2);
create table t_hash_partition_4 partition of t_hash_partition for values with (modulus 6, remainder 3);
create table t_hash_partition_5 partition of t_hash_partition for values with (modulus 6, remainder 4);
create table t_hash_partition_6 partition of t_hash_partition for values with (modulus 6, remainder 5);

-- delete from t_hash_partition where c1 = 0;
insert into t_hash_partition(c1,c2,c3) VALUES(0,'HASH0','HAHS0');
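After running the script, the partition chosen for the row can be checked from SQL by reading the hidden tableoid column; the expected output is sketched below and matches the partidx = 2 / t_hash_partition_3 result traced in gdb later:

select tableoid::regclass as partition, c1, c2, c3
  from t_hash_partition;
--      partition      | c1 |  c2   |  c3
-- --------------------+----+-------+-------
--  t_hash_partition_3 |  0 | HASH0 | HAHS0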
Start gdb, set a breakpoint, and step into ExecPrepareTupleRouting.
(gdb) b ExecPrepareTupleRouting
Breakpoint 1 at 0x710b1e: file nodeModifyTable.c, line 1712.
(gdb) c
Continuing.

Breakpoint 1, ExecPrepareTupleRouting (mtstate=0x1e4de60, estate=0x1e4daf8, proute=0x1e4eb48,
    targetRelInfo=0x1e4dd48, slot=0x1e4e4e0) at nodeModifyTable.c:1712
1712        partidx = ExecFindPartition(targetRelInfo,
Viewing the call stack shows that ExecPrepareTupleRouting is called from ExecModifyTable, to prepare for the subsequent insert.
(gdb) bt
#0  ExecPrepareTupleRouting (mtstate=0x1e4de60, estate=0x1e4daf8, proute=0x1e4eb48,
    targetRelInfo=0x1e4dd48, slot=0x1e4e4e0) at nodeModifyTable.c:1712
#1  0x0000000000711602 in ExecModifyTable (pstate=0x1e4de60) at nodeModifyTable.c:2157
#2  0x00000000006e4c30 in ExecProcNodeFirst (node=0x1e4de60) at execProcnode.c:445
#3  0x00000000006d9974 in ExecProcNode (node=0x1e4de60) at ../../../src/include/executor/executor.h:237
#4  0x00000000006dc22d in ExecutePlan (estate=0x1e4daf8, planstate=0x1e4de60, use_parallel_mode=false,
    operation=CMD_INSERT, sendTuples=false, numberTuples=0, direction=ForwardScanDirection,
    dest=0x1e67e90, execute_once=true) at execMain.c:1723
#5  0x00000000006d9f5c in standard_ExecutorRun (queryDesc=0x1e39d68, direction=ForwardScanDirection,
    count=0, execute_once=true) at execMain.c:364
#6  0x00000000006d9d7f in ExecutorRun (queryDesc=0x1e39d68, direction=ForwardScanDirection, count=0,
    execute_once=true) at execMain.c:307
#7  0x00000000008cbdb3 in ProcessQuery (plan=0x1e67d18,
    sourceText=0x1d60ec8 "insert into t_hash_partition(c1,c2,c3) VALUES(0,'HASH0','HAHS0');",
    params=0x0, queryEnv=0x0, dest=0x1e67e90, completionTag=0x7ffdcf148b20 "") at pquery.c:161
#8  0x00000000008cd6f9 in PortalRunMulti (portal=0x1dc6538, isTopLevel=true, setHoldSnapshot=false,
    dest=0x1e67e90, altdest=0x1e67e90, completionTag=0x7ffdcf148b20 "") at pquery.c:1286
#9  0x00000000008cccb9 in PortalRun (portal=0x1dc6538, count=9223372036854775807, isTopLevel=true,
    run_once=true, dest=0x1e67e90, altdest=0x1e67e90, completionTag=0x7ffdcf148b20 "") at pquery.c:799
#10 0x00000000008c6b1e in exec_simple_query (
    query_string=0x1d60ec8 "insert into t_hash_partition(c1,c2,c3) VALUES(0,'HASH0','HAHS0');")
    at postgres.c:1145
#11 0x00000000008cae70 in PostgresMain (argc=1, argv=0x1d8aba8, dbname=0x1d8aa10 "testdb",
    username=0x1d5dba8 "xdb") at postgres.c:4182
Find the partition the tuple belongs to.
(gdb) n
1716        Assert(partidx >= 0 && partidx < proute->num_partitions);
(gdb) p partidx
$1 = 2
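partidx is a 0-based index into proute->partitions; here index 2 corresponds to t_hash_partition_3 (modulus 6, remainder 2). The same routing decision can be checked from SQL with the built-in satisfies_hash_partition() function (available since PostgreSQL 11); a sketch:

-- Does c1 = 0 hash to remainder 2 under modulus 6 for this table?
select satisfies_hash_partition('t_hash_partition'::regclass, 6, 2, 0);
--  satisfies_hash_partition
-- --------------------------
--  t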
Get the ResultRelInfo corresponding to the selected partition; it is not there yet, so initialize it.
(gdb) n
1722        partrel = proute->partitions[partidx];
(gdb)
1723        if (partrel == NULL)
(gdb) p *partrel
Cannot access memory at address 0x0
(gdb) n
1724        partrel = ExecInitPartitionInfo(mtstate, targetRelInfo,
partrel after initialization:
(gdb) p *partrel
$2 = {type = T_ResultRelInfo, ri_RangeTableIndex = 1, ri_RelationDesc = 0x1e7c940, ri_NumIndices = 0,
  ri_IndexRelationDescs = 0x0, ri_IndexRelationInfo = 0x0, ri_TrigDesc = 0x0, ri_TrigFunctions = 0x0,
  ri_TrigWhenExprs = 0x0, ri_TrigInstrument = 0x0, ri_FdwRoutine = 0x0, ri_FdwState = 0x0,
  ri_usesFdwDirectModify = false, ri_WithCheckOptions = 0x0, ri_WithCheckOptionExprs = 0x0,
  ri_ConstraintExprs = 0x0, ri_junkFilter = 0x0, ri_returningList = 0x0, ri_projectReturning = 0x0,
  ri_onConflictArbiterIndexes = 0x0, ri_onConflict = 0x0, ri_PartitionCheck = 0x1e4f538,
  ri_PartitionCheckExpr = 0x0, ri_PartitionRoot = 0x1e7c2f8, ri_PartitionReadyForRouting = true}
The target partition's relation descriptor --> t_hash_partition_3
(gdb) p *partrel->ri_RelationDesc
$3 = {rd_node = {spcNode = 1663, dbNode = 16402, relNode = 16995}, rd_smgr = 0x1e34510, rd_refcnt = 1,
  rd_backend = -1, rd_islocaltemp = false, rd_isnailed = false, rd_isvalid = true,
  rd_indexvalid = 0 '\000', rd_statvalid = false, rd_createSubid = 0, rd_newRelfilenodeSubid = 0,
  rd_rel = 0x1e7c1e0, rd_att = 0x1e7cb58, rd_id = 16995, rd_lockInfo = {
    lockRelId = {relId = 16995, dbId = 16402}}, rd_rules = 0x0, rd_rulescxt = 0x0, trigdesc = 0x0,
  rd_rsdesc = 0x0, rd_fkeylist = 0x0, rd_fkeyvalid = false, rd_partkeycxt = 0x0, rd_partkey = 0x0,
  rd_pdcxt = 0x0, rd_partdesc = 0x0, rd_partcheck = 0x1e7aa30, rd_indexlist = 0x0, rd_oidindex = 0,
  rd_pkindex = 0, rd_replidindex = 0, rd_statlist = 0x0, rd_indexattr = 0x0, rd_projindexattr = 0x0,
  rd_keyattr = 0x0, rd_pkattr = 0x0, rd_idattr = 0x0, rd_projidx = 0x0, rd_pubactions = 0x0,
  rd_options = 0x0, rd_index = 0x0, rd_indextuple = 0x0, rd_amhandler = 0, rd_indexcxt = 0x0,
  rd_amroutine = 0x0, rd_opfamily = 0x0, rd_opcintype = 0x0, rd_support = 0x0, rd_supportinfo = 0x0,
  rd_indoption = 0x0, rd_indexprs = 0x0, rd_indpred = 0x0, rd_exclops = 0x0, rd_exclprocs = 0x0,
  rd_exclstrats = 0x0, rd_amcache = 0x0, rd_indcollation = 0x0, rd_fdwroutine = 0x0, rd_toastoid = 0,
  pgstat_info = 0x1de40b0}
------------------
testdb=# select oid,relname from pg_class where oid=16995;
  oid  |      relname
-------+--------------------
 16995 | t_hash_partition_3
(1 row)
-----------------
The partition is routable:
(gdb) p partrel->ri_PartitionReadyForRouting
$4 = true
Set the estate variable (make it look like the insert targets the partition) and materialize the tuple.
(gdb) n
1751        estate->es_result_relation_info = partrel;
(gdb)
1754        tuple = ExecMaterializeSlot(slot);
(gdb)
1760        if (mtstate->mt_transition_capture != NULL)
(gdb) p tuple
$5 = (HeapTuple) 0x1e4f4e0
(gdb) p *tuple
$6 = {t_len = 40, t_self = {ip_blkid = {bi_hi = 65535, bi_lo = 65535}, ip_posid = 0}, t_tableOid = 0,
  t_data = 0x1e4f4f8}
(gdb) p *tuple->t_data
$7 = {t_choice = {t_heap = {t_xmin = 160, t_xmax = 4294967295, t_field3 = {t_cid = 2249, t_xvac = 2249}},
    t_datum = {datum_len_ = 160, datum_typmod = -1, datum_typeid = 2249}},
  t_ctid = {ip_blkid = {bi_hi = 65535, bi_lo = 65535}, ip_posid = 0}, t_infomask2 = 3, t_infomask = 2,
  t_hoff = 24 '\030', t_bits = 0x1e4f50f ""}
mtstate->mt_transition_capture is NULL, so no transition-capture handling is needed.
(gdb) p mtstate->mt_transition_capture
$8 = (struct TransitionCaptureState *) 0x0
1783        if (mtstate->mt_oc_transition_capture != NULL)
(gdb)
Convert the tuple, if necessary.
1792        ConvertPartitionTupleSlot(proute->parent_child_tupconv_maps[partidx],
(gdb)
1798        Assert(mtstate != NULL);
(gdb)
1799        node = (ModifyTable *) mtstate->ps.plan;
(gdb) p *mtstate
$9 = {ps = {type = T_ModifyTableState, plan = 0x1e59838, state = 0x1e4daf8,
    ExecProcNode = 0x711056 <ExecModifyTable>, ExecProcNodeReal = 0x711056 <ExecModifyTable>,
    instrument = 0x0, worker_instrument = 0x0, worker_jit_instrument = 0x0, qual = 0x0, lefttree = 0x0,
    righttree = 0x0, initPlan = 0x0, subPlan = 0x0, chgParam = 0x0, ps_ResultTupleSlot = 0x1e4ede8,
    ps_ExprContext = 0x0, ps_ProjInfo = 0x0, scandesc = 0x0}, operation = CMD_INSERT, canSetTag = true,
  mt_done = false, mt_plans = 0x1e4e078, mt_nplans = 1, mt_whichplan = 0, resultRelInfo = 0x1e4dd48,
  rootResultRelInfo = 0x0, mt_arowmarks = 0x1e4e098, mt_epqstate = {estate = 0x0, planstate = 0x0,
    origslot = 0x1e4e4e0, plan = 0x1e59588, arowMarks = 0x0, epqParam = 0}, fireBSTriggers = false,
  mt_existing = 0x0, mt_excludedtlist = 0x0, mt_conflproj = 0x0, mt_partition_tuple_routing = 0x1e4eb48,
  mt_transition_capture = 0x0, mt_oc_transition_capture = 0x0, mt_per_subplan_tupconv_maps = 0x0}
Return the slot, completing the call.
(gdb) n
1800        if (node->onConflictAction == ONCONFLICT_UPDATE)
(gdb)
1810        return slot;
(gdb)
1811    }
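Once ExecPrepareTupleRouting returns and the insert itself executes, the row can be queried from the chosen leaf partition directly; the expected output, sketched from the test data above:

select * from t_hash_partition_3;
--  c1 |  c2   |  c3
-- ----+-------+-------
--   0 | HASH0 | HAHS0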