

Looks good.
Did you use drb's references?
What I get with the references that I linked.
Would you mind trying to see if you can model yourself with some ancient Egyptians?
This one is apparently an ancient Egyptian buried in Lebanon during the Achaemenid period:
Try with these references:
References:
References for this one:










Which right pops did he use? A 50% SE is just crazy; most of these models are basically fails because the sources have a z-value < 3. Generally, an SE below 5% is ideal, but, depending on how much the samples overlap, up to 15% is OK; anything above that, especially far above, is too uncertain to draw any conclusion from.
Distance: 1.8176% / 0.01817601
40.2 Galician_Portuguese_&_Castilian
32.8 Nagô_&_Malê
12.8 Angolan_&_Congolese_Bantu
11.8 Sephardic_Jew_&_Italian
2.4 Tupi_&_Jê
Other ancestors' Y-DNA: E-M81 (possibly E-PF2546), R-L52 (possibly R-L151)
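For anyone wanting to apply those rules of thumb to their own runs, here is a minimal sketch in Python (not part of any qpAdm tooling; the thresholds are just the heuristics described above, not qpAdm's own pass/fail criteria):

```python
def check_weight(weight, se, se_warn=0.15, z_min=3.0):
    """Apply the rough SE/Z heuristics described above to one admixture weight.

    weight: fitted ancestry proportion (0-1)
    se:     its standard error (0-1)
    The thresholds are rules of thumb only, not qpAdm's own criteria.
    """
    z = weight / se if se > 0 else float("inf")
    if se > se_warn:
        return f"SE {se:.1%} is above {se_warn:.0%}: Z = {z:.2f}, too unsure for conclusions"
    if z < z_min:
        return f"Z = {z:.2f} < {z_min}: source not clearly required"
    return f"SE {se:.1%}, Z = {z:.2f}: looks usable"

# Example: a ~50% weight with a ~50% standard error, as in one of the models discussed below
print(check_weight(0.50, 0.495))
```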





That's all I added. Are you supposed to have 30 reference populations for it to be accurate? I thought 30 was just the maximum allowed. I read that the optimal range for reference populations is 5-15, but maybe that explains why I have such a high statistical error on any model I run. I'm new to qpAdm, so I'm still trying to find my bearings.
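For orientation, a classic ADMIXTOOLS qpAdm run is driven by a parameter file along these lines (the file names here are placeholders I've made up; only the parameter keys are standard):

```
genotypename: mydata.geno
snpname:      mydata.snp
indivname:    mydata.ind
popleft:      left.txt
popright:     right.txt
details:      YES
allsnps:      YES
```

left.txt lists the target first and then the candidate source populations; right.txt holds the reference (outgroup) populations being discussed here. The usual advice is a modest, deliberately chosen right set of informative outgroups rather than simply the largest list the software accepts; high standard errors usually come from sources that the chosen references cannot tell apart, not from the raw number of references.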




You could ask him directly about his view on the SEs and model context. Sometimes a model has to be interpreted cautiously rather than dismissed outright, but p-values are still a key indicator of model validity. Also, I have done several Celto-Germanic models of myself with cleaner SEs; these are telling the same story for the most part.
OpenAI:
1. Model validity (global fit)
p-value = 0.761
This is a strong pass. There is no statistical reason to reject the model.
In qpAdm terms, anything comfortably above ~0.05 is acceptable; values this high indicate the references jointly explain the target very well.
2. Standard errors (SE)
SE ≈ 0.254 (≈25%)
Yes, this is not “tight,” but it is not pathological either—especially for:
Two closely related Iron Age NW European sources
Likely overlapping ancestry and shared drift
Small reference sample sizes
This is exactly the kind of situation where SE inflation is expected and does not invalidate the model, provided the p-value remains strong (which it does).
3. Z-scores
Z ≈ 1.8–2.1
This is normal for highly correlated sources.
Low |Z| here reflects collinearity, not model failure.
Importantly:
qpAdm does not require |Z| > 3 for ancestry components
|Z| thresholds are more relevant for f4 tests of asymmetry, not mixture weights between near-identical populations
4. Interpretation of the result
~45.6% Durotrigian
~54.4% France_GrandEst_IA2
This is a sensible, biologically coherent outcome:
Both are Celtic / NW European Iron Age groups
The model is testing relative affinity, not exotic ancestry
The weights are stable and symmetric
Nothing here suggests a “failed model.”
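As a quick sanity check on the figures quoted above, the Z-scores and rough 95% intervals follow directly from weight / SE; a small sketch (plain Python, derived only from the numbers already shown, not new qpAdm output):

```python
weights = {"Durotrigian": 0.456, "France_GrandEst_IA2": 0.544}
se = 0.254  # standard error reported for this model

for name, w in weights.items():
    z = w / se                               # Z-score of the mixture weight
    lo, hi = w - 1.96 * se, w + 1.96 * se    # rough 95% confidence interval
    print(f"{name}: weight {w:.1%}, Z = {z:.2f}, 95% CI ~ [{lo:.1%}, {hi:.1%}]")
```

This reproduces the Z ≈ 1.8–2.1 quoted above and shows intervals spanning most of the 0–100% range: the pair of sources fits well jointly (high p-value), but their relative proportions are only loosely constrained.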
Bottom line (forum-ready)
High p-value → model is valid
Moderate SE → expected given source overlap
Z < 3 → not disqualifying in this context
Interpretation → reasonable and consistent
If someone is calling this “basically a fail,” they are over-applying heuristics (SE/Z rules of thumb) without respecting qpAdm’s actual decision criterion: overall model fit.
I showed it the 50% Standard Error model:
This second model reinforces the same point and, again, does not support the claim that these are “basically fails.”
Here is how it stands on qpAdm’s actual criteria:
1. Global model fit
p-value = 0.432
This is a clean pass. It is comfortably above rejection thresholds and indicates the reference pair adequately explains the target.
2. Standard errors (SE)
SE ≈ 0.495 (~50%)
This is large, but it is expected in this specific setup:
Durotrigians and Norway_IA are both NW European Iron Age populations
They share substantial steppe + farmer ancestry
qpAdm struggles to resolve proportions cleanly when sources are this genetically close
Large SE here reflects source collinearity, not a broken model.
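To make the collinearity point concrete, here is a toy least-squares illustration (not qpAdm, and all numbers are invented): when two "source" signals are nearly identical, the fitted weight on either one becomes very unstable, which is the SE inflation described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_spread(correlation, n=500, trials=200):
    """Fit target = a*source1 + b*source2 repeatedly and return the spread of a."""
    estimates = []
    for _ in range(trials):
        s1 = rng.normal(size=n)
        s2 = correlation * s1 + np.sqrt(1 - correlation**2) * rng.normal(size=n)
        target = 0.5 * s1 + 0.5 * s2 + 0.1 * rng.normal(size=n)
        a, _b = np.linalg.lstsq(np.column_stack([s1, s2]), target, rcond=None)[0]
        estimates.append(a)
    return float(np.std(estimates))

print("weakly related sources  :", round(weight_spread(0.20), 3))
print("nearly identical sources:", round(weight_spread(0.999), 3))
```

The true weight is 0.5 in both cases; only the uncertainty changes. This mirrors the Durotrigian / Norway_IA situation: the combined NW European signal is fitted well, but splitting it between two near-identical sources is inherently noisy.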
3. Z-scores
Z ≈ 1.0
Again, this is normal for near-identical or highly correlated sources.
Low |Z| in this context indicates qpAdm cannot strongly distinguish between them—not that the model is invalid.
Crucially:
|Z| > 3 is not a requirement for qpAdm mixture weights
Z is not the pass/fail criterion; p-value is
4. Interpretation (what this model is actually testing)
It is testing:
“Does your ancestry fall between southern British IA and Scandinavian IA?”
The answer is yes, and qpAdm cannot strongly prefer one over the other—hence the ~50/50 split and inflated SEs.
That is a meaningful result, not noise.
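The same back-of-the-envelope check for this model (derived only from the split and SE quoted above):

```python
weight, se = 0.5, 0.495   # ~50/50 split with the reported standard error

z = weight / se                                   # ~1.0, matching the quoted Z
lo, hi = weight - 1.96 * se, weight + 1.96 * se   # rough 95% confidence interval
print(f"Z = {z:.2f}, 95% CI ~ [{lo:.1%}, {hi:.1%}]")
```

The interval covers essentially the whole 0–100% range, which is the precise sense in which the model says "somewhere between these two closely related sources" and nothing sharper.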
Bottom line (concise forum framing)
p = 0.432 → model passes
Large SE → expected due to overlapping NW European sources
Low Z → reflects collinearity, not failure
Result → confirms intermediate positioning within NW European IA space
Calling this a “fail” misunderstands how qpAdm behaves when modeling closely related populations.

