narsille

Testers
  • Content Count: 427
1 Follower

About narsille

  • Rank
    Treepie

Profile Information

  • Interests
    Cooking, Beer, Sake, Travel, Sleater-Kinney
  • Gender
    Male
  • Location
    Boston

  1. Happy to announce that Narsille's House of Tools is now open on Black Raven's Nest. "Quality tools for discriminating buyers." Commissions accepted.
  2. So Jah, I've got a bunch more data, and my results are consistent with my sims. (I'll post stuff later today after I make a couple more runs and go see Ocean's 8.) So I am really wondering where your "bad advice" claim is coming from. I posted my data. I posted my code. I posted my results. And my new results are consistent with my earlier claims... So please, show me where I am wrong. Do you have a data set that is inconsistent with mine? Is there an error in my code? Am I misinterpreting the data? Or are you pulling assertions out of your ass?
  3. Sorry if this sounds tautological, but: "Because Necromancy is harder than blacksmithing." As I understand matters, this is a deliberate design choice. It's relatively easy to bang some metal around on a forge. Raising the dead is hard.
  4. setwd("~/Documents")
     library(KernSmooth)   # provides bkde()

     ###########
     # Roll all 6 pips at once
     foo_crit = rep(0, 9)   # defined but not included in foo below
     foo_f = rep(0, 26)
     foo_success = rep(6.552, 39)
     foo_m = rep(12.978, 22)
     foo_good = rep(35.19, 17)
     foo_great = rep(40.212, 4)
     foo_amazing = rep(50.265, 4)
     foo = c(foo_f, foo_success, foo_m, foo_good, foo_great, foo_amazing)
     bar = sample(foo, 10000, replace = TRUE, prob = NULL)
     bar = bar + 6.25
     plot(bkde(bar, kernel = "normal", canonical = FALSE, gridsize = 401L, truncate = TRUE), type = "l", col = "blue", xlab = "Quality", ylab = "Density", ylim = c(0, .1), main = "Final Knife Skinning, 6 Pips")

     ###########
     # Roll 3 + 3 (same outcome vector foo as above)
     bar1 = sample(foo, 10000, replace = TRUE, prob = NULL) / 2
     bar2 = sample(foo, 10000, replace = TRUE, prob = NULL) / 2
     bar = bar1 + bar2 + 6.25
     points(bkde(bar, kernel = "normal", canonical = FALSE, gridsize = 401L, truncate = TRUE), type = "l", col = "green")

     #################
     # Roll 4 + 2
     bar1 = sample(foo, 10000, replace = TRUE, prob = NULL) * (2/3)
     bar2 = sample(foo, 10000, replace = TRUE, prob = NULL) * (1/3)
     bar = bar1 + bar2 + 6.25
     points(bkde(bar, kernel = "normal", canonical = FALSE, gridsize = 401L, truncate = TRUE), type = "l", col = "violet")

     #################
     # Roll 5 + 1
     bar1 = sample(foo, 10000, replace = TRUE, prob = NULL) * (5/6)
     bar2 = sample(foo, 10000, replace = TRUE, prob = NULL) * (1/6)
     bar = bar1 + bar2 + 6.25
     points(bkde(bar, kernel = "normal", canonical = FALSE, gridsize = 401L, truncate = TRUE), type = "l", col = "black")

     #################
     # Diff = 45, roll one pip at a time
     foo_cf = rep(-100, 2)
     foo_f = rep(0, 3)
     foo_success = rep(.728, 34)
     foo_m = rep(1.422, 27)
     foo_good = rep(3.91, 15)
     foo_great = rep(4.468, 10)
     foo_amazing = rep(5.585, 8)
     foo = c(foo_cf, foo_f, foo_success, foo_m, foo_good, foo_great, foo_amazing)
     bar = sample(foo, 60000, replace = TRUE, prob = NULL)
     bar = matrix(bar, 6, 10000)   # 6 rolls per tool, 10,000 tools
     test = colSums(bar)
     test[test < 0] <- 0
     test = test + 6.25
     points(bkde(test, kernel = "normal", canonical = FALSE, gridsize = 401L, truncate = TRUE), type = "l", col = "red")
  5. OK, brand new chart for people. I did a simulation that shows various options if you are rolling six pips at a time on final assembly. (The assumption is that you use a total of 12 pips. You are rolling 6 + 6, but you're splitting the pips between quality and durability.)
     The blue line shows what happens if you roll all six pips at once
     The green line shows 3 + 3
     The black line shows 5 + 1
     The violet line shows 4 + 2
     The red line shows what happens if you roll all six pips one at a time
     https://nofile.io/f/1FkbWDWdOOC/Rplot21.jpeg
     If you look at the diagram, it's clear that what we're optimizing for is the location of the secondary peak. Regardless of how you allocate your points, you're going to experience some failures. As such, you have one peak of the distribution down between 6.25 and 15 or so. You also have a secondary peak: if you don't produce a so-so result, what do you usually end up with?
     If you roll 3 + 3, the secondary peak is in the upper 20s
     If you roll 4 + 2, the secondary peak is in the low 30s
     If you roll 5 + 1, the secondary peak is in the mid to upper 30s
     If you roll 6 at once, the secondary peak is in the low 40s, with some low 50s
     I'd make the argument that the 5 + 1 strategy is dominated by 6 at once, by which I mean the secondary peak of the blue line yields a better quality knife and is also just as frequent. The other allocation strategies all seem to make sense. The big question boils down to just how good a knife you need.
     If you are content with +20s or so and don't need anything exceptional, then rolling one pip at a time is clearly your best choice
     If you need +30s, roll 3 + 3
     If you need +40s or better, six at once or 4 + 2 is the way to go
  6. I think so as well. Once I am crafting large numbers of green / blue runestones, I will probably do a serious study.
  7. In the first simulation, I was allocating points one at a time. In order to generate one "virtual" tool, I needed six rolls. To generate 10,000 tools I needed 60,000 rolls. In the second simulation, I was allocating points 6 at a time; as such, I only needed 10K rolls to create 10K tools. (I chose 10K because it was a number that I was sure was large enough. I am pretty sure that I could have gotten by with less, but compute time is free.)
     There is some of what you are talking about going on. Look at the code that I use to create the vector that I am sampling from:
     foo_cf = rep(-100, 27)
     foo_f = rep(0, 515)
     foo_m = rep(1.422, 3767)
     foo_good = rep(3.91, 2873)
     foo_great = rep(4.468, 1870)
     foo_amazing = rep(5.585, 949)
     foo = c(foo_cf, foo_f, foo_m, foo_good, foo_great, foo_amazing)
     27 + 515 + 3767 + 2873 + 1870 + 949 comes to 10,001: essentially one entry per observation. This was done because the number of critical fails and fails was really, really small, so I needed a big vector for those to show up as real numbers. So, in this case, the number 10K has meaning.
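     The reshaping step described above (folding 60,000 single-pip rolls into 10,000 six-roll tools) can be sketched as follows; foo is the sampling vector built in the post, and the zero floor on negative sums is my reading of how critical fails are handled:

     ```r
     # One-pip outcome vector, as defined above
     foo_cf      = rep(-100, 27)
     foo_f       = rep(0, 515)
     foo_m       = rep(1.422, 3767)
     foo_good    = rep(3.91, 2873)
     foo_great   = rep(4.468, 1870)
     foo_amazing = rep(5.585, 949)
     foo = c(foo_cf, foo_f, foo_m, foo_good, foo_great, foo_amazing)

     rolls = sample(foo, 60000, replace = TRUE)    # 60,000 single-pip rolls
     tools = matrix(rolls, nrow = 6, ncol = 10000) # one column = one tool (6 rolls)
     quality = colSums(tools)                      # total quality per tool
     quality[quality < 0] <- 0                     # a critical fail can't push below zero
     length(quality)                               # 10,000 simulated tools
     ```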
  8. I assume that you are talking about a combination of:
     Highly skilled crafters could roll the same pip position multiple times, to try to get a better roll for that position
     Risk = number of chances taken in total, not all at once
     Restoring the impact of material quality on assembly success
     From my perspective, I dislike that first proposal because it strikes me as highly unrealistic. As I have mentioned many times before, I do some recreational blacksmithing. And, as a rule, if you make a serious screw-up, it's usually better to scrap a piece rather than try to fix it. You just can't recover from burning the metal, too many reheats, or a bad weld. In a similar vein, if I am woodworking and take off too much wood, there will be a permanent impact on the quality of the piece. In your world, a bad roll won't have any such impact unless you are in a position where you aren't going to spend all your pips.
     I am indifferent towards the second proposal. It's a different way of doing things; I don't necessarily see it as better or worse.
     I favor the third proposal and believe it to be more realistic.
     With this said and done, I don't think that there are too many failures today. As such, I would prefer to keep the current failure rates and increase the difficulty for white / green / blue / purple construction.
  9. If you do things right, avoid loops, and vectorize your code, R can be extremely fast. I can allocate and check run lengths in a vector of length 10^6 in a couple of seconds. So the issue isn't running the sims. Rather, it's the matter of collecting and recording enough observations of rolling different combinations of pips so that I have representative samples to parameterize the simulation.
     (BTW, when I say vectorize code, I mean using operations that generate long vectors of numbers with the appropriate characteristics, rather than using loops that modify one element of the vector at a time. In this example, the critical piece of code is the line
     bar = sample(foo, 60000, replace = TRUE, prob = NULL)
     which says: create a vector bar that is 60K elements long, where each element is drawn at random from foo, sampling with replacement. That is MUCH faster than: generate one random number, check its value, put the appropriate value (CF, F, S, MS, Good_S, Great_S, AS) into element one of the vector, and repeat 59,999 times.)
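     The speed difference described above is easy to see for yourself. Here is a small sketch (the foo vector is just an illustrative stand-in, not the crafting data) that times the single vectorized sample() call against the element-at-a-time loop it replaces:

     ```r
     foo = c(rep(-100, 3), rep(0, 52), rep(1.422, 377), rep(3.91, 287))  # any outcome vector will do

     # Vectorized: one call fills all 60,000 elements at once
     system.time({
       fast = sample(foo, 60000, replace = TRUE)
     })

     # Looped: draw and assign one element at a time
     system.time({
       slow = numeric(60000)
       for (i in 1:60000) {
         slow[i] = sample(foo, 1, replace = TRUE)
       }
     })
     ```

     Both produce a 60,000-element vector drawn from the same distribution; the loop just pays the function-call overhead 60,000 times instead of once.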
  10. Few (minor) quibbles: The mean is not a robust estimator, by which I mean that - in the presence of outliers - it does not provide a good indicator of central tendency. Distributions like the blue ones are classic examples where one prefers to use the median rather than the mean (there is a reason why I was discussing the center of mass of the distributions rather than the average). All this is a long-winded way of saying that I would argue that the one-pip allocation produces significantly better results than 6-pip in most cases.
      The technique that I am using here is called resampling. (I only mention this because you repeated the same experiment 5 times to check for robustness. Trust me when I say that running this experiment 10,000 times will yield quite robust results - there is a reason why the median and the mean that you reported are so tight.)
      I start by creating a vector (foo) that mirrors the distribution of results that I experienced when crafting with one-pip draws
      I then sample with replacement 60,000 times from foo
      I break this vector into 10,000 tools, each of which consists of six pips, and calculate the tool quality
      I create a second vector that mirrors the distribution of results that I experienced when crafting with six-pip draws
      I sample this vector with replacement 10,000 times
      I break this vector into 10,000 tools, each of which uses the same draw six times, and calculate the tool quality
      The main reason that I used a resampling technique is that there is a world of potential tool combinations that are possible, but that I probably won't see in the real world, that resampling will generate.
      FWIW, my own guildmates are now quibbling and want me to run a bunch more experiments testing other possible pip allocation schemes.
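      The robustness point about the mean versus the median is easy to demonstrate with made-up numbers (not the crafting data): a handful of critical-fail outliers drags the mean down while the median barely moves.

      ```r
      # 1,000 "ordinary" results plus ten -100 critical-fail outliers
      set.seed(1)
      ordinary = rnorm(1000, mean = 30, sd = 3)
      with_outliers = c(ordinary, rep(-100, 10))

      mean(ordinary);      median(ordinary)       # both near 30
      mean(with_outliers); median(with_outliers)  # mean pulled well below 30; median barely moves
      ```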
  11. R. (I'm only using the base language plus the bundled KernSmooth package for bkde(), so you won't need to install anything.) One point that is worth clarifying: the PDF that is being generated shows the expected amount that the assembly roll improves the final result. So, when you see a best possible result of 50.265, this is the amount that you'd add to the base skinning roll of 6.25 to produce a +56.515 skinning knife. (And yes, the way that I am dealing with critical fails assumes that this happens on the last pip, but it's not going to change the end results.)
  12. Krakken, if you are unable to look at a PDF and extract the center of mass, then it's not worth my time dealing with you. With this said and done, here is the code. It should answer your questions.
      setwd("~/Documents")
      library(KernSmooth)   # provides bkde()
      foo_cf = rep(-100, 27)
      foo_f = rep(0, 515)
      foo_m = rep(1.422, 3767)
      foo_good = rep(3.91, 2873)
      foo_great = rep(4.468, 1870)
      foo_amazing = rep(5.585, 949)
      foo = c(foo_cf, foo_f, foo_m, foo_good, foo_great, foo_amazing)
      bar = sample(foo, 60000, replace = TRUE, prob = NULL)
      bar = matrix(bar, 6, 10000)   # 6 rolls per tool, 10,000 tools
      test = colSums(bar)
      test[test < 0] <- 0
      mean(test)
      max(test)
      plot(bkde(test, kernel = "normal", canonical = FALSE, gridsize = 401L, truncate = TRUE), main = "Kernel Smoothed Assembly Rolls", type = "p", col = "red", xlab = "Quality", ylab = "Density")
      ###########
      foo_f = rep(0, 10)
      foo_success = rep(6.552, 21)
      foo_m = rep(12.978, 21)
      foo_good = rep(35.19, 21)
      foo_great = rep(40.212, 19)
      foo_amazing = rep(50.265, 8)
      foo = c(foo_f, foo_success, foo_m, foo_good, foo_great, foo_amazing)
      bar = sample(foo, 10000, replace = TRUE, prob = NULL)
      points(bkde(bar, kernel = "normal", canonical = FALSE, gridsize = 401L, truncate = TRUE), type = "p", col = "blue")
      mean(bar)
      max(bar)
  13. FWIW, I am going to provide a detailed breakdown of some crafting experiments that I did today.
      Background information:
      All crafting was done using a L10 Nethari rune crafter with 125 INT
      With buffs, I had 9.6 experimentation pips and a 42.5 experimentation skill
      All of the crafting involved making skinning knives
      Each knife was constructed with common quality parchment, gold, silver, and stone
      I started by crafting 25 knives and assigned one pip at a time when making the sigil, the rhinestone, and the final assembly. Next, I crafted a second batch of 25 knives and, this time, I rolled 6 pips for the sigil, the rhinestone, and the final assembly. I recorded the results of each and every roll and summarized the information in the following chart.

                         Diff 25 / One Pip   Diff 45 / One Pip   Diff 25 / Six Pips   Diff 45 / Six Pips
      Crit Fail          0.3%                2%                  0%                   7%
      Failure            5.1%                3%                  10%                  21%
      Success            0.0%                34%                 21%                  32%
      Moderate Success   37.7%               27%                 21%                  18%
      Good Success       28.7%               15%                 21%                  14%
      Great Success      18.7%               10%                 19%                  4%
      Amazing Success    9.5%                8%                  8%                   4%

      Next, I used this data to run some simulations and "crafted" 10,000 items using each method, then plotted the results. (Note: for convenience I ran a kernel smooth over the results of the simulation, which is why the probability density function extends below zero.) You can see the plot at https://nofile.io/f/zDBNWuXErkf/Rplot19.jpeg
      When you look at the chart:
      Red shows a simulation where I rolled one pip at a time
      Blue shows a simulation where I rolled 6 pips at a time
      The solid lines show a 25-difficulty assembly roll
      The lines constructed of circles show a 45-difficulty assembly roll
      Here's what I am taking away:
      I don't see an excessive number of failures
      For both the component assembly and the final assembly, on average, one-pip assembly produces much better items than rolling all your pips at once. (The center of mass of the distribution is much further to the right)
      Rolling all your stuff at once has the potential of producing exceptional goods, but this happens very rarely. Most of the time you end up with something mediocre.
      Personally, I see nothing wrong with this system. Changing things to make the two distributions closer to one another would decrease the set of options available to people. You can create average items if you want. And you have the option to be more risky and try to create something exceptional, knowing that you might end up with something bad.
      From my perspective, the current round of complaints boils down to "make it easier / less risky for us to produce the best goods". I don't think that this part of the game should be trivialized.
  14. So what? I think that you're way too tied up with the rolls that you get. The actual challenge here - the thing that makes things interesting - is understanding the specifics of the crafting system and doing the best that you can subject to those constraints.