|
melon cat posted:Alright. New project, new challenge. =CONCATENATE(IF(E8<F8,"+",""),(F8-E8)/E8) e: Well if you want it to be an actual percentage I guess use =CONCATENATE(IF(E8<F8,"+",""),((F8-E8)/E8) * 100,"%") e2: Don't put the minus sign if you don't want doubles, heh. IF() works pretty much how you'd expect, here's a link: http://www.techonthenet.com/excel/formulas/if.php RICHUNCLEPENNYBAGS fucked around with this message at 03:16 on Jun 24, 2013 |
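For anyone following along outside Excel, the formula's logic can be sketched in Python — E8 and F8 become `old` and `new`, and the sample values below are made up:

```python
def pct_change_label(old, new):
    """Mimic =CONCATENATE(IF(E8<F8,"+",""),((F8-E8)/E8)*100,"%"):
    prefix a "+" only for increases (decreases already carry a "-")."""
    sign = "+" if old < new else ""
    return f"{sign}{(new - old) / old * 100}%"

print(pct_change_label(100, 125))  # an increase prints as "+25.0%"
print(pct_change_label(125, 100))  # a decrease keeps its own minus: "-20.0%"
```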
# ? Jun 24, 2013 03:07 |
|
|
|
RICHUNCLEPENNYBAGS posted:=CONCATENATE(IF(E8<F8,"+",""),(F8-E8)/E8) Your Excel-Fu is strong. Thanks for the quick reply. That did the trick. Small thing, though. I tried using this function with some other data, and it returns a really long number, ie. 9.00009009%. I tried formatting the cell to 0 decimal places, but it's still showing the longer version. Any suggested fix for this? And thanks for that link, by the way.
|
# ? Jun 24, 2013 03:42 |
|
melon cat posted:
You can use the ROUND() function to say how many decimal places you want to go to. =CONCATENATE(IF(E8<F8,"+",""),ROUND(((F8-E8)/E8) * 100, 2),"%")
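In the same spirit, ROUND() maps straight onto Python's `round()`, which is what trims the long 9.00900...% tail down to two places (the sample numbers here are invented):

```python
def pct_change_label(old, new, places=2):
    """Mimic =CONCATENATE(IF(E8<F8,"+",""),ROUND(((F8-E8)/E8)*100, 2),"%")."""
    sign = "+" if old < new else ""
    return f"{sign}{round((new - old) / old * 100, places)}%"

print(pct_change_label(111, 121))  # 10/111 -> 9.00900...%, rounded to "+9.01%"
```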
|
# ? Jun 24, 2013 12:27 |
|
This is honestly the most classic Excel mistake. If you want to change the way something is presented, modify the number format, DON'T modify your data. What do you do now if you want to, say, compute the average rate over many years? You can't, because your percentages are now strings. They are just letters. This is why "formatting the cell to 0 decimal places" doesn't work anymore: there are no decimals in the cell, those are just letters. Just put the rate formula in the cell ((F8-E8)/E8 or whatever), then set the number format to "Custom", and put in something like "+0.00%;-0.00%".
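To see what a two-section custom format does, here is a rough Python emulation of "+0.00%;-0.00%" — the first section formats non-negatives, the second negatives, and crucially the underlying cell value stays a number:

```python
def custom_pct_format(rate):
    """Rough emulation of the Excel custom format "+0.00%;-0.00%".
    This only changes the display; the stored value is untouched."""
    if rate >= 0:
        return f"+{rate * 100:.2f}%"
    return f"-{abs(rate) * 100:.2f}%"

print(custom_pct_format(0.0901))   # display only; the cell still holds 0.0901
print(custom_pct_format(-0.05))
```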
|
# ? Jun 24, 2013 14:36 |
|
I'd use a custom format. Something like "+00.00%;-00.00%;00.00%" might do it but I forget the exact syntax.
|
# ? Jun 24, 2013 18:47 |
|
ShimaTetsuo posted:If you want to change the way something is presented, modify the number format, DON'T modify your data. Also, if you change presentation by truncating digits from the cell value, you lose precision. Recovering numbers stored as text is merely a pain in the rear end, but recovering lost precision is actually impossible.
|
# ? Jun 24, 2013 19:01 |
|
ShimaTetsuo posted:This is honestly the most classic Excel mistake. If you want to change the way something is presented, modify the number format, DON'T modify your data. What do you do now if you want to, say, compute the average rate over many years? You can't, because your percentages are now strings. They are just letters. This is why "formatting the cell to 0 decimal places" doesn't work anymore, there are no decimals in the cell, those are just letters. That's probably a better idea. EDIT: Although I tested it and actually you can calculate against a cell filled in with my formula; it will be recognized as a percent. Also you can explicitly cast with the =NUMBERVALUE() function, although it isn't necessary. I suppose you're still correct about lost precision from rounding. RICHUNCLEPENNYBAGS fucked around with this message at 23:35 on Jun 26, 2013 |
# ? Jun 25, 2013 00:42 |
|
I have a time sheet at work, and every day I'll need to put in the projects I've worked on and the number of hours. I want all these projects to populate into one cell in a table on the next sheet, but without duplicates. i.e.

Mon - Proj 1, Proj 3
Tue - Proj 4, Proj 5, Proj 11
Wed - Proj 7
Thu - Proj 1, Proj 5
Fri - Proj 11

One cell on next sheet: Proj 1, Proj 3, Proj 4, Proj 5, Proj 7, Proj 11

SB35 fucked around with this message at 17:07 on Jun 28, 2013 |
# ? Jun 28, 2013 16:55 |
|
SB35 posted:I have a time sheet at work, and every day I'll need to put in the projects I've worked on and the number of hours. This is the kind of thing pivot tables were invented for. Just use a pivot table.
|
# ? Jun 30, 2013 16:48 |
|
I have the following data table: where the data points are arranged for each mouse over time in rows. How can I rearrange the tables quickly and not by hand so that the data points are arranged for each mouse over time in columns instead, so that the labels Mouse 1-8 are in cells A2:H2, and the time point labels are in cells A2:A7?
|
# ? Jul 2, 2013 19:35 |
|
regularizer posted:I have the following data table: Copy -> Paste Special -> Transpose
|
# ? Jul 2, 2013 19:47 |
|
Thank!
|
# ? Jul 2, 2013 19:53 |
|
regularizer posted:T FTFY
|
# ? Jul 2, 2013 20:15 |
|
I'm really bad at Excel, and this is probably easy to solve. I have a set of data that looks similar to code:
John Capslocke fucked around with this message at 20:02 on Jul 5, 2013 |
# ? Jul 5, 2013 19:59 |
|
sholin posted:I'm really bad at Excel, and this is probably easy to solve. I have a set of data that looks similar to I assume you meant a count of companies instead of sum. Use =countifs() http://office.microsoft.com/en-us/excel-help/countifs-function-HA010047494.aspx There is a =sumifs() with similar syntax http://office.microsoft.com/en-us/excel-help/sumifs-function-HA010047504.aspx
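Since the original data didn't survive the post, here's a small invented table showing what =COUNTIFS() and =SUMIFS() do — count (or sum) only the rows that satisfy every criterion:

```python
# Hypothetical rows standing in for the lost "code:" block above.
rows = [
    {"company": "Acme", "region": "East", "sales": 10},
    {"company": "Acme", "region": "West", "sales": 20},
    {"company": "Beta", "region": "East", "sales": 5},
]

def countifs(rows, **criteria):
    """Like =COUNTIFS(): count rows matching every named criterion."""
    return sum(all(r[k] == v for k, v in criteria.items()) for r in rows)

def sumifs(rows, field, **criteria):
    """Like =SUMIFS(): sum `field` over rows matching every criterion."""
    return sum(r[field] for r in rows
               if all(r[k] == v for k, v in criteria.items()))

print(countifs(rows, company="Acme"))        # 2 rows mention Acme
print(sumifs(rows, "sales", region="East"))  # 10 + 5 = 15
```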
|
# ? Jul 5, 2013 20:07 |
|
Old James posted:I assume you meant a count of companies instead of sum. Use =countifs() http://office.microsoft.com/en-us/excel-help/countifs-function-HA010047494.aspx You're right, I meant count, and countifs was exactly what I was looking for, thanks a bunch.
|
# ? Jul 5, 2013 20:20 |
|
Prepare for the worst flowchart in history. Okay, so this is what I came up with for my current project, where we have to optimize shift RPMs to find the lowest race time with a bunch of different parameters. I'm 100% sure that the logic of this flowchart is solid (worked through by the instructor), but I'm just having a hell of a time getting my logic into code (i.e., loving nested loops). I have the gear ratios in an array, which is how I switch through them. So I figure it's something like this: code:
Probably shouldn't have taken a programming class in a 6-week summer session without any prior experience. Edit: I'm like 99.99% sure I have the syntax nailed down, so that shouldn't be a problem. Target Practice fucked around with this message at 01:12 on Jul 6, 2013 |
# ? Jul 6, 2013 00:56 |
|
I can't really understand your chart or your problem. But generally if you think you need 10 levels of nesting you aren't going about the problem in the best way.
|
# ? Jul 6, 2013 03:50 |
|
RICHUNCLEPENNYBAGS posted:I can't really understand your chart or your problem. But generally if you think you need 10 levels of nesting you aren't going about the problem in the best way. Well I'm back with a good chart and code! Alright, so here's what I got to handle selecting a shift rpm to test, running the calculations, then if the engine rpm is at or above the shift RPM we're testing, it shifts to the next gear. When the max gear is hit, it starts again with the next shift rpm. I think. code:
Edit: That EngineTorque() is a function I wrote based off of given data that relates the engine's torque output to its RPM. That seems to work correctly, so I didn't include it. Target Practice fucked around with this message at 19:37 on Jul 7, 2013 |
# ? Jul 7, 2013 19:28 |
|
RICHUNCLEPENNYBAGS posted:I can't really understand your chart or your problem. Seriously! Can you abstract away the details and reformulate your problem? What I am getting from your posts is that you are trying to minimize a function via a grid search. However, since the code snippet you posted is incomplete, it's very difficult to figure out exactly what is going on (what the inputs are, if you declared all your variables properly, if you have special compiler options on, if maybe it's your "output" routine that's giving you the wrong thing but the calculations are correct...). Assuming that I've correctly understood the basic problem, I would:

1. Make the objective function (essentially your inner loop) a separate function. On sample input, does the function return the correct output (the total race time or whatever)?
2. Once that's done, test the optimization algorithm (the outer loop) by running it in a case where you already know the answer (just one gear, maybe?). Does this work correctly, at least?
3. Test the input and output routines separately from the computations (i.e. put a breakpoint in the code after the computations are complete, and inspect the values of the variables there and make sure they match what eventually comes out on your spreadsheet).

Hopefully that helps you find out where the problem is.
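Since the actual VBA isn't visible in the thread, here is a language-neutral sketch (Python, with an invented toy objective in place of the real race simulation) of the separation being suggested — objective function apart from the grid search, tested on a case with a known answer:

```python
def race_time(shift_rpm):
    """Toy stand-in for the real simulation (EngineTorque() etc. aren't
    shown in the thread): a bowl-shaped curve with its minimum at 5500 RPM."""
    return (shift_rpm - 5500) ** 2 / 1e6 + 60.0

def grid_search(objective, candidates):
    """The outer loop: try every candidate shift RPM, keep the best one."""
    best = min(candidates, key=objective)
    return best, objective(best)

# Because the toy objective's answer is known, a wrong optimizer shows up fast.
best_rpm, best_time = grid_search(race_time, range(4000, 7001, 100))
print(best_rpm, best_time)  # recovers the known minimum at 5500 RPM
```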
|
# ? Jul 8, 2013 01:30 |
I would check the inputs/outputs if the first and second iteration works. Just from the code you posted, I don't see why it would just stop working correctly for the third. Also, if you didn't know, you can use F8 to go through this step by step and see what the variables are throughout.
|
|
# ? Jul 8, 2013 17:13 |
|
I've got a spreadsheet where everything is separated into monthly tabs. Each month a similar set of data is put in, and I've been asked to do a check and see whether there are any repeats going across the month, in a 30, 60 and 90 day repeat column. I've got to lookup and match across the first two columns, A, which is the unique value, and B, which is a problem associated with it. A can repeat in each month because there can be more than one problem. code:
=IF((MATCH(A3:B3,'TAB1 - June'!A:B,0),YES,"") But that's not working, and I've got no loving clue how to get a 60 or 90 day repeat working in the same way.
|
# ? Jul 9, 2013 16:55 |
|
edit: Disregard this post.
RICHUNCLEPENNYBAGS fucked around with this message at 23:30 on Jul 10, 2013 |
# ? Jul 10, 2013 04:52 |
|
I don't think you can match a range (like A3:B3 won't work). Is there only going to be one unique combination of A and B per month (if at all)? e1: Ok so I'm using my novice excel skills here.. hopefully someone will come in with a better answer. I think you can make it work if you create a column that concatenates the names and problems together.. that'll give you something you can match easily. If it finds a match, it returns the relative position in the range that it is matching against.. otherwise you get an #N/A error. So you could do something like (pardon the pseudocode for now): IF(ISERROR(MATCH(name&problemJuly,name&problemJune,0))," ","YES"). But concatenating seems pretty janky, I'm sure there is a more elegant solution. MANY EDITS LATER: Ok here's what I just came up with.. The formula in the bar is in column F.. To do 60 or 90 days, I'm not sure exactly what you mean (like if it happened 30, 60, AND 90 days ago, etc), but you could implement AND() or OR() to give multiple conditions to come to a TRUE or FALSE result.. OR(July matches June, July matches May, July matches April).. something like that? Hope this helps until the big boys come in. Turkeybone fucked around with this message at 06:12 on Jul 10, 2013 |
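In Python terms, the concatenate-then-MATCH trick looks like this (the names and problems below are invented):

```python
# Last month's and this month's (name, problem) pairs — hypothetical data.
june = [("Smith", "Leak"), ("Jones", "Outage")]
july = [("Smith", "Leak"), ("Brown", "Noise")]

def repeat_flags(current, prior):
    """Emulate IF(ISERROR(MATCH(name&problem, ...))," ","YES"):
    build last month's name&problem keys, then flag this month's repeats."""
    prior_keys = {name + problem for name, problem in prior}
    return ["YES" if name + problem in prior_keys else " "
            for name, problem in current]

print(repeat_flags(july, june))  # Smith/Leak repeats, Brown/Noise doesn't
```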
# ? Jul 10, 2013 05:02 |
|
Turkeybone posted:I don't think you can match a range (like A3:B3 won't work). Is there only going to be one unique combination of A and B per month (if at all)? This works really well man, thanks. e: Run into a new problem. The formula; =IF(MATCH(S4,'Oct-13'!S:S,0),"YES","") works fine, but will match two blank cells and return a YES. Does anybody know how I'd get it to ignore blank cells? I'm trying; =IF(ISBLANK(S4),"",(MATCH(S4,'Oct-13'!S:S,0),"YES","") but that's not working S4 has a formula in it, which is =CONCATENATE(A3,B3) Total Meatlove fucked around with this message at 14:56 on Jul 11, 2013 |
# ? Jul 11, 2013 09:32 |
|
If you have 2007 or newer look up COUNTIFS, and if that is >0 you can put Yes, which should be cleaner than concatenation. Your cell might be an empty string, so instead of ISBLANK you may test if it =""
|
# ? Jul 11, 2013 15:09 |
|
poo poo, is COUNTIF only on 2007 and later? e: I managed it somehow; =IF(S3="","",(IF(MATCH(S3,'Nov-13'!S:S,0),"YES",""))) will ignore blank cells and works properly. Total Meatlove fucked around with this message at 16:19 on Jul 11, 2013 |
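The blank-cell guard in that working formula is just a short-circuit before the lookup — roughly, in Python (the sample keys are invented):

```python
def repeat_flag(key, prior_keys):
    """Emulate =IF(S3="","",IF(MATCH(S3,'Nov-13'!S:S,0),"YES","")):
    bail out on an empty concatenated key before attempting the match."""
    if key == "":
        return ""
    return "YES" if key in prior_keys else ""

prior = {"SmithLeak", "JonesOutage"}
print(repeat_flag("SmithLeak", prior))  # matched last month: "YES"
print(repeat_flag("", prior))           # empty concatenation: stays blank
```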
# ? Jul 11, 2013 15:11 |
|
Total Meatlove posted:poo poo, is COUNTIF only on 2007 and later? Pre-2007 only =countif() works. It lets you count on a single criterion. After 2007 =countifs() was added to let you use multiple criteria.
|
# ? Jul 11, 2013 22:29 |
|
I need macro help for Excel 2010 (and by help, I mean please do this for me because I have no loving clue at all). What I have is a database of document ID's, some of which are attached to other documents as illustrated by the ParentID field (eg SAM001.002 is attached to SAM001.001 in spreadsheet attached). I want to remove some documents from the set as they are irrelevant and renumber the documents (new docID in the ProdNo field) and update the ParentID field with the new document ID's if applicable. Rough algorithm follows: code:
https://docs.google.com/spreadsheet/ccc?key=0AmEPA_fKjqmMdEdoOHVyQk9CTzVfYnZrcnFLaWlYWHc&usp=sharing
|
# ? Jul 11, 2013 22:52 |
|
The vlookup below will work with the caveat that I don't know how you deal with attachments whose parent files do not have ProdNo values. For example, SAM001.006, which has a new ProdNo of FIN001.003 and an original ParentId of SAM001.005, would be kept with a NewParID, except SAM001.005 does not have a ProdNo. In cell D2 type: =IFERROR(VLOOKUP(C2,$A$2:$B$25,2,),"") Drag it down as far as you have populated data. This will populate the NewParID with values of 0 for the cases noted in the intro paragraph. This will work as long as you do not have repeated Docid entries.
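The lookup logic amounts to a dictionary get with a blank default — here in Python, with illustrative ID pairs following the sample sheet's naming:

```python
# DocId -> ProdNo mapping, i.e. columns A and B of the sheet
# (these particular pairs are invented for illustration).
prodno = {
    "SAM001.001": "FIN001.001",
    "SAM001.002": "FIN001.002",
}

def new_par_id(parent_id):
    """Emulate =IFERROR(VLOOKUP(C2,$A$2:$B$25,2,),""):
    exact-match lookup, blank instead of #N/A when the parent has no ProdNo."""
    return prodno.get(parent_id, "")

print(new_par_id("SAM001.002"))  # parent kept: returns its new ProdNo
print(new_par_id("SAM001.005"))  # parent removed from the set: blank
```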
|
# ? Jul 11, 2013 23:07 |
|
Holy balls, thank you very very much.
|
# ? Jul 11, 2013 23:36 |
|
I have a pivot table and chart that I'm using for reporting on our section's Mean Time To Resolve IT Incidents. The pivot table is being fed data from our Service Desk application. Results in: I want to draw one (or possibly more) bold horizontal lines (or a colored box, perhaps?) to mark what the "SLA time" is in our Service Desk Tool (8 hours), to illustrate that the bulk of our resolution times are WAY higher than the "SLA Time". The kicker is that we're using this as part of a Balanced Scorecard initiative; it's being posted on a Sharepoint site using Excel Services, so that may limit some options, since it's got to display the graph right on the BSC intranet site during the monthly meeting. What's the best way to go about this? I would also be interested in figuring out how to add lines to the Data Table at the bottom that show the Incident Count for each Priority for the month, without necessarily showing the count as a line-chart or similar above. Kenny Rogers fucked around with this message at 23:42 on Jul 11, 2013 |
# ? Jul 11, 2013 23:36 |
|
Kenny Rogers posted:I have a pivot table and chart that I'm using for reporting on our section's Mean Time To Resolve IT Incidents. The pivot table is being fed data from our Service Desk application. Add a column to the source data called something like "Goal" and put a value of 8 in each cell. Then add it to the pivot table as a value "Average of Goal" as a line chart. That should draw a solid line across your pivot table and it will stay relative to your scale.
|
# ? Jul 12, 2013 01:41 |
|
Old James posted:Add a column to the source data called something like "Goal" and put a value of 8 in each cell. Then add it to the pivot table as a value "Average of Goal" as a line chart. That should draw a solid line across your pivot table and it will stay relative to your scale. Is there a relatively painless way to, I dunno, mirror the Pivot Table data to a regular table on a tab in an altogether separate workbook and add the column there? Having them continue to update if I make changes to the Pivot Table workbook is crucial. I've got ~30+ tabs/tables at this point. Two for each section (Service Desk, Application Support, Application Development, Asset Management, PMO, etc.) and two for each Business Service (Emergency Notification, Maximo, etc.). I don't want to maintain them by hand, beyond clicking selectors in a Slicer in the Pivot Table Workbook if I can avoid it.
|
# ? Jul 12, 2013 01:59 |
|
You don't have to actually add it to the source data. A similar solution would be to add a calculated field whose "formula" is just "=8". You could also just build it in to the SQL query that feeds the pivot table: SELECT *, 8 AS SLA_Time FROM TABLE, and so on. Much less of a headache than getting all the data out and into another table. However, the chart you are suggesting doesn't really show what you want to show ("the bulk of our resolution times are WAY higher than the SLA Time"). You should actually compute the proportion of recorded outcomes which are greater than 8 hours, and report that (the empirical probability of taking longer than 8 hours). If you just show the mean, it's possible that most of the time you are indeed under 8 hours, but that there was a single huge meltdown that took forever, which doesn't sound nearly as problematic as always being over.
|
# ? Jul 12, 2013 11:47 |
|
ShimaTetsuo posted:You don't have to actually add it to the source data. A similar solution would be to add a calculated field whose "formula" is just "=8". You could also just build it in to the SQL query that feeds the pivot table: SELECT *, 8 AS SLA_Time FROM TABLE, and so on. Much less of a headache than getting all the data out and into another table. Not only are they unattainable, but *we don't want to* attain them. (off thread topic, but exposition on why...) For Priority 3 incidents (of all sorts) across our organization, our SD tool has a "repair time" of 8 hours. Our actual Mean Time To Resolve (from the time it's assigned to a tech, until that tech marks the issue as resolved) is 54 hours. (for P2, it's 4 hours, and for P1's it's 2. We blow those, too) Why don't we care? Because our Customer Service Satisfaction Rating (The people who answered "Satisfied" or "Very Satisfied" with the service we provide on the most recent (5/13) annual customer satisfaction survey) is 85%. In 2009, when we started implementing/managing by a Balanced Scorecard, our Customer Service Satisfaction Rating was 65%. So, the MTTR that we have meets or exceeds the customer's expectation - and Customer Satisfaction is the yardstick by which we measure our performance. We can put a lot of effort into raising that 85% to an 86%, or we can figure out what the 3% that answered "Unsatisfied" or "Very Unsatisfied" need to get them out of the basement by next year, and do that (which is probably not nearly as much effort). (Back on topic:) You may have gathered that our IT department is fairly large at this point. I don't have access to the SQL, or I'd go that route. We're pulling the info out of our SD tool into a farm of various flavors of SQL Server Reporting Services (Not the SME on that, I "gotta guy" who manages all that), and I'm given a link to tap that as an external data source in Excel.
If I can manually add a column to the Pivot data in Excel, that'll work. If I can copy/refer to the data in a normal table in another workbook, etc., that'll work. I'm thinking I'll just have to get into a couple shouting matches over it though. Kenny Rogers fucked around with this message at 08:03 on Jul 13, 2013 |
# ? Jul 13, 2013 07:58 |
|
I am not a business person so I'm not exactly familiar with all of these terms, but I am a statistician and I can tell you that "MTTR" is not a good metric and it does not show what you want it to show. Consider the following hypothetical situation:

- You have 100 clients, each one reports one incident throughout the year.
- The first incident comes in on January 1st and is really really bad. It takes the whole year to fix. Let's say 40 hours/week x 50 weeks = 2000 hours. This guy is mad as hell.
- All other incidents take 1 hour to fix, and these guys are super happy.

Conclusion: your MTTR over the year is 21 hours, which is terrible. But, 99% of incidents were solved in less than 8 hours, no problem. The customer satisfaction rating is 99%, but not because your customers are satisfied with a MTTR of 21 hours...it's because most of them DIDN'T experience that long a wait at all! This is why you HAVE to report the proportion of incidents solved in more than 8 hours, which is exactly what you are trying to say (e.g. if 85% of incidents are solved in > 8 hours, then 8 hours is not a reasonable time to expect). If "most" customers wait longer than the SLA time, and "most" customers are happy, THEN you can say that that time is meaningless. This is really a question of implicitly assuming normality to infer probability statements from the mean/standard deviation of a distribution. It is extremely common in a lot of industries, but it often makes no sense. In your case, there is no way the resolve time distribution is normal: it is strictly positive, and probably multi-modal and heavily skewed to the right. You can't get to "most people wait this long" from the mean, at all.
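That hypothetical is easy to check numerically — same invented numbers, one 2000-hour incident plus 99 one-hour fixes:

```python
resolve_hours = [2000] + [1] * 99  # the hypothetical year described above

mean_ttr = sum(resolve_hours) / len(resolve_hours)
share_within_8h = sum(h <= 8 for h in resolve_hours) / len(resolve_hours)

print(mean_ttr)          # ~21 hours: the mean looks terrible
print(share_within_8h)   # 0.99: yet almost every incident beat the SLA
```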
|
# ? Jul 13, 2013 18:04 |
|
ShimaTetsuo posted:I am not a business person so I'm not exactly familiar with all of these terms, but I am a statistician and I can tell you that "MTTR" is not a good metric and it does not show what you want it to show.

A) We have 6 different priorities. P1-P3 are break-fixes of varying severity from "Full Stop for the entire organization" to "One user is unable to function, or several users/a business unit are experiencing minor slowdowns". P4-P5 are Service Requests (New functionality requested). We have an open-ended priority for Projects that are initiated through our Service Desk. So, the example of one ticket skewing the rest was probably assigned the wrong priority, or (see below) was resolved inappropriately.

B) We have dozens-to-hundreds of tickets going through each section per month. Enough volume that one outlier doesn't skew the numbers an unacceptable amount. Internally, each manager gets a separate report that shows the top 10 highest TTR tickets, too.

We've talked about using Median Time To Resolve, but it's a big organization, and I've found that it takes time for people to understand and then get comfortable with doing things differently than we were before. There's a limit to how fast we can implement some of the best practices. For now, for the maturity level in Balanced Scorecard that we're at, the way we're calculating is "good enough", considering that we're still fighting issues like Data Center Engineering resolving the customer's issue, then holding the ticket in "In Progress" until they find the root cause, which may be hours, days or weeks after they implement a 5 minute resolution that gets the customer back in service. In IT Service Management-speak, they're conflating "Incident Resolution" with "Problem Management". But none of that relates to an Excel question, so as interesting as I find the process and the journey to be, I'll shut up now.
|
# ? Jul 14, 2013 00:57 |
|
Kenny Rogers posted:There are some other factors that help us get closer to having numbers that are close enough. I work for a company that provides remote management services for a variety of customers. The contracts require us to represent our time to resolve in different ways depending on the customer so we use: % in service level, mean time to resolve, mean time to resolve (excluding outliers over X hundred hours), and median time to resolve. These are all different ways to deal with a few statistical anomalies skewing the numbers and coming up with a value the customer can wrap their heads around.
|
# ? Jul 14, 2013 13:29 |
|
|
|
Old James posted:I work for a company that provides remote management services for a variety of customers. The contracts require us to represent our time to resolve in different ways depending on the customer so we use: % in service level, mean time to resolve, mean time to resolve (excluding outliers over X hundred hours), and median time to resolve. These are all different ways to deal with a few statistical anomalies skewing the numbers and coming up with a value the customer can wrap their heads around. We're aiming to get there, but at the moment we're at a maturity point where we're trying to get the Senior Network Admin to understand that Management doesn't care about uptime in percent (99.997%, including scheduled maintenance windows), which is what the tool they're using reports for one of their metrics, but that they should be reporting downtime in minutes and that they shouldn't count downtime during maintenance windows, because those are advertised, and the customer expects downtime during (all of) them, so they're not actually an outage. On the bright side, we've gotten more done in the last 6 months since I took on this technical liaison role than we'd had in the last couple of years. I think we're going to be in pretty good shape this time next year.
|
# ? Jul 14, 2013 20:47 |