@mitchel stangl said:
These are examples of CDs done with Layout:
[attachment=0]0811-GA-example-1.png[/attachment]
So if I understand your drawing, the mouse enters the trap......
@dale said:
Looked up Alan Kay. I guess among many other things, he's one of the Xerox team we can thank for the Mac (and in a sense because of the success of that graphical interface, Windows also).
Oh, and so much more. The whole field of Object Oriented Programming, for example (yes, I know about Simula and Lisp, but Smalltalk was the first fully OOP system and is still the only one good enough to be worth the effort of critiquing), and a non-trivial part of the entire portable computing idiom (his 1979 PhD thesis covered the design, building, programming and documenting of a portable personal computer. In 1979).
@dale said:
I ran across something, I believe it was in "Wired", on his idea that potentially coding can be reduced from thousands of lines to a few and achieve the same results, and his interest in computers that learn. Not only is he a significant contributor to the computing field but a real visionary.
That's part of the power of OOP, when done right. You only program the parts that are different to what you already have code for.
@dale said:
You run in a good crowd. (Think he'd be interested in a little Ruby work on the side?)
Don't think so; Ruby has its place, but it isn't in the same league as Smalltalk. Not even close. Take a look at http://www.vpri.org/index.html for what he's up to right now.
@dale said:
So there is really no way to anticipate how to prepare for the future?
Perfect straight line - there is an old aphorism relating to this that was coined by an old friend of mine by the name of Alan Kay (look him up - a major, major contributor to the modern world of computing):
"The best way to predict the future is to create it"
And possibly the biggest single driver of the hardware world these days is gaming. Anything that will make games run faster will get researched and developed and sold cheaply. And porn. The entire point of building out a global broadband system was to distribute porn; it's just an inevitable result of any new technology. As soon as paint was invented, porn. Telegraph, porn. Cameras - porn.
So I think that apps like SU will benefit massively from these two drivers, since games need ways to create the worlds and characters in them, and porn needs... bandwidth (which means storage and memory and speed generally) and display quality and hands-free UI interaction.
@dale said:
Interesting. I just read up a bit on Moore's law, and for anyone who, like me, doesn't know anything about it, it is an observation made by Gordon Moore, co-founder of Intel, in 1965 that the number of transistors per square inch of integrated circuit had doubled every two years since the integrated circuit was invented. His prediction was that this would continue. (He has since apparently altered it to every 18 months.)
That's the guy. Interesting how a simple observation and hypothesis that it might continue for a while turns into a law that the whole industry works frantically to keep going. For many years people assumed that it meant that computer speed would double each time. Not necessarily.
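Just to put rough numbers on the difference between those two doubling periods, here's a quick back-of-envelope sketch in Ruby (purely illustrative - the ten-year window is arbitrary):
[code]
# Transistor-count growth over an arbitrary ten-year window for the two
# doubling periods mentioned above (2 years vs 18 months).
years = 10.0
[2.0, 1.5].each do |period|
  factor = 2**(years / period)
  puts "Doubling every #{period} years: roughly #{factor.round}x in #{years.to_i} years"
end
# => roughly 32x at 2 years, roughly 102x at 18 months
[/code]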
@dale said:
What I found most interesting, besides the heat limitation, is that there appear to be two other limits. Please correct me if I'm wrong.
The way that these transistors, and it seems even multi-cores, work is in parallel, so the workload is split in half.
Not completely correct but close enough unless you really want to study computer architecture and implementation technology. Which you probably don't.
@dale said:
However, in one of the articles I read the author pointed out that most computer operations are sequential, and I think this really applies to rendering algorithms. His example is of a spreadsheet, in that a cell in a spreadsheet relies on the value of another cell, which in turn relies on the value of another cell.
Using this as an analogy for a shot photon in rendering, in that the result of the photon's action relies on the surface characteristics of the object it strikes, this quickly becomes sequential, and the calculations must be exponentially more complex.
Yup. Most algorithms are sequential and that's partly innate and partly a result of a long history of single, sequential CPUs. Why would people spend time on parallel algorithms if there are no parallel machines to run them on! It's also damn hard. There are lots of problems with interlocks - I need this result before I can do this bit, just like a very big and complex building project. Imagine planning the build of an entire nation in a single project. Every detail, every purchase, every single nut and bolt and dab of loctite.
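The spreadsheet example is easy to sketch in a few lines of Ruby (illustrative only - the numbers mean nothing); it makes the interlock problem obvious: the chained version cannot be split across cores, the mapped one trivially can.
[code]
# Spreadsheet-style dependency chain: cell i needs cell i-1, so no amount
# of extra cores can compute cell 999 before cell 998 is finished.
cells = [100.0]
1.upto(999) { |i| cells[i] = cells[i - 1] * 1.01 + 5.0 }

# Embarrassingly parallel by contrast: each result depends only on its own
# input, so the work could be split across any number of cores.
squares = (1..1000).map { |n| n * n }
[/code]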
@dale said:
... does OpenCL seem like a reasonable or at least short term answer? (in the context of rendering anyway) Using both the power of the GPU and CPU, or am I misunderstanding (which is highly possible) this.
Probably OpenCL has value, mostly because letting people outside the fairly small group who normally write graphics card driver software actually do things with the GPUs at least offers a chance of some exciting new uses and algorithms appearing. Whether it will last is another matter - I have a suspicion that if and when massively parallel systems become the norm we will see GPUs pretty much disappear and simply use a few (dozen/hundred) of the pool of CPUs to do the rendering work. Then again, maybe some of the brain structure research will lead to us building systems with thousands of CPUs of dozens of different types, each specialised for a different job. Why not imagine processors specialised for doing sound synthesis (we have DSPs that do some of that now), for polygon to pixel conversion, for floating point work, for string manipulation, for data moving, for watching user input signals and so on? We're used to thinking of a CPU being able to do all of that because it is cheaper with current designs and production technology. It probably won't always be like that.
@honoluludesktop said:
Not sure if this was addressed, but 64 bit means more addressable memory, but speed is a function of the bus width?
A 64 bit word CPU can easily use all 64 bits to address memory if the system is set up to allow all 64 bits on the address bus. More typically I suspect that rather less address range is provided on current actual machines, since 16 exabytes of memory is still a tad unwieldy. The memory data bus can be of various widths. It would be surprising to find a 32 bit data bus in a machine with a 64 bit CPU, but it could be made to function. In the old days it was common to have an 8 bit wide data bus even with a 16 (or even 32) bit CPU. Some machines have 128 bit data busses since that allows filling the CPU caches even faster.
Key thought - they're related but not dependent on each other. You could build an 8 bit CPU that was able to use 48 bit addresses and had a 256 bit wide data bus. You just wouldn't bother these days.
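To put numbers on the address-width side of that (nothing machine-specific, just powers of two):
[code]
# Addressable bytes for a few address widths; 2**64 is the
# "16 exabytes" mentioned above.
[32, 48, 64].each do |bits|
  bytes = 2**bits
  puts "#{bits}-bit addresses: #{bytes} bytes (#{bytes / 2**30} GiB)"
end
# 32 -> 4294967296 bytes (4 GiB)
# 48 -> 281474976710656 bytes (262144 GiB, i.e. 256 TiB)
# 64 -> 18446744073709551616 bytes (17179869184 GiB, i.e. 16 EiB)
[/code]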
@honoluludesktop said:
Doesn't dedicated graphics memory still need to be addressed by the computer, and thus the OS? I read elsewhere that if you have 2GB on a graphics card, that amount of address space is not available to the CPU - a big chunk of the memory in a 32 bit system. Is that right, since the graphics card still needs to run conventional programs? Is the CPU slower, as in clock cycles?
Not in any system I am familiar with. Dedicated graphics card memory isn't normally part of the CPU's address space at all but is used for texture bitmaps, drawing lists, workspace for the GPU algorithms etc. You'd typically find the icons and backdrop for your screen in there too. Shared graphics memory (often referred to as 'integrated') does exist in the CPU address space and certainly results in a slower overall system. It's cheaper, and that is why it gets used. Of course, if your system is fast, integrated graphics may well not result in any noticeable slowdown and be considered adequate. The hardware world is forever cycling between separate and integrated graphics systems as the technology for each changes and the balance moves one way or the other.
@honoluludesktop said:
I remember when the computer bus was ISA(?), and special graphic cards were made so that Cad programs could display faster. Isn't this the same thing? As the cost of multi-core comes down, won't this kind of card have less value?
That's pretty much the typical state now except that the old ISA bus went away decades ago and now it's what, PCIe or PCI-X or... whatever.
Multi-core is almost certainly how it will have to be for the future. Although Moore's Law is in no current danger of failing, we have reached a point where making the cycle time of CPUs faster is a real problem. At about the place where we are now you find that the power density - the amount of heat released per square millimetre in this case - gets to where there is no practical cooling fluid that can take away that heat fast enough. So you have to think of another way to use your increasing number of transistors (see Moore's Law above) to improve overall performance. Large caches have been a typical use for ages, since main memory simply isn't anywhere near fast enough to supply all the data a modern CPU can chew on per nanosecond. Dual/quad core CPUs are a fairly cheap and nasty way of improving things a little because they still only have the single main memory and bus. You rely utterly on the caches to keep it all running.
Ironically, back in the dawn of time (well, 1980-ish) there was a serious attempt to solve this problem, in the days when 1MHz was the high performance mark. The Transputer was a CPU that was intended to have a little memory connected but a lot of other Transputers to talk to. I had a 128-CPU machine in my office back when I was an IBM Research fellow working on an interactive solid CAD system in '84. These days I suspect you could fit 128 Transputer-like CPUs, and a few MB of memory for each one, onto a single chip. The practical problems were two-fold -
a) parallel programming is hard and programmers are lazy. So everyone wanted to avoid thinking about it.
b) Intel had their 8086 system and a lot of money and wanted to crush everyone else. They threw many, many billions at the problems and made single CPUs run faster and faster.
Result - the Transputer went away and Intel won the war for the desktop CPU. Now we need to go back to the issue of massively parallel systems. It's what, 5 years since 3GHz CPUs appeared, and we're still stuck there. Dual cores and even 8-core machines simply haven't improved performance that much - over that time we would have hoped for about an 8-fold improvement, and arguably should have seen a change to hundred-plus core machines and a 20-plus-fold gain.
@gaieus said:
There will never be a SU version without bugs.
Old software geek's saying -
The software is only finished when the last customer dies
@solo said:
I realised half my day to day software was not Mac compatible,
If there is really a crucial app that only runs on Windows, run it under VMware etc.; that way, when the Windows partition gets virused, you just start a copy of the VM setup. I've not discovered any apps that make me want to do that; I tried DoubleCAD XT and really, really, don't like it.
@solo said:
SU was very hard to get used to, the video card that came with the iMac was pathetic,
Mine has a 4850 with half a gig of RAM; it drives the 24" monitor and a second 20" panel perfectly well.
@solo said:
the ram expansion was limited not to mention overpriced,
Mine came with 2GB but I spent a whole $100 for 4GB and used the original in my MacBook. New ones come with 4GB and can take 16GB.
@solo said:
and it hated my network and would jump to my second network (kids monitored one) whenever it felt like a change,
Not heard of that problem before. Perhaps it didn't like to associate with nasty windows machines?
@solo said:
not forgetting the mouse that came with it which had a nipple and zoomed everywhere when you touched it or would show the desktop gadgets on the slightest finger slip
You can trivially turn off the side buttons, and the scroll ball is actually extremely useful when orbiting and zipping around a model.
@solo said:
oh if you like a tidy desktop, then you need to clean it regularly as everything lands there by default
No, actually it doesn't. Or at least not on any of mine. Downloads go where you set it to put them. Saved models etc go where you tell them to.
We should all use tools that make us feel comfortable and capable, and clearly Macs don't do it for you. But it's smart to choose tools based on actual facts rather than rumours or other people's issues or what one reads in PCWorld etc. I've used Windows machines (along with assorted unices, linuxes, weird British machines, mainframes, embedded OSs and even a couple I helped to write) all the way back to 3.0, and I just don't have the patience to deal with the annoyances anymore. My work is no longer to fight the OS but to use various tools for a different level of work. OS X is the least annoying of the available choices and so that's what I use.
It's nothing revolutionary; a slightly faster CPU than the Palm Pre/iPhone 3GS (which is totally unsurprising six months later), a rather odd trackball input, a very ho-hum physical design, a screen that is technologically interesting but apparently awful in bright light, etc etc. Certainly nothing to make me feel like replacing my old iPhone any time soon. Come July there will likely be a new iPhone model which will leapfrog this CPU, and of course then a newer Nexus-whatever which will leapfrog that, ad infinitum.
The good news is that high performance portable data devices are now well entrenched in the market and the cellphone companies are damn well going to have to make their networks cope with them or lose business. Eventually there will be a huge range of superphones to suit pretty much anyone, from the current iPhone fan to the die-hard commandline aficionado, from the untechiest of grandparents to the k0001357k1d2.
They're all ARM powered, of course. Did you know that ARM expect to sell 6 billion CPUs this year? 4 billion last year? It is claimed that there are more ARM CPUs around than all the other types combined (I'm assuming that's restricted to 32 bit devices, since about eleventy-twelve gazillion 8 bit uCs have been sold). We never expected that when we started the project in the late '80s.
Short version of politics -
Extreme left: my friends and I should be in charge, and rich.
Extreme right: my friends and I are rich, and should be in charge.
End results are often startlingly similar.
Nice drawings Paul. I just don't understand people that try to tell me SketchUp can't do 'real' drawings. It's simply a matter of getting some practice with making proper use of the tools. You might possibly be able to argue that more powerful tools are needed in SU/LO, and that's fine, but I've done the major drawings for our new house entirely in SU/LO with no real problems other than hitting a few interesting bugs.
I do look forward to the dimensioning in LO getting better (it's good enough to be worth criticising right now) and to the inclusion of some better tools for making section views better looking without quite so much work.
@escapeartist said:
Thought that was photochopped, guess not...
As a general point, never trust anything you read in the Daily Fail - not even the date on the masthead. In fact, avoid all 'red tops' and treat the Daily Express with disgust, the Telegraph with disdain (except for the motoring section for some reason) and the Times with minimal trust.
@mike lucey said:
I saw something on Discovery a few days ago that concerned me. Some guy in the USA, it sounded like 'Ray Kurtfield' but I could not source the guy.
Ray Kurzweil. See http://en.wikipedia.org/wiki/Raymond_Kurzweil
Most of the people I know that know him think he's a bit nuts.
Well, well.
One of the components had managed to get a definition name (the component name) of '', i.e. blank. This makes sense of the crash log appearing to have a problem in some code relating to adding a symbol.
So the immediate solution is to provide a proper name for the component, which is easy enough. But the more serious problems are that the exporter shouldn't crash that badly, the integrity checker that checks the model ought to have picked it up sooner and warned me, and perhaps even more importantly there should be no way you can delete the name of a component. You can't have duplicate names, and you shouldn't be able to enter any other sort of invalid name either.
I've noticed that the keyboard focus occasionally seems to move to the component name field of the Entity Info window or the Outliner window, and then using a hotkey (like 'm' for example) results in you having a component called m. Easy to miss when working away. Shouldn't happen.
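For anyone who wants to check their own model for the same problem, something along these lines in the Ruby console should find (and rename) blank-named component definitions. It's a rough sketch using the standard SketchUp Ruby API; the 'FixedComponent' placeholder name is just my invention.
[code]
# Run in the SketchUp Ruby console: list and rename blank-named definitions.
model = Sketchup.active_model
model.definitions.each do |defn|
  next if defn.group? || defn.image?   # groups and images get names automatically
  next unless defn.name.strip.empty?   # only the blank-named ones
  puts "Blank-named definition with #{defn.instances.length} instance(s)"
  defn.name = model.definitions.unique_name("FixedComponent")
end
[/code]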
If Barry or any other googleSU person is reading and interested I can supply models with and without the error.
Progress report for anyone interested - it's the roof component. With that layer removed and the model purged, I can export to DWG. Delete every other layer, purge, save to a new file and try to export - boom!!!
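For the record, that elimination test boils down to something like this in the Ruby console (run it on a copy of the model; the "Roof" layer name and output path are placeholders, it only sweeps top-level entities, and Model#export hands the job to whatever DWG exporter Pro has installed):
[code]
# Strip everything on one suspect layer, purge, then retry the DWG export.
model = Sketchup.active_model
suspect = model.layers["Roof"]                    # placeholder layer name
doomed = model.active_entities.select { |e| e.layer == suspect }
model.active_entities.erase_entities(doomed) unless doomed.empty?
model.definitions.purge_unused
model.materials.purge_unused
model.layers.purge_unused
model.export("/tmp/layer_test.dwg", true)         # true shows the export summary
[/code]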
So I wonder what is wrong with the roof that upsets export but not LO or any of the other tools I normally use? More on this channel as the news breaks, later.
Hmm, interesting results folks. Of course the irony is that the 'elevations' scene is really the one most needed since it is the entire model of the house!
I wonder what has changed in the last three months that could be causing such a problem. Not that there is any meaningful way to automatically compare two versions of a model.... most of the recent changes that I recall were minor alterations in the details of dimensions (changing size of ICF blocks, moving the odd rafter, stuff like that) rather than major additions.
Thanks for testing it, guys. The really annoying thing is that many other models export perfectly well, including a three month old version of the house. So somewhere in there is a change that has triggered a bug in the exporter code. Sigh. It exports to Collada OK, and to LayOut.... bummer.
@thomthom said:
Have you purged the model?
Have you tried copying the content into another model?
Yes and yes. And I tried deleting any textures and then purging, just in case it was something to do with large texture bitmaps. No luck.
Just tried to export my house model as DWG (various versions, also tried DXF etc.) and it simply blows SU away every time.
Crash log attached, model at http://www.rowledge.org/tim/AshlingHouse/Ashling%20Rd%20Main%2012%20Nov.skp
SketchUp_2009-12-09-162219_Diziet.txt
I can export other models OK, so there is presumably something about this model that upsets the exporter plugin. If anyone knows the answer - or can actually send me the exported file - that would be nice, since somebody actually needs it soon!