Title: Power Efficient Software Coding Id: 61882, Count: 1 Tags: Answers: 17 AcceptedAnswer: null Created: 2008-09-15 05:06:06.0 Body:

In a typical handheld/portable embedded device, battery life is a major concern in the design of the H/W, the S/W and the features the device can support. From the software programming perspective, one is aware of MIPS- and memory- (data and program) optimized code. I am aware of the H/W deep sleep and standby modes that are used to clock the hardware at lower cycles or to turn off the clock entirely to unused circuits to save power, but I am looking for some ideas from this point of view:

Given that my code is running and needs to keep executing, how can I write that code "power" efficiently, so that it consumes the minimum number of watts?

Are there any special programming constructs, data structures or control structures which I should look at to achieve minimum power consumption for a given functionality?

Are there any S/W high-level design considerations which one should keep in mind at the time of code structure design, or during low-level design, to make the code as power efficient (least power consuming) as possible?

Popularity: 88.0 Answer #61889, count #1, created: 2008-09-15 05:17:40.0

Do not poll. Use events and other OS primitives to wait for notifiable occurrences. Polling keeps the CPU active and drains the battery faster.
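As a rough illustration (POSIX threads; the producer/consumer names below are invented for the example), blocking on a condition variable lets the core idle until work actually arrives, where a busy-wait on a flag would keep it spinning:

#include <pthread.h>
#include <stdbool.h>

/* Sketch only: wait for work with a condition variable instead of polling a flag. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  work_ready = PTHREAD_COND_INITIALIZER;
static bool have_work = false;

void *consumer(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!have_work)
            pthread_cond_wait(&work_ready, &lock);   /* thread sleeps here, CPU can idle */
        have_work = false;
        pthread_mutex_unlock(&lock);
        /* ... handle the event ... */
    }
}

void submit_work(void)
{
    pthread_mutex_lock(&lock);
    have_work = true;
    pthread_cond_signal(&work_ready);                /* wake the consumer only when needed */
    pthread_mutex_unlock(&lock);
}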

Answer #61903, count #2, created: 2008-09-15 05:29:13.0

Use the network interfaces as little as you can. You might want to gather information and send it out in bursts instead of constantly sending it.

Answer #61911, count #3, created: 2008-09-15 05:37:13.0

Zeroth, use a fully static machine that can stop when idle. You can't beat zero Hz.

First up, switch to a tickless operating system scheduler. Waking up every millisecond or so wastes power. If you can't, consider slowing the scheduler interrupt instead.

Secondly, ensure your idle thread executes a power-save, wait-for-next-interrupt instruction. You can do this in the sort of under-regulated "userland" most small devices have.

Thirdly, if you have to poll or perform user-confidence activities like updating the UI, sleep, wake up, do the work, and go back to sleep.

Don't trust GUI frameworks that you haven't checked for "sleep and spin" kind of code. Especially the event timer you may be tempted to use for #2.

Block a thread on a read instead of polling with select()/epoll()/WaitForMultipleObjects(). This puts stress on the thread scheduler (and your brain), but the devices generally do okay. It ends up changing your high-level design a bit, and it gets tidier! A main loop that polls for all the things you might do ends up slow and wasteful on CPU, but it does guarantee performance (guaranteed to be slow).

Cache results, lazily create things. Users expect the device to be slow so don't disappoint them. Less running is better. Run as little as you can get away with. Separate threads can be killed off when you stop needing them.

Try to get more memory than you need; then you can insert into more than one hashtable and avoid ever searching. This is a direct tradeoff if the memory is DRAM.

Look at a realtime-ier system than you think you might need. It saves time (sic) later. They cope better with threading too.

Answer #61912, count #4, created: 2008-09-15 05:42:28.0

And read some other guidelines. ;)

Recently a series of posts called "Optimizing Software Applications for Power" started appearing on Intel Software Blogs. It may be of some use for x86 developers.

Answer #61971, count #5, created: 2008-09-15 07:23:54.0

From my work using smart phones, the best way I have found of preserving battery life is to ensure that everything you do not need for your program to function at that specific point is disabled.

For example, only switch Bluetooth on when you need it, similarly the phone capabilities, turn the screen brightness down when it isn't needed, turn the volume down, etc.

The power used by these functions will generally far outweigh the power used by your code.

Answer #62073, count #6, created: 2008-09-15 10:02:32.0

Look at what your compiler generates, particularly for hot areas of code.

Answer #62107, count #7, created: 2008-09-15 10:29:07.0

If you have low-priority intermittent operations, don't use specific timers to wake up and deal with them; deal with them when processing other events.

Use logic to avoid stupid scenarios where your app might go to sleep for 10 ms and then have to wake up again for the next event. For the kind of platform mentioned it shouldn't matter if both events are processed at the same time. Having your own timer & callback mechanism might be appropriate for this kind of decision making. The trade off is in code complexity and maintenance vs. likely power savings.

Answer #152989, count #8, created: 2008-09-30 13:19:56.0

Simply put, do as little as possible.

Answer #153049, count #9, created: 2008-09-30 13:39:20.0

Also, something that is not trivial to do is to reduce the precision of the mathematical operations: go for the smallest data type available and, if your development environment supports it, pack data and aggregate operations.

Knuth's books can give you all the variants of specific algorithms you need to save memory or CPU, or to go with reduced precision while minimizing the rounding errors.

Also, spend some time checking all the embedded device APIs - for example, most Symbian phones can do audio encoding via specialized hardware.

Answer #223863, count #10, created: 2008-10-21 22:59:21.0

Well, to the extent that your code can execute entirely in the processor cache, you'll have less bus activity and save power. To the extent that your program is small enough to fit code+data entirely in the cache, you get that benefit "for free". OTOH, if your program is too big and you can divide it into modules that are more or less independent of each other, you might get some power saving by splitting it into separate programs. (I suppose it's also possible to make a toolchain that spreads out related bundles of code and data into cache-sized chunks...)

I suppose that, theoretically, you can save some amount of unnecessary work by reducing the number of pointer dereferences, and by refactoring your jumps so that the most likely jumps are taken first -- but that's not realistic to do as a programmer.

Transmeta had the idea of letting the machine do some instruction optimization on-the-fly to save power... But that didn't seem to help enough... And look where that got them.

Answer #774264, count #11, created: 2009-04-21 19:37:35.0

Avoiding polling is a good suggestion.

A microprocessor's power consumption is roughly proportional to its clock frequency, and to the square of its supply voltage. If you have the possibility to adjust these from software, that could save some power. Also, turning off the parts of the processor that you don't need (e.g. floating-point unit) may help, but this very much depends on your platform. In any case, you need a way to measure the actual power consumption of your processor, so that you can find out what works and what doesn't. Just like speed optimizations, power optimizations need to be carefully profiled.
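A back-of-envelope sketch of that relationship; the effective capacitance and the two operating points below are invented purely to show the arithmetic, since the real constants come from your particular silicon:

#include <stdio.h>

/* Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
 * The capacitance value is made up; only the ratio between the
 * two operating points is meaningful here. */
static double dynamic_power(double volts, double freq_hz)
{
    const double c_eff = 1e-9;   /* assumed effective switched capacitance, farads */
    return c_eff * volts * volts * freq_hz;
}

int main(void)
{
    double p_fast = dynamic_power(1.2, 600e6);   /* full speed             */
    double p_slow = dynamic_power(1.0, 300e6);   /* scaled-down DVFS point */

    printf("full speed : %.3f W\n", p_fast);
    printf("scaled down: %.3f W (%.0f%% of full)\n", p_slow, 100.0 * p_slow / p_fast);
    return 0;
}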

Answer #875963, count #12, created: 2009-05-18 01:54:22.0

Set unused memory or flash to 0xFF, not 0x00. This is certainly true for flash and EEPROM; I'm not sure about SRAM or DRAM. For the PROMs there is an inversion, so a 0 is stored as a 1 and takes more energy, while a 1 is stored as a 0 and takes less. This is why you read 0xFFs after erasing a block.

Answer #1464600, count #13, created: 2009-09-23 08:03:31.0

Do your work as quickly as possible, and then go to some idle state waiting for interrupts (or events) to happen. Try to make the code run out of cache with as little external memory traffic as possible.

Answer #1883871, count #14, created: 2009-12-10 20:41:25.0

On Linux, install powertop to see how often which piece of software wakes up the CPU. And follow the various tips that the powertop site links to, some of which are probably applicable to non-Linux, too.

http://www.lesswatts.org/projects/powertop/

Answer #3594138, count #15, created: 2010-08-29 08:50:41.0

Choose efficient algorithms that are quick and have small basic blocks and minimal memory accesses.

Understand the cache size and functional units of your processor.

Don't access memory. Don't use objects or garbage collection or any other high-level constructs if they expand your working code or data set beyond the available cache. If you know the cache size and associativity, lay out the entire working data set you will need in low-power mode and fit it all into the dcache (forget some of the "proper" coding practices that scatter the data around in separate objects or data structures if that causes cache thrashing). Same with all the subroutines. Put your working code set all in one module if necessary to fit it all in the icache. If the processor has multiple levels of cache, try to fit in the lowest level of instruction or data cache possible. Don't use the floating-point unit or any other instructions that may power up optional functional units unless you can make a good case that use of these instructions significantly shortens the time that the CPU is out of sleep mode.

etc.
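As a small illustration of the working-set advice above (the field names, sizes and the 64-byte line size are assumptions, and the alignment attribute is GCC syntax): keeping the hot state in one contiguous, cache-line-aligned block makes it far more likely to stay resident in the dcache than the same fields scattered across separately allocated objects.

#include <stdint.h>

/* Hypothetical example: everything the low-power loop touches lives in one
 * contiguous block instead of being spread over several heap objects. */
struct hot_state {
    uint32_t sample_count;
    uint32_t last_sample;
    uint16_t filter_coeff[8];
    uint8_t  flags;
    uint8_t  pad[7];                 /* keep the layout predictable */
} __attribute__((aligned(64)));      /* assume 64-byte cache lines */

static struct hot_state hot;         /* the entire idle-time working set, in one place */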

Answer #11036453, count #16, created: 2012-06-14 15:31:08.0

Rather timely, this: an article on Hackaday today about measuring the power consumption of various commands: Hackaday: the-effect-of-code-on-power-consumption

Aside from that:
- Interrupts are your friends
- Polling / wait() aren't your friends
- Do as little as possible
- make your code as small/efficient as possible
- Turn off as many modules, pins, peripherals as possible in the micro
- Run as slowly as possible
- If the micro has settings for pin drive strength, slew rate, etc. check them & configure them, the defaults are often full power / max speed.
- returning to the article above, go back and measure the power & see if you can drop it by altering things.

Answer #11050925, count #17, created: 2012-06-15 12:47:27.0

Don't poll, sleep

Avoid using power-hungry areas of the chip when possible. For example, multipliers are power hungry; if you can shift and add, you can save some joules (as long as you don't do so much shifting and adding that the multiplier would actually have been a win!)
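Purely as an illustration of the shift-and-add idea (on many cores the hardware multiplier is already the cheaper option, and a decent compiler performs this transformation itself when it pays off):

#include <stdint.h>

/* x * 10 rewritten as x * 8 + x * 2; whether this saves any energy
 * depends entirely on the target core, so measure before committing. */
static inline uint32_t mul10_shift_add(uint32_t x)
{
    return (x << 3) + (x << 1);
}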

If you are really serious, get a power-aware debugger, which can correlate power usage with your source code. Like this

Title: Turn off power saving options via command line Id: 254066, Count: 2 Tags: Answers: 4 AcceptedAnswer: 254131 Created: 2008-10-31 16:14:58.0 Body:

On Windows XP, the following command in a script will prevent any power saving options from being enabled on the PC (monitor sleep, HD sleep, etc.). This is useful for kiosk applications.

powercfg.exe /setactive presentation 

What is the equivalent on Vista?

Popularity: 153.0 Answer #254112, count #1, created: 2008-10-31 16:28:42.0

In Vista you create a power profile and use the command-line powercfg to select that profile; see here

Answer #254117, count #2, created: 2008-10-31 16:31:37.0

C:\Windows\system32>powercfg /list

Existing Power Schemes (* Active)

Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e (Balanced) *

Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c (High performance)

Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a (Power saver)

C:\Windows\system32>powercfg /setactive a1841308-3541-4fab-bc81-f71556f20b4a

Answer #254131, count #3, created: 2008-10-31 16:36:11.0

powercfg.exe works a little differently in Vista, and the "presentation" profile isn't included by default (at least on my machine), so you can set up a "presentation" profile and then use the following to get its GUID

powercfg.exe -list 

and the following to set it to that GUID:

powercfg.exe -setactive GUID 

Alternatively, you can use powercfg.exe with the -change or -X option to change specific parameters on the current power scheme.

Snippet from "powercfg.exe /?":

-CHANGE, -X Modifies a setting value in the current power scheme.

 Usage: POWERCFG -X <SETTING> <VALUE>

 <SETTING> Specifies one of the following options:
   -monitor-timeout-ac <minutes>
   -monitor-timeout-dc <minutes>
   -disk-timeout-ac <minutes>
   -disk-timeout-dc <minutes>
   -standby-timeout-ac <minutes>
   -standby-timeout-dc <minutes>
   -hibernate-timeout-ac <minutes>
   -hibernate-timeout-dc <minutes>

 Example: POWERCFG -Change -monitor-timeout-ac 5
 This would set the monitor idle timeout value to 5 minutes when on AC power.
Answer #258777, count #4, created: 2008-11-03 14:28:25.0

Setting a value to never can be done by passing a value of 0 to the -change option, i.e.:

powercfg.exe -change -monitor-timeout-ac 0 

means the monitor timeout will be set to "Never". So the presentation plan can be achieved via:

powercfg.exe -change -monitor-timeout-ac 0
powercfg.exe -change -disk-timeout-ac 0
powercfg.exe -change -standby-timeout-ac 0
powercfg.exe -change -hibernate-timeout-ac 0
Title: Reducing power consumption on a low-load server running WinXP? Id: 410122, Count: 3 Tags: Answers: 3 AcceptedAnswer: 410188 Created: 2009-01-04 00:02:15.0 Body:

I am using a quad-core Windows XP based Dell machine in my office as a server for an application that I am developing and for occasional work over a remote desktop connection.

The machine is typically under very light load, running a MySQL server with few connections and a few Java processes that make a database connection every few minutes.

When I log into it or when the occasional user submits a request to the server, there is a need for more processing power.

Is there some way to get it to consume less power but still continue running 24x7? On my Mac Pro, for example, I can (manually) shut down a few CPUs which I've noticed affects power consumption. What are my options on XP?

I realize that this is not directly a programming question, but I'm sure somebody here has a computer running on a similar usage profile.

Popularity: 8.0 Answer #410188, count #1, created: 2009-01-04 00:52:05.0

A good first step is to use the Power tool in Control Panel (powercfg.cpl) to make a custom profile that will power down screen, disks, etc. after a reasonable idle period. Don't make periods too short or re-powering up the device may waste more power than was saved by turning it off; especially important for hard disks.

Disable any services (and possibly devices too) that are not required in your configuration (e.g. Content Indexer, WebClient).

Also make sure that your BIOS settings allow your processor clock to go as slow as possible. Some motherboard device drivers take their settings from the BIOS. Turn off any overclocking software and reset those settings to "auto" or "default".

Use devmgmt.msc to check the power settings for each device driver. Some drivers are dumb and don't allow controlling the power to their devices, but most USB hosts/hubs do.

Answer #865230, count #2, created: 2009-05-14 19:32:59.0

There are some motherboards from various manufacturers that provide power management tools that can be used from within windows. Perhaps this would be the best place to start. Post more about your system specs and maybe I can be more specific.

Answer #15798640, count #3, created: 2013-04-03 21:38:43.0

I know this is late for this post, but hopefully it helps someone at some point in the future...

In response to turning off cores in XP to conserve power, there is a built-in solution from MS. Go to Start, Run, then type msconfig and hit Enter. After the System Configuration Utility window opens, click on the BOOT.INI tab and click the Advanced Options button. You can effectively limit the number of active cores by checking the box next to "/NUMPROC=" and entering a number in the field to the right. The number that you enter will be the new number of active cores on your processor (after a reboot, of course...). Set it to whatever you like and reboot the PC. After the reboot, open up Task Manager and under the Performance tab you should see fewer graphs in the CPU Usage History window (each graph represents an active core; check this before you make the initial change and it will show all the cores that your system is using). Check Task Manager again after you make the change in msconfig and reboot the PC to verify that the change has taken effect.

Again, I know this is an old post but I figured that someone eventually will come looking at this page and hopefully it will be of some service to them.

Title: Energy efficient application development Id: 422539, Count: 4 Tags: Answers: 29 AcceptedAnswer: 423121 Created: 2009-01-07 22:59:21.0 Body:

Are there any tools and techniques to build more energy efficient applications? Any articles, rules or best practices for green programming?

Popularity: 121.0 Answer #422573, count #1, created: 2009-01-07 23:11:49.0

Interesting question - the first thought that occurs to me is going lighter on CPU is more energy efficient (as the CPU runs cooler and fans don't need to come on - plus on some OSes an idle CPU goes into a more energy efficient state). So eliminating things like busy waits for example, and optimising very CPU intensive algorithms may help.

Similarly anything that can be done to wring more life out of an old server may be more energy efficient in the 'green' sense (not having to build and ship and package a new server, not needing to dispose of the old one, etc).

(I always thought the climate simulations running on grid computers were a bit ironic - I ran one for a while on my Mac and it always made the CPU temp go up, the fans come on, and the power supply run hotter - to say nothing of the CO2 emissions from all that :-)

Answer #422603, count #2, created: 2009-01-07 23:22:07.0

Recently, while preparing a seminar on Design Patterns, I came across an article (can't find it now) that looked at design patterns and, specifically, the Bridge pattern IIRC, in the context of energy efficiency in portable devices. I do not think, however, that many people take this seriously, since we live in a world where to get a more energy-efficient hardware platform, you just wait for one.

If we are talking about battery life and such, then obviously any sort of optimization which makes us use less instructions is better. Also, doing things in RAM instead of swapping to HDD might be more energy-efficient.

Answer #422647, count #3, created: 2009-01-07 23:39:24.0

Borland C++ Builder 5 and 6 had a horrendous inefficiency for many, many years that caused compilation to run about 10 times slower than necessary.

A rough estimate of number of programmers (20,000), minutes/day of wasted CPU time (15), PC power consumption (150 W) and lifespan of the bug (5 years) gave me a figure of about 93 Megawatt hours, or 93 tons of CO2

All from one tiny bug...

So, not really an answer, but I think your question is a highly valid one!

Answer #422673, count #4, created: 2009-01-07 23:45:57.0

I'm going to answer your question with my own: Is there enough benefit for the cost of green programming?

My knee jerk reaction is that green programming would violate the 80-20 rule, and we could be getting much better green results elsewhere (such as switching to a processor that draws 20 W instead of 47 W)

Good question nonetheless!

Answer #422683, count #5, created: 2009-01-07 23:50:44.0

I've not come across much at the application level, but at the DB level there is a class of databases used in sensor networks. These are typically battery powered and in remote locations, so power consumption is an important factor.

Sensor DBs are usually in a snooze mode and 'wake' when they need to collect data or when they transmit. Without reading the papers about them, this stuff seems very 'embedded' and I've no idea on metrics, but I would guess power consumption would be higher when:

  1. thrashing the hard drive
  2. doing heavy mathematical calculations
Answer #422707, count #6, created: 2009-01-08 00:02:41.0

At application level, many times caching is overlooked: if you already have a fresh result, you don't have to recompute it. Less computation -> lower power consumption; also, usually less data transferred (-> lower power consumption - infinitesimal, but it adds up). Of course, caching logic has to be lighter than the computation itself, otherwise it defeats the purpose.

For example, in HTTP this can be done with conditional queries - yet rarely do you see it (it can be tricky to keep track of response freshness).

Answer #422752, count #7, created: 2009-01-08 00:24:52.0

The Less Watts site is a good place to start.

Answer #422817, count #8, created: 2009-01-08 00:51:51.0

At the IPCV 2008 conference, I saw a presentation that included data for calculating the cost of each instruction on a mobile device in units of power (I believe it was picowatts). Given that as a starting point, I imagine you could make an optimizer that reduced cost simultaneously in computation and power. I don't think it's trivial, but it is certainly possible.

Answer #422821, count #9, created: 2009-01-08 00:53:56.0

Eliminate the servers, P2P rules :)

  1. More efficient use of available machines. They have enough storage and networking capacity that it should be simple to get rid of the big iron.

  2. Make sure the backlight of the monitor is out when the machine is not used. Spin down the disk. Not really at application level.

  3. If you take a look at current architectures, processing is happening too far away from memory. Going off-chip takes an awful lot of power. If you just count chips in your PC, you'll notice there are more memory chips (8 or 16) than processing ones (CPU+GPU). It makes sense to move the processing towards the memory, but that means a non-unified memory model, which is difficult to program. Basically, a grid inside your PC.

Answer #422823, count #10, created: 2009-01-08 00:54:47.0

Aside from the absurd solution where you just don't compute (and therefore you don't need the electricity...)

The aim is simply to do the most work with the least amount of energy. Outside of the hardware realm (where you just lower the voltage and frequency), you just have to make sure the work can be done in the least amount of actual execution time.

This is probably a pain to measure because of OoO Execution, etc etc.

I wouldn't lose sleep over it, as long as you use efficient algorithms and such, and not bloat your code unnecessarily.

Answer #422919, count #11, created: 2009-01-08 01:39:32.0

One generalized answer to the question is don't poll for changes or periodically scan for something to do. Use operating system notifications to process information on an interrupt driven basis.

When you do process information, do it efficiently and as a 'bulk' endeavour. One big write to HDD is better than several smaller writes, as the disk system can switch to lower power states sooner. The same applies to the use of CPUs, hardware radios, etc.

While virtually meaningless on most hardware connected to the grid, platforms such as laptops and especially mobile phones can benefit quite a bit from applications designed for power efficiency (use the CPU and system resources as sparingly as possible).

Answer #422966, count #12, created: 2009-01-08 01:59:51.0

An interesting thing to think about - how much CO2 do you produce while optimizing your code, and how much CO2 is it going to reduce? I can imagine that developing software is a quite CO2-intensive process... ;)

Answer #423121, count #13, created: 2009-01-08 03:10:35.0

Wow, the possibilities are endless here:

General Ideas:

  • Compile much less frequently. Compiling is CPU intensive, and CPU's are watt hogs
  • Stop running those crazy unit tests all the time.
  • Heck, shut down that Continuous Integration server! Come on, you really only need to build every couple of days anyway, bottom line.
  • Limit checkins to your source control system. Think of all those watts burning down the wire and on the remote server!
  • Stop commenting your code. (Okay, so that should probably be "Don't start commenting your code".....)
  • Save on Pixels. Set your screen resolution to 640x480. Fewer pixels, fewer watts!
  • Code only in APL.
  • Don't code any more bugs. Bugs require more compiles, more testing, more typing, and more CPU time.
  • If your system allows code folding, keep as much code folded as possible. Saves on monitor output.
  • Keep all your caffeinated beverages at room temperature.
  • Unplug your USB Missile Shooter. Totally forget about the humping dog Memory stick.
  • Take the batteries out of that Light Saber behind your desk and use it without the soothing purple glow.
  • Generally, keep all your numbers small. I'd suggest never using big numbers. Big numbers probably use more energy.
  • Get a mouse-ball based mouse. That red light has to be burning watts.

Since keystrokes require energy to process and display:

  • Many languages let you put multiple statements on one line. Do that as much as possible.
  • Pursuant to that, never break long lines of code into multiple lines.
  • Don't refactor. Too often this creates more code that you have to scroll through.
  • Name all variables with as few letters as possible. Preferably one letter. Generally two. Three max.
  • Stop putting blank lines in between lines of code. Saves on Enter keystrokes and scrolling!
  • Shoot, eliminate unnecessary white space altogether. When you get right down to it, what exactly does white space do? Nothing, that's what.

That's all I can think of right now. There has to be more stuff like this! Come on guys, let's put our heads together and really go green!

Answer #423671, count #14, created: 2009-01-08 09:21:07.0

Energy efficiency depends on the hardware and the OS that you are running on, as well as the library, that you are using.

If you are doing application development on a typical PC, then your application will be waiting for system messages most of the time. Typical frameworks (e.g. C#, Java, Delphi, Visual Basic) contain the basic functionality for these messages to trigger events that your program will react to (e.g. OnClick). If you stick to this way of programming, there is not much that you need to think about. When your application is waiting for system messages, the OS knows that the application is not busy and can make use of its built-in power saving functionality. Things that you might consider are improvements on the IO part of your program (e.g. combining several file reads or writes into one combined one).

If your application involves a lot of processing (e.g. when you need to create independent threads to handle the processing without freezing the UI) you will need to think more about power efficiency. Optimizing the IO part may be a good starting point.

Things that you should generally avoid are busy loops polling for an event (e.g. change of a bit in a register, or the availability of data from another thread). Use synchronization functions of your OS (e.g. Events, Semaphores, Mutexes, WaitForSingleObject, ...) instead.

In embedded systems there are many more things to consider ( / under your control).

I am working at the moment on a wireless sensor network, where each node will be able to run on small batteries for years. In order to achieve that, two things had to be done.

1) Design very power-efficient hardware by selecting low-power-consumption parts with efficient sleep modes. Many microcontrollers can switch off subsystems when not in use. They also have various power-reduction or sleep modes. During a sleep mode, program execution is stopped while all RAM is retained. Depending on the specific mode, certain subsystems are still alive and will wake up the controller when needed.

2) Program the firmware to make use of low-power modes whenever possible. Switch off sensors/transmitters when not in use. Let the controller sleep as often as possible (e.g. use timers to wake up from programmed delays instead of busy-wait delays; when waiting for external events, program a wakeup interrupt and go to sleep). As a result of these endeavours your firmware becomes much more complex: there are many more states that your program may be in, and error conditions (e.g. timeouts) need to be caught by making use of subsystems.
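A minimal bare-metal sketch of that wake-on-interrupt structure for an ARM Cortex-M-class part; the timer setup and interrupt-flag handling are vendor specific and only hinted at in comments:

#include <stdbool.h>

static inline void cpu_sleep(void)
{
    __asm volatile ("wfi");          /* wait-for-interrupt: core stops until an enabled IRQ fires */
}

static volatile bool timer_fired;    /* set from the timer interrupt handler */

void timer_irq_handler(void)         /* hook into the vendor's vector table */
{
    /* clear the timer's interrupt flag here (register is device specific) */
    timer_fired = true;
}

int main(void)
{
    /* configure a low-power timer to interrupt every N seconds (device specific) */
    for (;;) {
        while (!timer_fired)
            cpu_sleep();             /* sleep between wakeups instead of busy-waiting */
        timer_fired = false;

        /* power up the sensor/radio, do the work, power them back down */
    }
}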

Answer #423997, count #15, created: 2009-01-08 11:48:58.0

Intel have some Energy Efficient Software Guidelines on their site, and links to a wide range of tools for helping with this.

Looks like a good place to start!

Answer #427628, count #16, created: 2009-01-09 10:30:07.0

Use a native code compiler. Yes, you will have to write code to explicitly free used objects. The benefit is that you save the run time environment from having to run a garbage collector that continuously tries to determine which objects aren't in use anymore.

Also, disable compiler options that waste run-time CPU cycles, like range checking. A well tested program must never violate range constraints, so range checking is only needed during debugging.

Answer #427766, count #17, created: 2009-01-09 11:36:03.0

Energy efficiency in programming is when value is maximized and environmental costs are minimized. In other words, it can still be green programming if your application consumes more computer resources but has a shorter time to market, simply because the environmental costs of a late product launch can be much bigger.

Answer #429735, count #18, created: 2009-01-09 21:05:24.0

During the test phase for different operating systems and configurations, use virtualization (virtual machines) instead of adding new computers or servers. Virtualization can also be an alternative when we need to add more servers while the available servers are not yet fully utilized.

Answer #436955, count #19, created: 2009-01-12 20:45:02.0

Of course you probably don't want to purposefully build software that is energy-inefficient, but I think this really misses the point.

Productivity software in general should be considered energy-efficient, because no matter how much energy the program itself requires, it's going to be orders of magnitude less than the real-world task it replaces. So what if your app requires an extra watt to retrieve and display that fancy document from the file server, if it means a person didn't have to get up and retrieve an actual paper document from a filing cabinet. Now there's no energy expended in producing and delivering the paper (or the filing cabinet and office space to hold them both, for that matter).

Even eye-candy UI features can be energy efficient. Maybe your new flashy/pretty whizbang UI feature requires more cpu time, but if it helps someone complete a task in less time than it used to you'll still end up with a net savings. Or maybe it helped drive sales of the software in the first place, such that without the nice graphics people would be using inferior software and therefore less productive.

Using StackOverflow as an example, let's say it was built to be very energy inefficient, such that even simple searches require 4 times as much electricity as a comparable service. But if it saves you from spending an extra hour Googling those other sites it comes out to a net gain.

In other words, if you believe you have a quality product, then you have an energy-efficient product. It's much more important to focus on providing a good workflow that people will actually use.

Answer #436981, count #20, created: 2009-01-12 20:56:25.0

Set up your IDE to show a black background rather than white. This will give you extra battery time on a laptop. OS X users can actually invert the entire OS's colours (black <-> white) by pressing CTRL+ALT+SHIFT+COMMAND+SPACE+BACKSPACE+INSERT+8 (or something like that).

Answer #437125, count #21, created: 2009-01-12 21:43:19.0

Develop on VIM with a black background. Build your apps to have dark backgrounds.

Answer #439660, count #22, created: 2009-01-13 16:24:17.0

Support older operating systems and hardware, some hardware upgrades are driven by new software.

Answer #439770, count #23, created: 2009-01-13 16:50:31.0

Work from home. I did a quick estimate of kilos of CO2 put into the atmosphere by my commute, and it's frightening.

Answer #443832, count #24, created: 2009-01-14 17:04:37.0

In: A Shortcut Through Time - The Path to the Quantum Computer, by George Johnson


he states:

"In the early 1960s, a physicist named Rolf Landauer proved that every time a bit is erased from a register, a minimum amount of heat is dissipated: wasted energy. ...

The only way to eliminate the heat is to avoid erasing the information in the first place, and that means saving all the intermediary results. As circuitry continues to shrink and cooling technologies are exploited to their limits, engineers are beginning to confront the problem of how to eliminate this final source of waste. ...

A farsighted few began thinking about how to make reversible gates for a future breed of classical computers - gates that do not throw out information. Given the output, you know what the input must have been. A whole subfield called "reversible computing" has emerged in which circuits are designed, and sometimes constructed, that preserve every step of the computation. ...

The work is not all theoretical. A few energy-efficient experimental chips, made entirely from reversible components, have been assembled. And some circuits have been incorporated into laptop computers to save battery power."

Reversible computing includes both hardware and software technologies.

Search "reversible computing" on Google for more on this.

Answer #444465, count #25, created: 2009-01-14 20:05:40.0

There is an excellent PDC video here on the topic wrt Windows:

http://channel9.msdn.com/pdc2008/PC19/

"Inefficient background activity has a dramatic impact on system performance, power consumption, responsiveness, and memory footprint. This session demonstrates best practices for background process design and dives deep on the capabilities of the Service Control Manager (SCM) and Task Scheduler."

Answer #1157554, count #26, created: 2009-07-21 06:25:43.0

Don't open your computer ;)

If you need to open your computer, don't send spam mail :O

If you need to send an email, use plain text. Don't use HTML format and pictures.

Don't browse rich web sites :P (like this site.)

Joke, joke...

Answer #1516067, count #27, created: 2009-10-04 09:59:08.0

As for energy efficiency, from now on all my software (tools too) will put the monitor on standby if the software or the computer is not used for x minutes.

Of course there are many tools and system ways to do this, but not every user knows about them or knows how to do it. So from now on it's a standard feature.

Saving the planet one disconnected monitor at a time.

Answer #2032183, count #28, created: 2010-01-09 03:00:55.0

I think something that is going to help here is the advent of cloud computing.

Now that you are going to be billed for every CPU second, every HTTP request, every byte of storage, there is much more motivation to code economically.

Answer #4114845, count #29, created: 2010-11-06 19:42:51.0

Write all your code by hand. It will take longer, therefore you will think more about what you write. The end result is fewer bugs. Then get a fast typist to type it all up for you on an electronic typewriter (using very little electricity).

At bug review: put all your code on an e-reader (you'll be looking at the same page, so no page refreshes!) to look for bugs. Unit test by hand.

For compiling: use an interpreted language instead.

Title: Software performance (MCPS and Power consumed) in a Embedded system Id: 506452, Count: 5 Tags: Answers: 3 AcceptedAnswer: null Created: 2009-02-03 09:14:53.0 Body:

Assume an embedded environment which has a DSP core (or any other processor core).

If I have code for some application/functionality which is optimized to be one of the best from the point of view of cycles consumed (MCPS), will it also be the best from the point of view of power consumed by that code in a real hardware system?

Can a code optimized for least MCPS be guaranteed to have least power consumption as well?

I know there are many aspects to be considered here like the architecture of the underlying processor and the hardware system(memory, bus, etc..).

Popularity: 7.0 Answer #506489, count #1, created: 2009-02-03 09:32:14.0

Very difficult to tell without putting a sensitive ammeter between your board and power supply and logging the current drawn. My approach is to test assumptions for various real world scenarios rather than go with the supporting documentation.

Answer #506685, count #2, created: 2009-02-03 10:50:20.0

No, lowest cycle count will not guarantee lowest power consumption.

It's a good indication, but you didn't take into account that memory bus activity consumes quite a lot of power as well.

Your code may, for example, have a higher cycle count but lower power consumption if you move often-needed data into internal memory (on-chip RAM). That doesn't change the cycle count of your algorithms themselves, but copying the data in and out of the internal memory adds cycles, while cutting the power-hungry external bus traffic.

If your system has a cache as well as internal memory, optimize for best cache utilization as well.

Answer #507424, count #3, created: 2009-02-03 15:01:43.0

This isn't a direct answer, but I thought this paper (from this answer) was interesting: Real-Time Task Scheduling for Energy-Aware Embedded Systems.

As I understand it, it tries to run each task in the processor's low-power state, unless it can't meet the deadline without high power. So in a scheme like that, more time-efficient code (fewer cycles) should allow the processor to spend more time throttled back.

Title: How can I measure the energy consumption of my application on Windows Mobile and Windows CE? Id: 724349, Count: 6 Tags: Answers: 3 AcceptedAnswer: 725316 Created: 2009-04-07 06:30:31.0 Body:

I want to measure the energy consumption of

  • my own application (which I can modify)
  • 3rd party applications (which I can't modify)

on

  • Windows CE 5.0
  • Windows Mobile 5/6

Is there some kind of API for this?

If not, can I measure other values which I can use to estimate the energy consumption?

I don't need an exact value like 20 mAh (although that would be nice). A relative value would suffice, like: "From 100% to 0% charge status, around 20% of the fully charged battery was used by this application."

On the other hand it is very important that the measurement is specific to a single application, i.e. I don't want aggregated measurements for a group of applications, like, "those three applications together consume ..."

Popularity: 21.0 Answer #724405, count #1, created: 2009-04-07 06:58:25.0

If you really want to call the APIs yourself, I wouldn't know which one to call, but if you just want to know what the power consumption is while running certain applications, you could use the acbPowerMeter application.

acbPowerMeter charts the real-time power usage of your device. This utility is very light-weight, allowing benchmarking of your battery usage.

Answer #724414, count #2, created: 2009-04-07 07:02:48.0

That seems like a rather difficult thing to measure, because you can't really isolate a single process to run by itself. In fact, if you tried to do so you'd run into difficulty defining what constitutes a "single process" - is it just the userspace code that belongs to that program? Or do you include the kernel code executed on behalf of the program as well? What if the OS optimizes kernel code so that similar requests from different programs are handled together, using a nearly constant amount of energy? Then you couldn't even separate out the energy usage by program.

In a case like this, my inclination would be to measure the expectation value, essentially the average amount of energy used by the application. Ideally you'd start with a large number of systems, all identical except that half of them have the application running and half of them don't. Let each of the systems run under whatever operating conditions you want to test under (same conditions for all devices, of course, except for the fact that half of them are running the app and half are not), and either measure the rate of energy consumption using the standard API, or let the batteries run out and measure how long it takes each unit to drain its battery. Then compare the average result from the devices that were running the app vs. the average result from those that weren't, and you can figure out how much the program increases the power consumption of the computer.

Answer #725316, count #3, created: 2009-04-07 11:58:35.0

There's an API for getting information on power consumption, but the accuracy of the information returned by this API is OEM dependent (some OEMs don't make the information available at all). More information with example code on this API can be found at http://www.codeproject.com/kb/mobile/Wimopower1.aspx (screenshot of the example program below). As you can see from the screenshot, you can tell the battery's voltage, current pull, the type of battery, and more. An accurate measure of instantaneous power consumption requires external hardware (and I am assuming you don't want to make a hardware investment to make your measurements).

Take whatever measurements that you get to be relative as they may not be representative of what you would observe when running your program on a different device.

Screenshot of Example Program

Title: Why does the Main Run Loop put an Execution Thread to sleep if there is no event occuring? Id: 734104, Count: 7 Tags: Answers: 3 AcceptedAnswer: 734118 Created: 2009-04-09 12:59:04.0 Body:

I don't understand why threads have to "sleep" if there is no event in the application run loop. Does this save energy, or memory, or something else?

When an event comes in from an input source, it wakes up that thread again to handle the event. After that, the thread sleeps again if there is no further event in the queue waiting to be processed.

Does someone have a good explanation for this sleeping issue?

Popularity: 5.0 Answer #734111, count #1, created: 2009-04-09 13:01:26.0

A sleeping thread allows an OS scheduler (a subsystem which allocates CPU time to threads) to run other threads.

Answer #734118, count #2, created: 2009-04-09 13:02:35.0

It's not an issue. It's a good thing. What else would the main thread be doing? It shouldn't be processing long-running tasks - that would reduce the "snappiness" of the UI when a UI event comes in.

It shouldn't be tight-looping until an event comes in - that would take up processor time which can otherwise be sensibly used by other applications.

Sleeping (or rather waiting) is exactly what you want it to do - so it can wake up as soon as it has useful work to do, but otherwise doesn't impact the system.

Answer #734161, count #3, created: 2009-04-09 13:15:17.0

As others have said, putting the thread to sleep allows other threads to be executed.

I'll add that since you are probably referring to the iPhone (based on most of your other questions) this will also be useful even if no other threads need to run as the CPU power consumption will drop when it is idle.

Title: How to design a less power consuming mobile application? Id: 737419, Count: 8 Tags: <.net> Answers: 2 AcceptedAnswer: 737501 Created: 2009-04-10 12:02:00.0 Body:

I will soon be working on a project for a mobile application. This application will run on a PDT with Windows Mobile, and we will use Visual Studio and .NET to develop it.

This application will use Wifi intensively and needs to consume as little power as possible. I found a lot of material on the internet concerning embedded software and real-time systems, which deals with power management. But this material is very hardware related and does not talk about software design.

I also found some interesting best practices, but they mainly focus on the code of the application (for example, close handles as soon as possible, or use few I/Os).

I would like to know if you are aware of some leads concerning the architecture or the design of such application.

I also saw advice about the use of event-driven architecture: is it really useful for power saving? And is it usable with the Compact Framework?

Thanks for your help.

Edit: OK, so Dave gave us some clues that we could call architecture decisions. So I think I see clearly what could be done at two different levels:

  • at a high-level, such decisions as Dave's ;
  • at a low level of abstraction, close to the code, tricks and tips that minimize the battery consumption.

What about at a middle level of abstraction (during the design phase)? Is there some methodology for low-power software design (design patterns, or whatever else...)?

Links: http://msdn.microsoft.com/en-us/library/aa455167.aspx

http://www.eventhelix.com/RealTimeMantra/Basics/

Popularity: 7.0 Answer #737450, count #1, created: 2009-04-10 12:15:27.0

Use the wifi and other power-intensive functions as infrequently as possible. If it's practical, batch wifi transmissions once a certain number of requests are pending rather than sending them on demand.
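A hedged sketch of that batching idea in C; the queue size, threshold and the wifi_send() call are invented for illustration and would map onto whatever the real platform provides:

#include <stddef.h>
#include <string.h>

#define BATCH_THRESHOLD 16           /* flush after this many pending requests (tunable) */
#define MAX_REQ_SIZE    128

extern void wifi_send(const void *data, size_t len);   /* assumed platform call */

static char   pending[BATCH_THRESHOLD][MAX_REQ_SIZE];
static size_t pending_len[BATCH_THRESHOLD];
static size_t pending_count;

static void flush_batch(void)
{
    /* one burst: the radio is powered up once instead of once per request */
    for (size_t i = 0; i < pending_count; i++)
        wifi_send(pending[i], pending_len[i]);
    pending_count = 0;
}

void queue_request(const void *data, size_t len)
{
    if (len > MAX_REQ_SIZE)
        return;                      /* real code would split the payload or report an error */
    memcpy(pending[pending_count], data, len);
    pending_len[pending_count] = len;
    if (++pending_count == BATCH_THRESHOLD)
        flush_batch();
}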

Answer #737501, count #2, created: 2009-04-10 12:36:32.0

Perhaps you could link to the best practices you've found. What kind of leads besides them are you expecting? I suppose this was part of what you found, whereas this is more targeted towards laptop multicore processors.

Windows Mobile is a soft real time system at best, and very far from hard real time. I doubt you'll find much use in that kind of description and advice.

Otherwise, I'd say you have fairly standard stuff. Keep off the Wifi if you can, as well as other devices. Use caching if you have the memory available (but measure what's happening so the cache doesn't become a liability). Never, ever do an idle loop, but use Thread.Sleep() or, better, try to make everything event-driven, with short processing bursts. Threads can be your friends, used wisely.

And, of course, profile like crazy. The more efficient your code is in terms of CPU usage, the better.

But more specific advice would have to depend on the problem you're trying to solve. Why is your application Wifi-intensive? What information does it need to receive or send? Who are the users, and where do they move? Are there any heavy calculations involved? How much user interface do you need to present? Have you targeted specific hardware yet (specific CPU and WLAN interfaces will have different power consumption behaviours).

Title: Developing power consumption aware applications Id: 955209, Count: 9 Tags: Answers: 1 AcceptedAnswer: 955333 Created: 2009-06-05 10:27:13.0 Body:

Firstly, please don't move to serverfault. It is indeed a programming question :-)

We are developing occasionally connected applications. These applications reside on laptops and handhelds. In my case, the application runs on a small servlet container (like Jetty).

The requirement is that if the system is idle, the application should suspend itself. If the lid of the laptop is closed, then the application and the servlet container are in a suspend mode.

Are such things a feature of the OS itself, or can such power awareness be built into the application? If it can be built into the application, how?

Popularity: 6.0 Answer #955333, count #1, created: 2009-06-05 11:02:45.0

Every OS provides a set of APIs and notifications you can use and subscribe to appropriately. Windows, for example, sends a WM_POWERBROADCAST message to all windows before a power event happens. Read more on it in the Power Management section at MSDN.

However, you want the power-aware features in a Java application, which will require you to use some sort of JNI bridge. There's a CodeProject article on detecting standby and denying the request (although denying power transitions is no longer possible in Windows Vista/7).

Title: What line of code could I use in C++ to disable energy saver? Id: 1003394, Count: 10 Tags: Answers: 4 AcceptedAnswer: null Created: 2009-06-16 19:07:15.0 Body:

I want to prevent the monitor from going to sleep (the windows setting, not the monitor setting). I am using c++. What call do I make?

Popularity: 16.0 Answer #1003417, count #1, created: 2009-06-16 19:10:33.0
class KeepDisplayOn
{
public:
    KeepDisplayOn()
    {
        mPrevExecState = ::SetThreadExecutionState(ES_DISPLAY_REQUIRED | ES_SYSTEM_REQUIRED | ES_CONTINUOUS);
        ::SystemParametersInfo(SPI_GETSCREENSAVETIMEOUT, 0, &mPrevScreenSaver, 0);
        ::SystemParametersInfo(SPI_SETSCREENSAVETIMEOUT, FALSE, NULL, 0);
    }
    ~KeepDisplayOn()
    {
        ::SystemParametersInfo(SPI_SETSCREENSAVETIMEOUT, mPrevScreenSaver, NULL, 0);
        ::SetThreadExecutionState(mPrevExecState);
    }
private:
    UINT            mPrevScreenSaver;
    EXECUTION_STATE mPrevExecState;
};
Answer #1003426, count #2, created: 2009-06-16 19:12:20.0

SetThreadExecutionState(ES_DISPLAY_REQUIRED|ES_CONTINUOUS);

Answer #1004173, count #3, created: 2009-06-16 21:56:38.0

A simpler way that doesn't modify global system state like the first response does:

In your window procedure, add a handler for WM_SYSCOMMAND. When wParam is SC_MONITORPOWER, return 0 instead of deferring to DefWindowProc. (When wParam is any other value, make sure you either handle the message or pass it to DefWindowProc. Otherwise the user will have difficulty adjusting your window at runtime.)
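A rough sketch of what that handler might look like in a Win32 (C) window procedure; the rest of the application's message handling is elided:

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_SYSCOMMAND:
        /* the low four bits of wParam are used internally, so mask them off */
        if ((wParam & 0xFFF0) == SC_MONITORPOWER)
            return 0;                /* swallow the request to power down the display */
        break;                       /* other system commands fall through to DefWindowProc */
    /* ... handle the messages the application actually cares about ... */
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}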

Answer #8402710, count #4, created: 2011-12-06 15:48:38.0

Wiggle the mouse every minute or so.

mouse_event(MOUSEEVENTF_MOVE, 1, 0, 0, 0);
mouse_event(MOUSEEVENTF_MOVE, -1, 0, 0, 0);
Sleep(60000);
Title: Symbian Power Status Notification Id: 1168985, Count: 11 Tags: Answers: 1 AcceptedAnswer: null Created: 2009-07-23 00:29:34.0 Body:

I have an application that uses a memory card, and I need to do some save/restore state operations when the power of the card reader is turned off and back on again. This usually happens after some period of inactivity, when the phone goes into a power-save mode. In Windows Mobile I solved the same problem by receiving power notifications from the system and taking the appropriate action. I would like to know if there is an equivalent of these messages in Symbian.

To clarify, I am not interested in the current status of whether the AC cable is connected, or the battery level. I just want to receive notifications before the phone goes to sleep mode and after it wakes up.

Thanks.

Popularity: 4.0 Answer #1176297, count #1, created: 2009-07-24 07:55:11.0

One possibility is to use the User Inactivity timer, which is the stimulus for the device dropping to power-save mode: See Forum Nokia

Title: Does keeping a file handle open on an iPhone drain an apprecible amount of battery power? Id: 1215016, Count: 12 Tags: Answers: 1 AcceptedAnswer: null Created: 2009-07-31 21:37:00.0 Body:

I am taking a stream of readings from the GPS controller and storing them in an array, and occasionally writing the readings in the array to a file.

Should I keep the file handle to the file open, or open and close the file on each write? And how much does this matter in the scheme of power consumption?

Popularity: 3.0 Answer #1215106, count #1, created: 2009-07-31 22:04:10.0

Once a file pointer is open, there shouldn't really be any computational overhead or extra battery use in keeping it open. On the contrary, opening and closing the file handle will probably use more power, opening and closing locks, etc.

Title: What is relation between MCPS(million cycles per second) and power consumed Id: 1268112, Count: 13 Tags: Answers: 1 AcceptedAnswer: null Created: 2009-08-12 19:07:36.0 Body:

I have been working on an ARM Cortex-A8 board on an MP3 decoder.

While doing this I have a requirement saying the MP3 decoder solution I am working on should consume 50 milliwatts of power. This generated a few questions in my mind when I thought about it:

1.) I recall that there is some relation between the core voltage applied (V), the clock frequency (f) of a processor and the power consumed (P), something like P being directly proportional to the voltage and frequency squared. But what is the exact relation? Given the operating clock frequency and voltage of a processor, how can we calculate the power consumed by it?

2.) Now if I get the power consumed from step 1.) at some clock frequency, and I am told that the decoder solution I am delivering can consume only 50 milliwatts, how can I get the maximum limit on MCPS, which will be the upper bound on the MCPS of my decoder solution running on that hardware board?

Can I deduce that if the power obtained as in step 1.), say P, is consumed at frequency F, then for 50 milliwatts of power I can calculate the corresponding clock frequency, and then call this frequency my code's MHz (MCPS) upper bound?

Basically, how does one map (is there any equation?) the power consumed by a piece of software to the MCPS consumed?

I hope this is relevant here, or should it go to superuser?

Thank you. -AD.

Popularity: 10.0 Answer #1268155, count #1, created: 2009-08-12 19:16:21.0

It really depends on the architecture.

From their own page:

Core area, frequency range and power consumption are dependent on process, libraries and optimizations.

Power with cache (mW/MHz) <0.59 <0.45

Basically, it states that you can't accurately calculate the power consumption, so your best bet would be to do some measurements yourself. Try to run a full CPU-usage application and measure the power consumption. It will give you some idea of the max load, which will be a good start for you (to know how much you need to optimize your code and insert idle points).
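That said, if you want a back-of-envelope upper bound from a datasheet-style mW/MHz figure like the one quoted above, it is just a division; treat it as a sanity check only, not a substitute for measurement:

#include <stdio.h>

int main(void)
{
    /* 50 mW budget divided by an assumed 0.59 mW per MHz gives a rough ceiling
     * on sustained core MHz (roughly MCPS).  Memory, I/O and leakage all make
     * the real budget smaller, so measure on the actual board. */
    const double mw_per_mhz = 0.59;
    const double budget_mw  = 50.0;

    printf("upper bound: ~%.0f MHz of core activity\n", budget_mw / mw_per_mhz);
    return 0;
}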

Title: PHP vs. Java are there energy consumption differences? Id: 1318851, Count: 14 Tags: Answers: 12 AcceptedAnswer: null Created: 2009-08-23 15:49:40.0 Body:

I heard a rumor that Java consumes less energy than PHP and was wondering if and how this could be true. I'm currently working in a company where we base most of our apps on PHP. Power consumption has never been a problem for us, but we are working on bigger projects where it might matter. We love PHP for web development and are wondering how such a rumor can spread and whether it is true at all.

The example I heard was that Facebook is switching to Java for exactly that reason (I can't seem to find any of this on Google, though).

Since a customer of mine is asking me this question, I would love proof if it is true.

Popularity: 91.0 Answer #1318866, count #1, created: 2009-08-23 15:55:03.0

I was surprised by this question until I did a Google search, which turned this up. It's a serious issue, one that I wouldn't have thought of.

Now that I am thinking about it, I think it becomes an issue of who pays the electric bill. Sun's Java strategy was about selling servers: big iron for the back, thin clients for the front end.

Perhaps technologies like Flex move more of the work back to the client and leave them with a greater percentage of the electric bill.

But I'd be surprised to see a ranking of languages by energy use.

A very interesting question. I'm voting it up.

What a fascinating problem. Wouldn't it be interesting to write an application in a number of languages, deploy them on identical hardware, and measure the power consumption? I'd love to see it.

If you really get crazy, what about a performance monitoring tool that, in addition to showing you where memory and CPU were consumed in each part of your app, would also show you where the most energy was being used?

Now I wish I could vote this question up again.

Answer #1318879, count #2, created: 2009-08-23 16:02:33.0

Like many comparative questions here, you'll probably need to come up with a benchmark to really determine whether that's true.

lesswatts.org has a bit of information on applications power management, as well as several other aspects of power consumption on Linux systems. As a side note, they seem to be using PHP, so that might be worth something in itself :)

They keep repeating that you should use PowerTOP to determine which applications are causing the most power consumption, and you can see from the screenshot that they are checking wakeups from idle, at least.


Most of the time, a web server is sitting idle, then it "serves" for a very brief moment, then it goes back to waiting for the next connection to serve. In that regard, PHP would contribute very little to the entire process: only the serving portion. So I wonder if the actual benchmark was a comparison of a particular Java-based web server vs. Apache/PHP serving similar pages. If that was the case, it's not really a fair comparison of PHP and Java -- either of the two considered as serving an actual page is only active for milliseconds at a time, typically. I would think the actual web server, the one who's selecting or polling connections, is the one that would be hogging power.

Answer #1318882, count #3, created: 2009-08-23 16:03:39.0

By energy consumption do you mean Watts power consumption?

I'm not 100% sure, but even if this is true, I think this is similar to optimizing a part of your code, which is executed in 0.01% of your program's runtime.

The problems that will be caused by the switch (changing production/release platforms, time lost to the learning curve, new business software costs, etc.) will be pretty drastic. I can't see such an important decision being made except after a serious, company-specific business analysis and on the basis of its results.

However, this should make for an interesting discussion.

Answer #1318894, count #4, created: 2009-08-23 16:08:29.0

Computers don't particularly care if they're executing Java or PHP. Power consumption is pretty much the same. The question then becomes a question of performance - if you can serve more requests with one server you'll need less servers and consume less power. Or alternatively, if you're not doing web scale applications, serve your requests quicker and spend more time idling, which consumes less power.

Given pure Java and pure PHP, Java as a statically typed JIT'ed language is of course faster. The question is rather which one can you make faster given the team members and development effort available to you.

My take is that the best way is to mix languages: use existing Java-based infrastructure tools, such as Terracotta, to build the performance-critical parts, and something more nimble to build the complex but less heavyweight business and presentation logic.

Answer #1318900, count #5, created: 2009-08-23 16:09:16.0

I really, really doubt it is a language-only issue.

The platforms in question have so much variability that any generic comparison is moot. To name a few points of variability:

  • Servlet container for Java (Tomcat, Glassfish, Websphere, Jetty, ...)
  • Web server for PHP (Apache, IIS, lighttpd, nginx, ...)
  • Opcode caches for PHP
  • Libraries and frameworks used
  • Operating systems
  • Hard disks involved
  • Cooling
  • Algorithms in the application itself

I really doubt you can isolate so many variables into a useful metric. At most you can pick two equivalent applications (noting all the platform choices), run them on the same hardware, and compare them. Then improve the worse one until it tops the better one. The proper measurements would be both the energy used (watt-hours) and the requests per second, I think.

What's noted in Ants' answer (upvote him) is the crucial point though: the better performing platform will always be more power efficient, given enough demand, because it'll be able to serve the same amount of requests with less hardware.

But which platform performs better is not merely language-dependent; it depends on the things noted above (and some more).

Answer #1318943, count #6, created: 2009-08-23 16:27:26.0

I understand that many large companies have difficulty with growth in demand for resources in their data centres. These include both floor space and power consumption. So I can well believe that rationalisation of applications, favouring low resource consumption, is a strategy that is being adopted.

Whether the use of Java, PHP or any other implementation technology is likely to be the determinant of power consumption is less obvious to me.

If we implemented a particular piece of functionality, optimising for resource consumption, in, say, interpreted BASIC, Java and C, which would we expect to need the most execution resources?

I would need to see some hard evidence, but I could believe that a purely interpreted language might consume more than Java, and that Java, even with JIT etc., might in turn consume more than C.

Now just suppose (purely hypothetically, I'm not saying this is the case) that it was more effort to develop in C than in Java, and more effort to develop in Java than in BASIC. How would you trade off development and maintenance effort against that resource consumption?

Really tricky to do. If someone told me they were moving to Java solely on the basis of power consumption, I'd really want to know more. What are the costs of doing so? Where is the break-even point? [BTW - I work almost exclusively in Java; I'd love to be able to say "we win, see how low power we are!"]

Answer #1318954, count #7, created: 2009-08-23 16:33:18.0

This is the same question as which language has the best performance. For just about every project most programmers will ever come into contact with, how you write the system far outweighs the technologies used (given that the system is specced reasonably).

The choice of language with respect to performance is, IMHO, a matter for embedded systems and scientists. You choose a language according to the problem to be solved, and extremely seldom for how CPU-efficient it is.

Again, it is how you design and write the system that determines how efficient it will be.

Answer #1318982, count #8, created: 2009-08-23 16:44:27.0

This is a difficult question to answer unless, as was mentioned about benchmarks, you set some rules for comparison.

For example, comparing a Java3D first-person shooter to a PHP webpage would be unfair, as Java would lose.

If you look at Java frameworks, then you may want to compare these three:

  • Tomcat + JDK 6 + JSP
  • Apache + PHP
  • Scala + Lift framework

I included Scala as it compiles to java bytecode and I expect it will be cheapest for power.

I am not certain what the winner would be; I would bet on Scala, but you would want to ensure that you have the same application implemented and then just compare the power usage.

PHP will probably win, as Apache and PHP seem to be lower on memory usage than Java, but I can't really compare Scala and PHP.

The big unknown for me is that if you call the same Java code repeatedly, it has already been turned into native code and so will run more quickly, but JSPs should be precompiled to take advantage of Java.

But if you use Web 2.0 technologies, the picture changes, as you put most of the load on the browser: with a large JavaScript application you are just making server calls, which will reduce power usage on the server because the rendering work is passed to the browser. At that point the JIT for Java should come into play, and I would expect Java or Scala to be lower on power usage.

A big test, IMO, would be to see whether we can get the same performance as we reduce the number of machines: if you need 3 load-balanced computers for PHP or Java, and 1 Scala machine can deliver the same performance, then Scala (using Lift) will win.

Answer #1319178, count #9, created: 2009-08-23 18:12:52.0

Power efficiency is an entire field in itself. Measuring performance is usually the best way to go (given the same hardware). If different hardware is being compared, then that's a whole 'nother ball game.

So given the same hardware, if one software stack can perform better than another, then the better-performing stack will use less power "per request" than the other. If the performance difference is great enough for you to consolidate your servers into fewer machines, then it's an even bigger win!

There's a lot of other considerations:

Data Centers: Consider that servers are housed in data centers that are cooled. Servers generate heat and the heat needs to be removed to protect the hardware. A/C units do not have infinite granularity. Their efficiency usually comes with volume. So if I reduce my number of servers from 2 to 1, I may have saved the power consumption of one server, but probably not much in cooling costs. But if changing my architecture allows me to cut out 100 servers... that's big savings!

Hardware and Peripheral Devices: Using the most power efficient SW stack and running it on a Pentium 4 server = stupidity ;) The most energy efficient software cannot make up for inefficient hardware. One interesting lesson here is: "let the hardware guys worry about power". You worry about getting your application to market. When your application can generate revenue, you can always buy the latest 16-core Core 5 hexa-deca CPU and instantly get your energy efficiency ;)

Virtualization: If your application is low-volume, consolidating it into a virtual machine running on a multi-core system would probably save you more energy than rewriting it in your most energy-efficient SW stack and running it on a standalone server.

Programmer Time: You need programmers knowledgeable in the "most power efficient" software stack. You must consider whether those are the right tools. Programmers use computers to develop the software, and the more time they need to develop it (if they're constrained to the wrong tools), the more power is consumed. Not to mention you will have to pay them for more hours. This usually overrides any energy-consumption concerns because the costs here are orders of magnitude higher.

If your only concern is energy efficiency, yes, use all the tools in the bag to get you there. But most of the time, that is only one small variable in the overall scheme of things, and also the least of your cost.

Answer #1319437, count #10, created: 2009-08-23 20:14:40.0

More efficient code does consume less resources, including power. Java in general has a faster, more efficient implementation, so all other factors being constant, Java will probably consume less energy running on a server.

But all other factors are not constant. There's no telling what changes based on your decision to use PHP or Java. For example, Java takes longer to develop, which means that Java programmers have their computers on for longer, and their power usage from that may surpass your savings on the server.

If you are Google, Amazon, or some other company who serves literally billions of requests each day from thousands of servers, I would worry about this. Otherwise, your scale isn't large enough to make any positive assertions about energy consumption, so any decision you make is just as likely to be counterproductive as it is to be productive because it's impossible to include all the relevant factors.

A relevant example is a few months back when a rumor went around that you could save energy by setting your desktop background to black. The thinking was that black == no light, so you weren't using as much light. Google (one of the few companies with enough power usage to make this kind of research productive) ran some experiments and did some research, and discovered that LCD screens produce white light no matter what, and filter it by passing a second current through the pixels in a different way. So by setting a pixel to black, you are setting its filtration to the maximum, actually using the most energy possible. The most energy-saving desktop background (at least on an LCD screen) is a white background. However, since white backgrounds hurt people's eyes, few people set their desktop backgrounds to white and the rumor that black backgrounds save energy is still prevalent.

So, to make a long story short, worrying about this is likely to do more harm than good.

Answer #1323532, count #11, created: 2009-08-24 16:56:05.0

Obviously, modern computing techniques use a lot more resources than they used to, and this directly equates to power consumption. Once we would have used a plain old socket to transfer binary data over the network and a 100 MHz processor could handle it; now we have a software stack where the data is converted to XML text, then passed over a web service over HTTP over that same socket, and we wonder why we need a 3 GHz processor with gobs of RAM to get decent performance :)

So that's the issue, and the only thing you can do about it is to make your code more efficient. Typically this means using a lower-level language and relying less on general-purpose frameworks or libraries. Definitely don't layer software stacks if you can help it.

Modern programming doesn't like this - there's a push toward programmer productivity, employing cheaper programmers, and re-engineering code all the time (ie rewriting it) so code needs to be easy to create (and sometimes maintain). As a result, to get the best you'll need to trade off these factors. The industry standard way of doing this is simply to mix systems according to their use.

Currently the ultimate performance is C/C++ code, and the ultimate programmer productivity seems to be scripting languages. So, the ideal best of both is to write your main business logic in a script language like Python, and use it to call dedicated 'power helpers' written in C/C++. If you need more performance, you can write more of your code in the underlying C/C++. If you need more RAD fast development, write more in the script language.

My advice for the OP is not to rewrite in Java; it may perform better overall, but it will cost so much that it might not be worth it. Instead, take the intensive bits of your app, rewrite those as efficiently as you can, and call them from your existing PHP. Then take a look at your overall architecture and reduce reliance on inefficient protocols and data structures. For example, if you replace XML with JSON, you'll find you have the same functionality at a fraction of the data size and of the resources required to parse and reformat the data.

Answer #1325995, count #12, created: 2009-08-25 04:16:09.0

While many of the facets concerning this questions have been already explored in depth here, there is at least one more to look into.

A typical PHP web application duplicates a lot of effort from one request to the next, simply because the PHP execution environment (or context) is not persistent across requests.

Since a typical Java web application does have the ability to persist state directly, without extra steps or cache invalidation, it doesn't need to, for example, perform PHP-style duplicative SQL queries to fetch the same information for each request. Instead, a Java web application often stores the complex information for the current analysis in a native data structure on the heap, which is accessed in nanoseconds rather than milliseconds.

Since the processor has to do significantly less work to perform these basic data access functions, the work requires less power per unit of customer value.

Title: Estimate Power Consumption Based on Running Time Analysis / Code Size Id: 1596252, Count: 15 Tags: Answers: 2 AcceptedAnswer: 1598462 Created: 2009-10-20 17:49:56.0 Body:

I've developed and tested a C program on my PC and now I want to give an estimate of the power consumption required for the program to do a single run. I've analysed the running time of the application and of individual function calls within it, and I know the code size both in assembly lines and in raw C lines.

How would I give an estimate of the power consumption based on the performance analysis and/or code size? I suppose it scales with the number of lines that use the CPU for computation or do memory accesses, but I was hoping for a more precise answer.

Also, how would I tell the difference between the power consumption on say my PC compared to a on a microchip device?

Popularity: 10.0 Answer #1596313, count #1, created: 2009-10-20 18:01:02.0

Good luck. What you want to do is pretty much impossible on a desktop PC. Best you could probably do would be to measure the from-the-wall power draw at idle, and when running your program, with as few other programs as possible running at the same time. Average the results over 100 or so runs, and you should have a value with accuracy of a few percent (standard statistical disclaimers apply).

On a Microchip device, it should be easier to calculate the power consumption, since they publish (average) power consumption values for the various modes, and the timing is deterministic. Unfortunately, there are so many differences between a processor like that and your desktop processor (word size, pipelining, multiple-issue, multiple processes, etc, etc) that there really won't be any effective way to compare the two.

Answer #1598462, count #2, created: 2009-10-21 02:38:46.0

There is a paper on Intel's website that gives average energy per instruction for various processors. They give 11 nJ per instruction for Core Duo, for example. How useful that'll be for you depends on how much your code looks like the SpecInt benchmark, I guess.
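
To make that concrete, here is a minimal back-of-the-envelope sketch in C; the 11 nJ/instruction figure is the one quoted above, while the dynamic instruction count is a purely made-up placeholder you would replace with your own profile data.

    #include <stdio.h>

    int main(void)
    {
        /* Energy per instruction quoted above for a Core Duo. */
        const double joules_per_instruction = 11e-9;      /* 11 nJ */
        /* Hypothetical dynamic instruction count for one run of the program. */
        const double instructions_per_run = 2.0e9;

        double joules = joules_per_instruction * instructions_per_run;
        printf("Estimated CPU energy per run: %.2f J\n", joules);   /* ~22 J */
        return 0;
    }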

Title: RAM memory reallocation - Windows and Linux Id: 1738515, Count: 16 Tags: <.net> Answers: 8 AcceptedAnswer: 1765175 Created: 2009-11-15 19:28:11.0 Body:

I am working on a project involving optimizing energy consumption within a system. Part of that project consists in allocating RAM based on locality, that is, allocating memory segments for a program as close as possible to each other. Is there a way I can know exactly where the memory I allocate is located (which memory chips), and is it possible to force allocation in a deterministic manner? I am interested in both Windows and Linux. Also, the project will be implemented in Java and .NET, so I am interested in managed APIs to achieve this.

[I am aware that this might not translate into direct energy consumption reduction but the project is supposed to be a proof of concept.]

Popularity: 22.0 Answer #1738580, count #1, created: 2009-11-15 19:45:33.0

In C/C++, if you coerce a pointer to an int, this tells you the address. However, under Windows and Linux, this is a virtual address -- the operating system determines the mapping to physical memory, and the memory management unit in the processor carries it out.

So, if you care where your data is in physical memory, you'll have to ask the OS. If you just care if your data is in the same MMU block, then check the OS documentation to see what size blocks it's using (4KB is usual for x86, but I hear kids these days are playing around with 16M giant blocks?).

Java and .NET add a third layer to the mix, though I'm afraid I can't help you there.

Answer #1738589, count #2, created: 2009-11-15 19:47:30.0

I think that if you want such tight control over memory allocation you are better off using a compiled language such as C; the JVM isolates the actual implementation of the language from the hardware, chip selection for data storage included.

Answer #1764984, count #3, created: 2009-11-19 17:14:20.0

In .NET there is a COM interface exposed for profiling .NET applications that can give you detailed address information. I think you will need to combine this with some calls to the OS to translate virtual addresses though.

As zztop alluded to, the .NET CLR compacts memory every time a garbage collection is done, although large objects are not compacted. These are objects on the large object heap. The large object heap can consist of many segments scattered around from OS calls to VirtualAlloc.

Here are a couple links on the profiling APIs:

http://msdn.microsoft.com/en-us/magazine/cc300553.aspx

http://blogs.msdn.com/davbr/default.aspx

Answer #1765018, count #4, created: 2009-11-19 17:18:14.0

Is pre-allocating in bigger chunks (than needed) an option at all? Will it defeat the original purpose?

Answer #1765175, count #5, created: 2009-11-19 17:43:08.0

You're working at the wrong level of abstraction.

Java (and presumably .NET) refers to objects using handles, rather than raw pointers. The underlying Java VM can move objects around in virtual memory at any time; the Java application doesn't see any difference.

Win32 and Linux applications (such as the Java VM) refer to memory using virtual addresses. There is a mapping from virtual address to a physical address on a RAM chip. The kernel can change this mapping at any time (e.g. if the data gets paged to disk then read back into a different memory location) and applications don't see any difference.

So if you're using Java and .NET, I wouldn't change your Java/.NET application to achieve this. Instead, I would change the underlying Linux kernel, or possibly the Java VM.

For a prototype, one approach might be to boot Linux with the mem= parameter to restrict the kernel's memory usage to less than the amount of memory you have, then look at whether you can mmap the spare memory (maybe by mapping /dev/mem as root?). You could then change all calls to malloc() in the Java VM to use your own special memory allocator, which allocates from that free space.
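
A minimal sketch of that prototype idea follows. It assumes the kernel was booted with something like mem=2G, that the spare RAM starts at a made-up physical address, and that /dev/mem access is permitted (it needs root and can be blocked by CONFIG_STRICT_DEVMEM); a custom allocator would then hand out chunks of the mapped region instead of calling the system malloc().

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical: physical memory above the mem= limit, untouched by the kernel. */
        const off_t  spare_phys_base = 0x80000000UL;       /* 2 GiB, illustrative only */
        const size_t spare_size      = 64UL * 1024 * 1024; /* 64 MiB of spare RAM */

        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        /* Map the spare region; a malloc() replacement could carve its
           allocations out of this one known physical region. */
        void *base = mmap(NULL, spare_size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, spare_phys_base);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        printf("spare RAM mapped at %p\n", base);
        munmap(base, spare_size);
        close(fd);
        return 0;
    }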

For a real implementation of this, you should do it by changing the kernel and keeping userspace compatibility. Look at the work that's been done on memory hotplug in Linux, e.g. http://lhms.sourceforge.net/

Answer #1776759, count #6, created: 2009-11-21 20:56:38.0

The approach requires specialized hardware. In ordinary memory sticks and slots, arrangements are designed to dissipate heat as evenly per chip as possible; for example, 1 bit of every bus word per physical chip.

Answer #1776789, count #7, created: 2009-11-21 21:08:42.0

This is an interesting topic, although I think it is waaaaaaay beyond the capabilities of managed languages such as Java or .NET. One of the major principles of those languages is that you don't have to manage the memory, and consequently they abstract that away for you. C/C++ gives you better control in terms of actually allocating that memory, but even in that case, as referenced previously, the operating system can do some hand-waving and indirection with memory allocation, making it difficult to determine how things are allocated together. Even then, you make reference to the actual chips; that's even harder, and I would imagine it would be hardware-dependent. I would seriously consider utilizing a prototyping board where you can code at the assembly level and actually control every memory unit allocation explicitly, without any interference from compiler optimizations or operating system security practices. That would give you the most meaningful results, as it would give you the ability to control every aspect of the program and determine definitively that any power consumption improvements are due to your algorithm rather than some invisible optimization performed by the compiler or operating system. I imagine this is some sort of research project (very intriguing), so spending ~$100 on a prototyping board would definitely be worth it in my opinion.

Answer #1783585, count #8, created: 2009-11-23 14:48:51.0

If you want to try this in a language with a big runtime you'd have to tweak the implementation of that runtime or write a DLL/shared object to do all the memory management for your sample application. At which point the overall system behaviour is unlikely to be much like the usual operation of those runtimes.

The simplest, cleanest test environment to detect the (probably small) advantages of locality of reference would be in C++ using custom allocators. This environment will remove several potential causes of noise in the runtime data (mainly the garbage collection). You will also lose any power overhead associated with starting the CLR/JVM or maintaining its operating state - which would presumably also be welcome in a project to minimise power consumption. You will naturally want to give the test app a processor core to itself to eliminate thread switching noise.

Writing a custom allocator to give you one of the preallocated chunks on your current page shouldn't be too tough, but given that to accomplish locality of reference in C/C++ you would ordinarily just use the stack it seems unlikely there will be one you can just find, download and use.
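
For what it's worth, here is a minimal sketch of such an allocator in plain C (the same idea can be wrapped in a C++ custom allocator); the arena size and alignment are arbitrary assumptions, and everything handed out stays within one contiguous block, which is the locality property the experiment needs.

    #include <stdio.h>

    /* One contiguous arena: every allocation lands close to the others. */
    static unsigned char arena[1 << 20];    /* 1 MiB, illustrative size */
    static size_t arena_used = 0;

    static void *arena_alloc(size_t n)
    {
        n = (n + 15u) & ~(size_t)15u;       /* keep 16-byte alignment */
        if (arena_used + n > sizeof(arena))
            return NULL;                    /* arena exhausted */
        void *p = arena + arena_used;
        arena_used += n;
        return p;
    }

    int main(void)
    {
        double *a = arena_alloc(1000 * sizeof(double));
        double *b = arena_alloc(1000 * sizeof(double));
        printf("a = %p, b = %p (adjacent in one arena)\n", (void *)a, (void *)b);
        return 0;
    }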

Title: Hard disk drive and RAM memory - Dynamic Power Management Id: 1761611, Count: 17 Tags: Answers: 1 AcceptedAnswer: null Created: 2009-11-19 07:54:15.0 Body:

From what I have seen there is a pretty good support for dynamic power management in the both Windows and Linux when it comes to the CPU (scaling frequency so as to reduce energy consumption). Is there similar support for managing the Hard Disk Drive and the RAM (spinning down the HDD, lowering RAM frequency or anything that might result in power consumption reduction)?

Popularity: 3.0 Answer #1761678, count #1, created: 2009-11-19 08:08:06.0

For the HDD, use hdparm with -S to define after how much time it should spin down. To make this work, you must disable all processes which access the disk regularly, like cron and flushd. The latter is a bit dangerous because it flushes memory caches to disk. You can simulate it by calling sync manually, but if your computer crashes unexpectedly, you can lose a lot of data.

So in the end, sending the disk to sleep doesn't really help unless you're not using your computer for long periods of time. But there are other methods to make it use less power:

  • Let it run. Spinning the disk up needs a lot of power.
  • Mount with noatime reduces the write accesses a lot.
  • Replace the disk with an SSD. Even a small SSD for the OS plus the swap partition goes a long way.
  • Replace the disk with a smaller one (i.e. 3.5" -> 2.5" -> 1.8").

As for the RAM, I know of nothing you can manipulate. I guess you could switch off RAM areas which aren't in use, but current OSs use free RAM as a hard disk cache, so you won't find much "free" RAM that can be switched off. So here, your best option is to install less RAM.

Title: Server socket programming in Android 1.5, most power efficient way? Id: 1831952, Count: 18 Tags: Answers: 1 AcceptedAnswer: 1839375 Created: 2009-12-02 10:17:13.0 Body:

I am doing a project where I have to develop an application that listens for incoming events from a service. The device that has to listen to the events is an Android phone with Android SDK 1.5 on it. Currently the services that raise events only implement communication through UDP or TCP sockets. I can solve my problem by setting up a ServerSocket, but I doubt that's the most power-efficient way. This application will be running most of the time, with Wi-Fi on, and I'd like to achieve a long battery life. I've been looking for answers to my question on the internet for a while, but I couldn't find a real one. I've got the following questions:

  • What is the most efficient way to listen for incoming events? Should I make a ServerSocket, or what are my options?
  • Are there any other implementations that are more power efficient?

I've also been thinking of implementing communication through XMPP. Not sure if this is the best way. I'm not tied to a specific implementation. All suggestions are welcome!

Thanks for the help,

Antek

Popularity: 29.0 Answer #1839375, count #1, created: 2009-12-03 11:34:52.0

You already listed the possible choices. If the app has to be able to handle events, it also needs to be running all the time. AFAIK there is no push notification service that automatically calls your application, like on the iPhone.

I think using a protocol like XMPP is the easiest solution. Having your own ServerSocket would also mean the server has to send requests to different IPs whenever you switch networks.

Title: How to measure the power consumed by a C algorithm while running on a Pentium 4 processor? Id: 1998778, Count: 19 Tags: Answers: 8 AcceptedAnswer: null Created: 2010-01-04 10:46:04.0 Body:

How can I measure the power consumed by a C algorithm while running on a Pentium 4 processor (and any other processor will also do)?

Popularity: 34.0 Answer #1998794, count #1, created: 2010-01-04 10:49:34.0

Run your algorithm in a long loop with a Kill-a-Watt attached to the machine?

Answer #1998795, count #2, created: 2010-01-04 10:50:09.0

Excellent question; I upvoted it. I haven't got a clue, but here's a methodology:

-- get CPU spec sheet from Intel (or AMD or whoever) or see Wikipedia; that should tell you power consumption at max FLOP rate;

-- translate algorithm into FLOPs;

-- do some simple arithmetic;

-- post your data and calculations to SO and invite comments and further data

Of course, you'll have to frame your next post as another question, I'll watch with interest.

Answer #1998820, count #3, created: 2010-01-04 10:54:12.0

Unless you run the code on a simple single-tasking OS such as DOS, or an RTOS where you get precise control of what runs at any time, the OS will typically be running many other processes simultaneously. It may be difficult to distinguish between your process and any others.

Answer #1998880, count #4, created: 2010-01-04 11:06:44.0

First, you need to be running the simplest OS that supports your code (probably a server version of Unix of some sort; I expect this to be impractical on Windows). That's to avoid the OS messing up your measurements.

Then you need to instrument the box with a sensitive datalogger between the power supply and motherboard. This is going to need some careful hardware engineering so as not to mess up the PCs voltage regulation, but someone must have done it.

I have actually done this with an embedded MIPS box and a logging multimeter, but that had a single 12V power supply. Actually, come to think of it, if you used a power supply built for running a PC in a vehicle, you would have a 12V supply and all you'd need then is a lab PSU with enough amps to run the thing.

Answer #1999186, count #5, created: 2010-01-04 12:14:14.0

It's hard to say.

I would suggest using a current clamp, so you can measure all the power being consumed by your CPU. Then you should measure the idle consumption of your system and get a baseline value with as low a standard deviation as possible.

Then run the critical code in a loop.

Previous suggestions about running your code under DOS/RTOS are also valid, but maybe it will not compile the same way as your production...

Answer #1999742, count #6, created: 2010-01-04 14:13:12.0

Sorry, I find this question senseless.

Why? Because an algorithm itself has (with the following exceptions*) no correlation with the power consumption; what matters is the priority at which the program/thread/process runs. If you change the priority, you change the amount of idle time the processor has and therefore the power consumption. I think the only difference in energy consumption between instructions is the number of cycles needed, so fast code will be power-friendly. Measuring the power consumption of an "algorithm" is senseless if you don't mean its performance.

*Exceptions: Threads which can be idle while waiting for other threads, programs which use the HLT instruction.

Sure, running the processor as fast as possible increases the energy use superlinearly (more heat, more cooling needed), but that is a hardware problem. If you want to save energy, you can downclock the processor or use energy-efficient ones (Atom processors), but changing/tweaking the code won't change anything.

So I think it makes much more sense to ask the processor manufacturer for specifications of what processor modes exist and what energy consumption they have. You also need to know that the peripherals (fan, power supply, graphics card (!)) and the other software running on the system will influence the results when measuring computer power.

Why do you need this task anyway ?

Answer #2000405, count #7, created: 2010-01-04 16:03:07.0

Hey guys, thanks for replying. The code is very simple C (not even 150 lines), so I guess we can easily run it under DOS. I need to do this for my college project and they are insisting on not using any external device or circuitry. Isn't there any way to do it logically, by just using some operating system commands or something else of that sort? I looped my code 10000 times; the number of clock ticks required was 845 and the time required was 46.428570 s. Can't we perform some manipulation on these values and get some fair idea of the power consumed? If you want, I can upload the whole code.

Answer #2000915, count #8, created: 2010-01-04 17:30:09.0

Since you know the execution time, you can calculate the energy used by the CPU by looking up the power consumption in the P4 datasheet. For example, a 2.2 GHz P4 with a 400 MHz FSB has a typical Vcc of 1.3725 volts and Icc of 47.9 amps, which is (1.3725 * 47.9 =) 65.74 watts. Since you know your loop of 10,000 algorithm cycles took 46.428570 s, a single iteration takes 46.428570 / 10000 = 0.00464286 s. The amount of energy consumed by your algorithm would then be 65.74 watts * 0.00464286 s = 0.305 watt-seconds (or joules).

To convert to kilowatt-hours: 0.305 watt-seconds / (1000 watts/kilowatt * 3600 seconds/hour) = roughly 8.5 * 10^-8 kWh. A utility company charges around $0.11 per kWh, so a single run costs on the order of a hundred-millionth of a dollar; you could run the algorithm about a hundred million times for a dollar.
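
The same arithmetic as a tiny C sketch, using only the figures already quoted in this thread (the datasheet Vcc and Icc, and the measured time for 10,000 iterations):

    #include <stdio.h>

    int main(void)
    {
        const double vcc_volts  = 1.3725;     /* P4 datasheet figures quoted above */
        const double icc_amps   = 47.9;
        const double loop_secs  = 46.428570;  /* measured time for 10,000 iterations */
        const double iterations = 10000.0;

        double watts        = vcc_volts * icc_amps;    /* ~65.74 W           */
        double secs_per_run = loop_secs / iterations;  /* ~0.00464 s per run */
        double joules       = watts * secs_per_run;    /* ~0.305 J per run   */
        double kwh          = joules / 3.6e6;          /* 1 kWh = 3.6 MJ     */

        printf("power : %.2f W\n", watts);
        printf("energy: %.3f J per run (%.2e kWh)\n", joules, kwh);
        printf("cost  : $%.2e per run at $0.11/kWh\n", kwh * 0.11);
        return 0;
    }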

Keep in mind this is the CPU only...none of the rest of the computer.

Title: How do I maximize power consumption on a linux laptop from software? Id: 2063062, Count: 20 Tags: Answers: 11 AcceptedAnswer: 2063156 Created: 2010-01-14 08:56:37.0 Body:

I would like to maximize power consumption on a linux laptop (ubuntu with openbox) to discharge the battery as fast as possible. I would like it to be done purely from software without user intervention. Which strategies would be useful to do this? The ones I can think of are:

  1. Switch off power-save functionality (tips on how to do this are welcome).
  2. Switch off screensaver/blanking (no luck with -setterm blank 0, screen still blanks).
  3. Making the CPU and other hardware consume as much power as possible (ideas are welcome).

This is what I can think of. Are there any more obvious ways to to this?

Edit: To clarify a bit. I have to do this from within a program that runs under openbox, so it is a programming question, especially point 3.

Popularity: 24.0 Answer #2063078, count #1, created: 2010-01-14 09:00:09.0

Just leave the computer compiling GNOME, KDE, OpenOffice.org and the kernel in a queue. That should keep the computer working at 100% for a few hours.

Maybe try installing Gentoo with all the packages you could think of, as that distro installs everything from source (compiles everything).

Answer #2063103, count #2, created: 2010-01-14 09:06:34.0

A busy loop is the best way. With so many multi-processor/multi-core CPUs out there today, you might want to figure out how many busy loops to start so that each core is running at 100%: look in /proc/cpuinfo and count the number of 'processor' lines.

A busy loop is a loop that runs forever doing nothing. For instance, in C you could do this:

    #define EVER ;;

    int main()
    {
        for (EVER) {}
    }
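
A slightly fuller sketch of the same idea, which forks one spinner per online core; sysconf(_SC_NPROCESSORS_ONLN) is assumed to be available (it is on Linux with glibc), and you stop the whole thing by killing the process group.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);   /* number of online CPUs */
        if (cores < 1) cores = 1;

        for (long i = 0; i < cores; i++) {
            if (fork() == 0) {        /* each child spins one core at 100% */
                for (;;) {}
            }
        }

        printf("spinning %ld cores; kill the process group to stop\n", cores);
        for (;;) pause();             /* parent just sleeps until signalled */
    }
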
Answer #2063114, count #3, created: 2010-01-14 09:09:54.0

Lots of disk activity might help, too...

Answer #2063115, count #4, created: 2010-01-14 09:10:26.0

I would say, plug in some USB power-sucking devices, e.g. a USB lamp...

Answer #2063127, count #5, created: 2010-01-14 09:15:10.0

Start the wireless connection... with downloads/uploads

Answer #2063146, count #6, created: 2010-01-14 09:19:36.0

To switch off power-save functionality, I suggest taking some of the modifications PowerTOP recommends and reversing them.

You could also do the reverse of the reduced-power-usage tips on this page:

https://wiki.ubuntu.com/ReducedPowerUsage

To make the CPU / hard drive / graphics card work at 100%, I suggest running something like a benchmark test in a loop. Don't forget the Ethernet and Wi-Fi.

Best Regards

Answer #2063156, count #7, created: 2010-01-14 09:20:10.0

I'm assuming you're talking mostly about CPU power consumption. It turns out there's actually quite a bit of work involved in doing this from software, because modern processors use a lot of aggressive clock gating, voltage scaling, and frequency scaling to turn off or slow down parts of the chip that aren't being used. Maximizing the power consumption for a particular processor is going to depend on how well you can keep all of these parts active. To do this you need to make sure you're issuing as many instructions as you can so that the chip uses all the functional units available to you.

I don't believe that Intel or AMD make the clock gating granularity of their chips public, so you'll have to guess. The best way you can do this is to grab a microarchitecture manual and see what the pipelines on your particular laptop's processor look like. For example, you want to know how many floating point instructions you can issue per cycle, how many FP units you have, etc. Likewise, figure out how many integer units you have, how many SSE instructions you can execute, etc. Then write a really tight loop that issues instructions for all these things. You could do this in raw assembly, or you could probably make do with C and SSE intrinsics. If you hit all the functional units on the die with those instructions, you should manage to maximize the CPU's power consumption.

For multicore and multi-socket machines, you will also want to make sure you're using all the processors you have, so run an instance of your tight loop on each core.
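
As a minimal illustration of that approach (not tuned for any particular microarchitecture, and with arbitrary constants), the loop below keeps independent SSE floating-point chains and an integer counter in flight at the same time; run one copy per core as suggested above.

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics */

    int main(int argc, char **argv)
    {
        (void)argv;
        /* Seed from argc so the compiler cannot fold the loop away; the numeric
           results are irrelevant, only the stream of instructions matters. */
        __m128 a = _mm_set1_ps(1.0f + (float)argc * 1e-7f);
        __m128 b = _mm_set1_ps(1.000001f);
        __m128 c = _mm_set1_ps(0.999999f);
        unsigned long n = 0;

        for (;;) {
            a = _mm_mul_ps(a, b);     /* independent FP chains keep the      */
            c = _mm_add_ps(c, b);     /* SSE units busy...                   */
            n++;                      /* ...while the integer side ticks too */
            if ((n & 0xFFFFFFFUL) == 0) {
                float out[4];
                _mm_storeu_ps(out, _mm_add_ps(a, c));   /* keep results live */
                printf("still spinning (%f)\n", out[0]);
            }
        }
    }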

Finally, keep in mind that other things on your laptop use power. DMA accesses can cost you a pretty hefty amount of power, so if you can design a benchmark that exercises your CPU and issues lots of DMAs, you might be able to use more power. Likewise, if you've got a wireless card, doing any kind of hefty network transfers over that will run your battery down pretty fast.

There's a whole lot of work going on now in the field of power-aware computing, and people in that domain usually try to conserve power. But, if you're interested in more details on power consumption, you can probably find what you're looking through papers at some of the power-aware computing conferences.

Hopefully that helps. Good luck. It would be interesting to see what you come up with if you attempt to write something like this.

Answer #2063360, count #8, created: 2010-01-14 10:12:49.0

Install something like Prime95 or another distributed-computing app of your choice; they're quite intense on CPU time and execute proper instructions, not just constant idle loops.

Answer #2063416, count #9, created: 2010-01-14 10:26:08.0

Don't forget about the GPU. You can keep it occupied with relatively few instructions from the CPU, but when stressed, it eats a lot of power. Here, tgamblin's answer about the CPU is probably applicable as well: you'll need to figure out what combinations of instructions to send to keep all processors on the GPU busy at all times.

Answer #2063431, count #10, created: 2010-01-14 10:29:37.0

Basically you will have to build a benchmarking suite to do this thing.

  • Benchmark the CPU to get around 40-80 W of power consumed (be careful about multiprocessors).
  • Benchmark the graphics backend (try OpenGL + some sort of shaders).
  • Use the wireless card, if available, and do some big transfers.
  • Benchmark the HDD; I guess random seeks should be more power-hungry than linear reads, so create lots of files, write to them, read them randomly, delete them, all this with direct I/O set...

You may do this in a modular way and test which of the tests drains the battery more quickly. In Ubuntu you can see the power drained from the battery almost instantly, so it should be easy and really interesting to see.

Please post the results somewhere on the web when you're done :D

PS: some of the powersaving stuff is implemented by Gnome, not by X (if you use Ubuntu/Gnome) so take that into account. Try to fiddle with gconf

Answer #2094220, count #11, created: 2010-01-19 14:44:16.0

In my humble opinion, you should write a program that uses both your CPU at 100% and your GPU (Graphics Processing Unit) at 100%. The GPU can actually drain the battery even faster than the CPU (trust a gamer :) ).

Title: How to reliably acquire location on the iPhone Id: 2178684, Count: 21 Tags: Answers: 1 AcceptedAnswer: null Created: 2010-02-01 17:28:33.0 Body:

I'm working on an application that uses the iPhone GPS to acquire a location track. To save power, I want to acquire location data while the screen is off. I've learned the trick of playing a silent audio file to keep the location acquisition going while in sleep mode. I still have an occasional problem with location acquisition stopping in sleep mode. This happens maybe 5 - 10% of the time. I suspect that I may not have a good enough location fix to start with, although the appearance of the current location annotation on the map view seems to imply a good and accurate fix. Does anyone know a good way to determine definitively that the location manager has a good fix? Does anyone know any tricks for "kicking" the location manager while in sleep mode to keep it going? I tried running a 30 second repeating timer and reading the current location each time it fired. Sadly, this didn't help. Any suggestions are appreciated.

-rich

Popularity: 6.0 Answer #2178809, count #1, created: 2010-02-01 17:48:13.0

You can check the returned data from the location manager for the horizontalAccuracy property, to see if the location is accurate enough or not.

Another way to turn off the display and save battery power without tricks is to enable the proximity sensor, and disable sleep. That way the display will blank when the phone is in your pocket, and light up when you pick it up to look at it.

Title: Tracking power use on Android Id: 2236041, Count: 22 Tags: Answers: 2 AcceptedAnswer: null Created: 2010-02-10 10:31:23.0 Body:

I have an application that keeps a long-standing network connection to a server.

I periodically ping the server to know if it's still alive.

I imagine that it affects battery life, but other than trying to wall-clock time the time between charges, I don't have a good way of quantifying this.

Is there a mechanism for being told when the CPU 'wakes up', or when it wants to go to sleep?

Is there a standard way of doing long standing connections that minimises power consumption?

Popularity: 29.0 Answer #2237085, count #1, created: 2010-02-10 13:32:03.0

I imagine that it affects battery life, but other than trying to wall-clock time the time between charges, I don't have a good way of quantifying this.

Settings > About Phone > Battery use

Answer #2655429, count #2, created: 2010-04-16 18:54:49.0

You may find this app useful: http://powertutor.org/

Title: Algorithm to calculate the most energy efficient ad-hoc network Id: 2378519, Count: 23 Tags: Answers: 8 AcceptedAnswer: 2379604 Created: 2010-03-04 10:38:53.0 Body:

I have a (theoretical) network with N nodes, each with their own fixed location. Each node sends one message per cycle, which needs to reach the root either directly or via other nodes.

The energy cost of sending a message from node A to node B is the distance between them, squared.

The challenge is how to link these nodes, in a tree format, to produce the most energy efficient network.

E.g. Here are 2 possible ways to link these nodes, the left one being more energy efficient.

Node Diagram

I'm working on a Genetic algorithm to solve the problem, but I was wondering if anyone had any other ideas, or is aware of any relevant open-source code.

Edit: Another aspect of the network, which I forgot to mention, is that each node is battery powered. So if we have too many messages being routed via the same node, then that node's battery will become depleted, causing the network to fail. The network's energy efficiency is measured by how many messages can be successfully transmitted from each node to the root before any node runs out of battery.

Edit #2: I'm sorry about the omission in the original text of the question. Because of it, some of your earlier answers aren't quite what I'm looking for, but I wasn't familiar with the MST algorithms, so thanks for telling me about them.

In the hope of making things clearer let me add this:

All nodes send one message of their own per cycle, including inner nodes. Inner nodes are also responsible for relaying any messages that they receive. This adds to the strain on their battery, as if they were sending an additional message of their own. The aim is to maximise the number of cycles achieved before any node's battery dies.

Popularity: 30.0 Answer #2378563, count #1, created: 2010-03-04 10:44:35.0

I would think that you can construct the complete graph and then use Dijkstra's shortest path algorithm on each node to find the least cost route for that node. The union of these paths should form the minimal cost tree.

Note that with Dijkstra's algorithm it is also possible to calculate the entire tree in one pass.

Answer #2378575, count #2, created: 2010-03-04 10:46:12.0

minimum spanning tree? http://en.wikipedia.org/wiki/Minimum_spanning_tree

Answer #2379008, count #3, created: 2010-03-04 11:54:21.0

You can try formulating the problem as a minimum-cost maximum-flow problem. Just some ideas:

Create an additional dummy node as the source, and connect edges of zero cost and capacity 1 from this node to every non-root node. Then set the root at the sink, and set all edge costs as you want (the square of the Euclidean distance, in this case).

If you want to also account for energy efficiency, you can try to add a weight for it into the edge costs going into each node. I'm not sure how else you can do it, since you're trying to optimize two objectives (cost of message sending and energy efficiency) at the same time.

Answer #2379503, count #4, created: 2010-03-04 13:19:26.0

This is not just a minimum spanning tree, because the weight of each edge is dependent on the weight of other edges. Also, you need to minimize not the sum of weights but the maximum weight on a single node, which is the weight of its output edge multiplied by the number of incoming edges plus one.

Each node will have to transmit a number of messages, but if you route messages from outer nodes through inner nodes, the inner nodes will transmit a higher number of messages. In order to equalize the battery drain over all nodes, the inner nodes will have to use much shorter connections than the outer nodes; I suspect that this dependency on the distance from the root is exponential.

In your examples, it is not so clear whether the left one is more efficient by the measure you gave (maximum number of messages), because while the node at (1,2) does have less energy consumption, the one at (0,1) doubles its output.

I believe that you have to start with some tree (e.g. the one formed by having each node transmit directly to the root node) and then do a number of optimization steps.

The optimization might be possible deterministically or through a statistical method like simulated annealing or a genetic algorithm.

A deterministic method would perhaps try to find an improvement for the current worst node, such that the new node weights are smaller than the current maximum node weight. It is difficult to do this in such a way that the result is the global optimum.

Simulated annealing would mean to change a number of nodes' targets at each step, but this might be hampered by the fact that the "goodness" of a configuration is determined by its worst node. You would need to make sure that the worst node is sufficiently often affected in the candidate children, which might be difficult when the temperature drops.

In a genetic algorithm, you would design the "genome" as a mapping of each node to its target node. A punctual mutation would consist of changing one node's target to a random node (but only the root and nodes that are closer than the root should be considered).

Answer #2379604, count #5, created: 2010-03-04 13:35:12.0

Without taking into account the battery minimization, what you're looking for is the Shortest Path Spanning Tree, which is similar to the Minimum Spanning Tree except with a different "cost" function. (You can just run Dijkstra's Algorithm to calculate the Shortest Path Spanning Tree, since the cost seems to always be positive.)
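
For reference, a minimal O(N^2) sketch of that approach on the complete graph, with the edge cost being the squared Euclidean distance; the coordinates are made up, node 0 is the root, and the resulting parent[] array describes the shortest-path spanning tree.

    #include <stdio.h>

    #define N 5   /* number of nodes; node 0 is the root */

    static const double px[N] = {0, 1, 2, 0, 1};   /* made-up coordinates */
    static const double py[N] = {0, 1, 0, 2, 3};

    static double cost(int a, int b)   /* energy = squared distance */
    {
        double dx = px[a] - px[b], dy = py[a] - py[b];
        return dx * dx + dy * dy;
    }

    int main(void)
    {
        double dist[N];
        int parent[N], done[N] = {0};

        for (int i = 0; i < N; i++) { dist[i] = 1e18; parent[i] = -1; }
        dist[0] = 0.0;

        for (int iter = 0; iter < N; iter++) {
            int u = -1;                       /* closest unfinished node */
            for (int i = 0; i < N; i++)
                if (!done[i] && (u < 0 || dist[i] < dist[u])) u = i;
            done[u] = 1;
            for (int v = 0; v < N; v++)       /* relax every edge out of u */
                if (!done[v] && dist[u] + cost(u, v) < dist[v]) {
                    dist[v] = dist[u] + cost(u, v);
                    parent[v] = u;            /* v routes its messages via u */
                }
        }

        for (int i = 1; i < N; i++)
            printf("node %d sends via node %d (path cost %.1f)\n", i, parent[i], dist[i]);
        return 0;
    }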

This does not take into account the battery minimization, though. In that case (and I'm not quite sure what it is that you're trying to minimize first), you might want to look into min-cost max-flow. However, this will optimize (maximize) the "flow" first, before minimizing the cost. This might or might not be ideal.

To create the vertex constraint (each node can only handle k messages), you just need to create a second graph G_1 where you add an additional vertex u_i for each v_i, and set the capacity of the edge from v_i to u_i to whatever your constraint is, in this case k+1, with cost 0. So basically, if there is an edge (a,b) in the original graph G, then in G_1 there will be an edge (u_a,v_b) for each of these edges. In effect, you're creating a second layer of vertices that constrains the flow to k. (Special-case the root, unless you also want a message constraint on the root.)

The standard max-flow solution on G_1 should suffice - a source that points to each vertex with a flow of 1, and a sink that is connected to the root. There is a solution for G_1 (varying on k) if the maxflow of G_1 is N, the number of vertices.

Answer #2379671, count #6, created: 2010-03-04 13:44:10.0

I'm wondering if you are using a dynamic wireless sensor network (composed of Telos sensors, for instance)? If this is the case, you're going to want to develop a distributed min-distance protocol rather than something more monolithic like Dijkstra.

I believe you can use some principles from an AODV (http://moment.cs.ucsb.edu/AODV/aodv.html) protocol, but keep in mind that you'll need to augment the metric somehow. Hop count has a lot to do with energy consumption, but at the same time, you need to keep in mind how much power is being used to transmit a message. Perhaps a starting metric might be the sum of the power usage at each node on a given path. When your code is setting your network up, then, you simply keep track of the path cost for a given "direction" of routing and let your distributed protocol do the rest at each node.

This gives you the flexibility to toss a bunch of sensors in the air and wherever they land the network will converge on the optimal energy configuration for message passing.

Answer #2383959, count #7, created: 2010-03-05 01:27:21.0

I worked on a similar problem, but with wireless sensors. We used PEGASIS (Power Efficient Gathering in Sensor Information System), which is an energy-efficient protocol.
http://www.mast.queensu.ca/~math484/projects/pastprojects/2005/presentation05_Yi_Wei.ppt
http://www.technion.ac.il/~es/Professional/Routing_Protocols_for_Sensor_Networks.ppt

Answer #2393154, count #8, created: 2010-03-06 16:19:29.0

Have you considered using a directed acyclic graph instead of a tree? In other words, each node has multiple "parents" that it can send messages to -- the acyclic requirement ensures that all messages eventually arrive. I ask because it sounds like you have a wireless network and because there's an efficient approach to computing the optimum solution.

The approach is linear programming. Let r be the index of the root node. For nodes i, j, let cij be the energy cost of sending a message from i to j. Let xij be a variable that will represent the number of messages sent by node i to node j in each time step. Let z be a variable that will represent the maximum rate of energy consumption at each node.

The linear program is

    minimize z
    subject to
        # the right hand side is the rate of energy consumption by i (for all i)
        z >= sum over all j of cij * xij
        # every node other than the root sends one more message than it receives (for all i != r)
        sum over all j of xij == 1 + sum over all j of xji
        # every link has nonnegative utilization (for all i, j)
        xij >= 0

You can write code that generates this LP in something very much like this format, whereupon it can be solved by an off-the-shelf LP solver (e.g., the free GLPK).
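
A minimal sketch of such a generator for a hypothetical three-node instance (root plus two nodes, made-up squared-distance costs), emitting CPLEX LP format that glpsol can read with its --lp option:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical instance: node 0 is the root; c[i][j] is the (made-up)
           squared distance, i.e. the energy cost of sending i -> j. */
        const double c[3][3] = { {0, 0, 0},
                                 {1, 0, 4},    /* costs out of node 1 */
                                 {9, 4, 0} };  /* costs out of node 2 */
        FILE *f = fopen("network.lp", "w");
        if (!f) { perror("network.lp"); return 1; }

        fprintf(f, "Minimize\n obj: z\nSubject To\n");
        /* z bounds each node's energy rate: z - sum_j cij * xij >= 0 */
        fprintf(f, " e1: z - %g x10 - %g x12 >= 0\n", c[1][0], c[1][2]);
        fprintf(f, " e2: z - %g x20 - %g x21 >= 0\n", c[2][0], c[2][1]);
        /* each non-root node sends one more message than it receives */
        fprintf(f, " f1: x10 + x12 - x21 = 1\n");
        fprintf(f, " f2: x20 + x21 - x12 = 1\n");
        fprintf(f, "End\n");   /* variables default to >= 0 in LP format */
        fclose(f);

        printf("wrote network.lp; solve with: glpsol --lp network.lp -o network.sol\n");
        return 0;
    }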

There are a couple of features of the LP worth mentioning. First, it may seem odd that we haven't "forced" messages to go to the root. It turns out that as long as the cij constants are positive, it just wastes energy to send messages in cycles, so there's no point. This also ensures that the directed graph we've constructed is acyclic.

Second, the xij variables are in general not integers. How do we send half a message? One possible solution is randomization: if M is the total rate of messages sent by node i, then node i sends each message to node j with probability xij/M. This ensures that the averages work out over time. Another alternative is to use some sort of weighted round-robin scheme.

Title: configuring gps module ssf1513 Id: 2476546, Count: 24 Tags: Answers: 2 AcceptedAnswer: null Created: 2010-03-19 10:43:29.0 Body:

Hi, I am developing code for a GPS tracker using the GPS module SSF1513. I don't know how to configure the GPS module for power-save mode; please guide me on how to enter that mode.

Popularity: 3.0 Answer #2476670, count #1, created: 2010-03-19 11:04:36.0

It may be that the power-save mode is automatic, e.g. if you stop polling the device then it may go into power-saving mode after a certain time-out, and then wake up again automatically when you start polling it again?

Answer #2484368, count #2, created: 2010-03-20 19:16:34.0

That board has a SiRF starIII GSC3e/LPx GPS chip.

You can communicate with it via SiRF's binary protocol or NMEA; here are links to the reference manuals for each:

SiRF NMEA Reference Manual

SiRF Binary Reference Manual

How exactly you want to save power is up to you; there are tons of ways to reduce power usage with GPS (duty-cycle control, long sleeps, etc.). This will be application-dependent.

Title: Has anyone worked with EnergyPlus simulation software? Id: 2539835, Count: 25 Tags: Answers: 2 AcceptedAnswer: 5471047 Created: 2010-03-29 17:12:01.0 Body:

http://apps1.eere.energy.gov/buildings/energyplus/

I am researching about this software at the moment and I am wondering :

  1. How many people actually know how to use this software? Please identify yourself if you do.
  2. How many companies are using this to run energy saving simulations at the moment? Please list any you know.
  3. Is it integrable with a GUI environment? Has anyone had experience implementing such integrations?

Any response welcomed. Thanks.

Popularity: 8.0 Answer #5471047, count #1, created: 2011-03-29 10:25:19.0

I am a collaborative developer for the EnergyPlus engine through a research project. It is one of the most robust building energy simulation engines built on 30+ years of building science research. Unfortunately it isn't widely adopted due to a lack of a good, free GUI - however, this is changing with two DOE-sponsored GUI Projects:
http://openstudio.nrel.gov/
http://openrevit.com/2011/03/the-holy-grail-energyplus-gui-with-bim-integration/

EnergyPlus is widely used in the research field for building energy and will most likely become a player in building design and operations. OpenStudio is an open-source project (thus the reason for its name) developed in Ruby.

Answer #9997005, count #2, created: 2012-04-03 15:56:10.0
  1. EnergyPlus has become mostly an engine of late, due to some extent to its lack of an interface in the out-of-the-box install. The Bentley AecoSIM, OpenStudio, DesignBuilder and forthcoming Simergy applications all use EnergyPlus as a calculation engine. Additionally, the DIVA and Gherilla projects offer integration with Rhino/Grasshopper. Web implementations are available at http://modelmaker.nrel.gov/
  2. Countless companies are using the application in one form or another. Again, most probably through a separate interface but IBPSA would be a good resource for users of the software and the companies they work for.
  3. Its sole purpose was to be integrated into a GUI or to be called as the means of primary calculation. Have a read through the development guide that comes with the installation. Additionally, a more open-source license structure has recently been implemented.
Title: "Work stealing" vs. "Work shrugging"? Id: 2552810, Count: 26 Tags: Answers: 6 AcceptedAnswer: null Created: 2010-03-31 12:25:52.0 Body:

Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging" as a dynamic load-balancing strategy?

By "work-shrugging" I mean pushing surplus work away from busy processors onto less loaded neighbours, rather than have idle processors pulling work from busy neighbours ("work-stealing").

I think the general scalability should be the same for both strategies. However I believe that it is much more efficient, in terms of latency & power consumption, to wake an idle processor when there is definitely work for it to do, rather than having all idle processors periodically polling all neighbours for possible work.

Anyway, a quick Google search didn't turn up anything under the heading of "Work Shrugging" or similar, so any pointers to prior art and the jargon for this strategy would be welcome.

Clarification

I actually envisage the work submitting processor (which may or may not be the target processor) being responsible for looking around the immediate locality of the preferred target processor (based on data/code locality) to decide if a near neighbour should be given the new work instead because they don't have as much work to do.

I don't think the decision logic would require much more than an atomic read of the immediate (typically 2 to 4) neighbours' estimated queue length here. I do not think this is any more coupling than is implied by the thieves polling & stealing from their neighbours. (I am assuming "lock-free, wait-free" queues in both strategies.)

Resolution

It seems that what I meant (but only partially described!) as the "Work Shrugging" strategy is in the domain of "normal" upfront scheduling strategies that happen to be smart about processor, cache & memory locality, and scalable.

I find plenty of references searching on these terms and several of them look pretty solid. I will post a reference when I identify one that best matches (or demolishes!) the logic I had in mind with my definition of "Work Shrugging".

Popularity: 49.0 Answer #2553049, count #1, created: 2010-03-31 13:00:23.0

Work stealing, as I understand it, is designed for highly-parallel systems, to avoid having a single location (single thread, or single memory region) responsible for sharing out the work. In order to avoid this bottleneck, I think it does introduce inefficiencies in simple cases.

If your application is not so parallel that a single point of work distribution causes scalability problems, then I would expect you could get better performance by managing it explicitly as you suggest.

No idea what you might google for though, I'm afraid.

Answer #2554154, count #2, created: 2010-03-31 15:18:05.0

I think the problem with this idea is that it makes the threads with actual work to do waste their time constantly looking for idle processors. Of course there are ways to make that faster, like have a queue of idle processors, but then that queue becomes a concurrency bottleneck. So it's just better to have the threads with nothing better to do sit around and look for jobs.

Answer #2555187, count #3, created: 2010-03-31 17:45:10.0

The basic advantage of 'work stealing' algorithms is that the overhead of moving work around drops to 0 when everyone is busy. So there's only overhead when some processor would otherwise have been idle, and that overhead cost is mostly paid by the idle processor with only a very small bus-synchronization related cost to the busy processor.
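
To make the asymmetry concrete, here is a minimal sketch of the idle-side loop in work stealing (the helper names are hypothetical, not a real library): a busy worker only ever touches its own queue, while the scan over neighbours is paid for with cycles that would otherwise be wasted.

 #include <stddef.h>

 typedef struct task { void (*run)(void *); void *arg; } task;

 /* Assumed helpers, not a real library:
    pop_local() takes from this worker's own deque,
    steal_from(victim) takes one task from another worker's deque,
    wait_for_work() blocks until someone enqueues new work. */
 extern task *pop_local(int self);
 extern task *steal_from(int victim);
 extern void  wait_for_work(int self);
 extern int   num_workers;

 void worker_loop(int self)
 {
     for (;;) {
         task *t = pop_local(self);           /* fast path: no balancing cost */
         for (int v = 0; t == NULL && v < num_workers; v++) {
             if (v != self)
                 t = steal_from(v);           /* only an otherwise-idle worker pays this */
         }
         if (t == NULL) {
             wait_for_work(self);             /* sleep instead of spinning */
             continue;
         }
         t->run(t->arg);
     }
 }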

Answer #2555276, count #4, created: 2010-03-31 18:01:24.0

Load balancing is not free; it has a cost of a context switch (to the kernel), finding the idle processors, and choosing work to reassign. Especially in a machine where tasks switch all the time, dozens of times per second, this cost adds up.

So what's the difference? Work-shrugging means you further burden over-provisioned resources (busy processors) with the overhead of load-balancing. Why interrupt a busy processor with administrivia when there's a processor next door with nothing to do? Work stealing, on the other hand, lets the idle processors run the load balancer while busy processors get on with their work. Work-stealing saves time.

Example

Consider: Processor A has two tasks assigned to it. They take time a1 and a2, respectively. Processor B, nearby (the distance of a cache bounce, perhaps), is idle. The processors are identical in all respects. We assume the code for each task and the kernel is in the i-cache of both processors (no added page fault on load balancing).

A context switch of any kind (including load-balancing) takes time c.

No Load Balancing

The time to complete the tasks will be a1 + a2 + c. Processor A will do all the work, and incur one context switch between the two tasks.

Work-Stealing

Assume B steals a2, incurring the context switch time itself. The work will be done in max(a1, a2 + c) time. Suppose processor A begins working on a1; while it does that, processor B will steal a2 and avoid any interruption in the processing of a1. All the overhead on B is free cycles.

If a2 + c ≤ a1 here, you have effectively hidden the cost of the context switch in this scenario; the total time is just a1.

Work-Shrugging

Assume B completes a2, as above, but A incurs the cost of moving it ("shrugging" the work). The work in this case will be done in max(a1, a2) + c time; the context switch is now always in addition to the total time, instead of being hidden. Processor B's idle cycles have been wasted, here; instead, a busy processor A has burned time shrugging work to B.

Answer #2556545, count #5, created: 2010-03-31 21:19:15.0

So, by contrast to "Work Stealing", what is really meant here by "Work Shrugging" is a normal upfront work scheduling strategy that is smart about processor, cache & memory locality, and scalable.

Searching on combinations of the terms/jargon above yields many substantial references to follow up. Some address the added complication of machine virtualisation, which wasn't in fact a concern of the questioner, but the general strategies are still relevant.

Answer #2559300, count #6, created: 2010-04-01 09:14:35.0

Some issues... if a busy thread is busy, wouldn't you want it spending its time processing real work instead of speculatively looking for idle threads to offload onto?

How does your thread decide when it has so much work that it should stop doing that work to look for a friend that will help?

How do you know that the other threads don't have just as much work and you won't be able to find a suitable thread to offload onto?

Work stealing seems more elegant, because it solves the same problem (contention) in a way that guarantees that the threads doing the load balancing are only doing the load balancing while they otherwise would have been idle.

It's my gut feeling that what you've described will not only be much less efficient in the long run, but will also require lots of per-system tweaking to get acceptable results.

Though in your edit you suggest that you want the submitting processor to handle this, not the worker threads as you suggested earlier and in some of the comments here. If the submitting processor is searching for the lowest queue length, you're potentially adding latency to the submit, which isn't really a desirable thing.

But more importantly it's a supplementary technique to work-stealing, not a mutually exclusive technique. You've potentially alleviated some of the contention that work-stealing was invented to control, but you still have a number of things to tweak before you'll get good results, these tweaks won't be the same for every system, and you still risk running into situations where work-stealing would help you.

I think your edited suggestion, with the submission thread doing "smart" work distribution, is potentially a premature optimization against work-stealing. Are your idle threads slamming the bus so hard that your non-idle threads can't get any work done? Then it is time to optimize work-stealing.

Title: How to achieve a specific fraction(say 80%) of the cpus and balanced over them Id: 2594550, Count: 27 Tags: Answers: 3 AcceptedAnswer: null Created: 2010-04-07 17:28:25.0 Body:

I was wondering if it would be possible to run an app not at 100% of the CPU but at a specific fraction of the CPUs. I see different uses for this:

  • we can better balance concurrent applications (we may want to balance an app at 50% to be fair to other apps/agents/...)
  • I was also wondering whether power consumption would be better if the CPUs don't run at full throttle but at some lower level (say 80%)

What are your thoughts? Thanks. Examples are welcome :)

Popularity: 7.0 Answer #2594719, count #1, created: 2010-04-07 17:57:27.0

You can readily do this. Go to your BIOS and lower the frequency to the desired percent and while you are there, you may be able to lower the Vcore voltage of your CPU as well.
Core Parking* seems to be already there in Windows 7 and Windows Server 2008 R2. To quote:

"Core parking is a new feature that dynamically selects a set of processors that should stay idle and not run any threads based on the current power policy and their recent utilization. The scheduler will attempt to honor this selection when it decides on which processors to run threads, allowing the parked cores to enter deep idle states where they consume very little power."

(If, one wants a scheduler that achieves lower peak temperatures for processing cores, it has been done as well, for Linux. )

*It seems like there is a patent as well: Power-aware thread scheduling and dynamic use of processors

Answer #2594740, count #2, created: 2010-04-07 18:02:07.0

The power consumption can be made worse by not running the CPUs at full throttle, if they otherwise already throttle themselves. Many CPUs nowadays support downclocking based on load, to the point that they use almost no power if they're allowed to sleep. The best strategy for getting to sleep quickly is to get whatever work is necessary done as quickly as possible. If you artificially throttle to 80%, then your CPU is awake longer, which will eat the power savings you get from running at a lower clock speed. On the other hand, if you know the CPU will be busy all the time no matter what, then it'll never get put to sleep anyway.

Answer #2594770, count #3, created: 2010-04-07 18:06:33.0

This job belongs to the operating system, not the application.

Some operating systems support segmentation or control groups, or zones or containers. Whatever they call them, they allow placing limits on an application's use of resources.
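
For example, on Linux one hedged way this might look is with the cgroup "cpu" controller and CFS bandwidth control. This assumes a kernel that supports it, the controller mounted at /sys/fs/cgroup/cpu, and a hypothetical group "capped" already created (mkdir) with the target PID written to its tasks file:

 /* Sketch only: cap a cgroup at roughly 80% of one CPU. Paths and
    availability vary by distribution and kernel version. */
 #include <stdio.h>

 static int write_value(const char *path, const char *value)
 {
     FILE *f = fopen(path, "w");
     if (!f)
         return -1;
     fputs(value, f);
     fclose(f);
     return 0;
 }

 int main(void)
 {
     /* 80 ms of CPU time per 100 ms period: roughly "80% of one CPU". */
     write_value("/sys/fs/cgroup/cpu/capped/cpu.cfs_period_us", "100000");
     write_value("/sys/fs/cgroup/cpu/capped/cpu.cfs_quota_us", "80000");
     return 0;
 }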

Title: My OpenCL kernel is slower on faster hardware.. But why? Id: 2620599, Count: 28 Tags: Answers: 5 AcceptedAnswer: null Created: 2010-04-12 08:02:09.0 Body:

As I was finishing coding my project for a multicore programming class I came upon something really weird that I wanted to discuss with you.

We were asked to create any program that would show significant improvement in being programmed for a multi-core platform. I’ve decided to try and code something on the GPU to try out OpenCL. I’ve chosen the matrix convolution problem since I’m quite familiar with it (I’ve parallelized it before with open_mpi with great speedup for large images).

So here it is, I select a large GIF file (2.5 MB) [2816X2112] and I run the sequential version (original code) and I get an average of 15.3 seconds.

I then run the new OpenCL version I just wrote on my MBP's integrated GeForce 9400M and I get timings of 1.26 s on average. So far so good, it's a speedup of 12X!!

But now I go in my energy saver panel to turn on the “Graphic Performance Mode” That mode turns off the GeForce 9400M and turns on the Geforce 9600M GT my system has. Apple says this card is twice as fast as the integrated one.

Guess what, my timings using the kick-ass graphics card are 3.2 seconds on average… My 9600M GT seems to be more than two times slower than the 9400M.

For those of you that are OpenCL inclined, I copy all data to remote buffers before starting, so the actual computation doesn't require round trips to main RAM. Also, I let OpenCL determine the optimal local work-size, as I've read they've done a pretty good implementation at figuring that parameter out.

Anyone has a clue?

edit: full source code with makefiles here http://www.mathieusavard.info/convolution.zip

 cd gimage
 make
 cd ../clconvolute
 make

Put a large input.gif in clconvolute and run it to see the results.
Popularity: 51.0 Answer #2622452, count #1, created: 2010-04-12 13:38:48.0

I ran into the same issue when I was testing out OpenCL on my MacBook. I believe it's because the GeForce 9400M has a higher bus speed to the main memory bank than the GeForce 9600M GT. So even though the GeForce 9600M GT has much more power than the GeForce 9400M, the time required to copy the memory to the GPU is too long to see the benefit of the more powerful GPU in your situation. It could also be caused by inappropriate work group sizes.

Also I found this site very helpful in my OpenCL experience.

http://www.macresearch.org/opencl

Answer #2629364, count #2, created: 2010-04-13 12:13:46.0

The performance is not the only difference between a GeForce 9400M and a GeForce 9600M GT. A big one is that the latter is a discrete GPU. With this comes a slew of differences, amongst which the following can have an impact:

  • tendency of drivers to batch more commands
  • memory is not uniform. the GPU generally only accesses its own memory, and the driver moves memory back and forth over the PCI-E bus.

I'm sure I'm missing some...

Here are a bunch of ideas you can try:

  • avoid calling clFinish. The way you call it between the memory load and the execution forces the driver to do more work than necessary. It stalls the GPU.
  • profile your code to see what is taking the time. I'm not aware of support for CL performance analysis yet, but with your clFinish calls, it gives you a 1st order estimate by simply measuring the CPU side. Note that it's hard in general to distinguish what is due to latency and what is due to throughput.
Answer #2866876, count #3, created: 2010-05-19 15:21:28.0

I get the same results, and I'm unsure why. My kernel involves very minimal copying to/from (I pre-send all needed data for all kernel calls, and only return a 512x512 image). It's a raytracer, so the kernel work vastly outweighs the copy back (400+ ms vs 10 ms). Still, the 9600M GT is about 1.5x-2x slower.

According to nVidia's listing, the 9600M GT should have 32 SPs (twice the number of the 9400M). It's presumably clocked higher too.

The 9600M GT does seem faster in some cases, e.g. games. See this link: http://www.videocardbenchmark.net/video_lookup.php?cpu=GeForce+9600M+GT

According to ars technica:

Furthermore, an interesting tidbit about Snow Leopard's implementation is revealed by early tests. Though Snow Leopard doesn't seem to enable dual GPUs or on-the-fly GPU switching for machines using the NVIDIA GeForce 9400M chipset—a limitation carried over from Leopard—it does appear that the OS can use both as OpenCL resources simultaneously. So even if you have the 9600M GT enabled on your MacBook Pro, if OpenCL code is encountered in an application, Snow Leopard can send that code to be processed by the 16 GPU cores sitting pretty much dormant in the 9400M. The converse is not true, though—when running a MacBook Pro with just the 9400M enabled, the 9600M GT is shut down entirely to save power, and can't be used as an OpenCL resource.

This seems to be the opposite of what we are seeing. Also, I am explicitly setting up a CL context on only one device at a time.

There are some suggestions in the ars forums that the 9600M GT doesn't support doubles as well, which would explain this problem. I might try to write up a synthetic benchmark to test this hypothesis.

Answer #4300472, count #4, created: 2010-11-29 02:46:42.0

The 9400M is integrated into your memory controller whereas the 9600M GT is a discrete card that is connected to your memory controller via the PCI-e bus. This means that when you transfer memory to the 9400M it just allocates it in system RAM. The 9600M, on the other hand, sends the data over PCI-e to the dedicated graphics memory on the card. This transfer is what makes your benchmark seem slower.

If you would like to compare the performance of the two graphics cards you should use the OpenCL profiling function instead of the clock function you are currently using.

cl_int clGetEventProfilingInfo (cl_event event, cl_profiling_info param_name, size_t param_value_size, void *param_value, size_t *param_value_size_ret)

Pass the function the event that was created when you were enqueueing the kernel, and pass it CL_PROFILING_COMMAND_START as the second argument to get the starting point of the kernel in nanoseconds and CL_PROFILING_COMMAND_END to get the ending point of the kernel. Make sure to use this command AFTER the execution of the kernel has finished (the events hold their values until they go out of scope). You can also get the time it took to transfer the data to the device by applying this function to the events from the enqueueing of the buffer. Here is an example:

 TRACE("Invoking the Kernel") cl::vector<cl::Event> matMultiplyEvent; cl::NDRange gIndex(32,64); cl::NDRange lIndex(16,16); err = queueList["GPU"]->enqueueNDRangeKernel( matrixMultiplicationKernel, NULL, gIndex, lIndex, &bufferEvent, matMultiplyEvent); checkErr(err, "Invoke Kernel"); TRACE("Reading device data into array"); err = queueList["GPU"]->enqueueReadBuffer(thirdBuff, CL_TRUE, 0, (matSize)*sizeof(float), testC, &matMultiplyEvent, bufferEvent); checkErr(err, "Read Buffer"); matMultiplyEvent[0].wait(); for (int i = 0; i < matSize; i++) { if (i%64 == 0) { std::cout << "\n"; } std::cout << testC[i] << "\t"; } long transferBackStart = bufferEvent[0].getProfilingInfo<CL_PROFILING_COMMAND_START>(); long transferBackEnd = bufferEvent[0].getProfilingInfo<CL_PROFILING_COMMAND_END>(); double transferBackSeconds = 1.0e-9 * (double)(transferBackEnd- transferBackStart); long matrixStart = matMultiplyEvent[0].getProfilingInfo<CL_PROFILING_COMMAND_START>(); long matrixEnd = matMultiplyEvent[0].getProfilingInfo<CL_PROFILING_COMMAND_END>(); double dSeconds = 1.0e-9 * (double)(matrixEnd - matrixStart); 

This example uses the C++ wrapper but the concept should be the same.

Hope this helps.

Answer #5257090, count #5, created: 2011-03-10 08:24:42.0

I'm new to OpenCL, so I may be a bit naive, but I doubt you needed to go into the energy saver panel to switch the OpenCL compute device. I believe that you choose the device when setting up the OpenCL context in your code.

My hypothesis:

  1. When you run your code without disabling your integrated GPU first, OpenCL chooses your discrete GPU as the compute device. Your code runs on the (fast) discrete GPU.
  2. When you disable the integrated GPU first, you force the load of running the OS X GUI onto your discrete card. When you run your code, it runs on the discrete GPU, but it contends with your GUI for resources.

This answer is coming 11 months after the question was asked, but hopefully it'll be useful to someone...

Title: Stop Core Location updates then restart them with a timer Id: 2698361, Count: 29 Tags: Answers: 1 AcceptedAnswer: 2699381 Created: 2010-04-23 12:17:28.0 Body:

I was wondering if anyone could point me to (or paste in) some code to deal with turning off Core Location updates to save power.

As far as I understand it, you should stop Core Location updates as soon as you get a reading of desired accuracy. If you don't get a good accuracy reading after a certain time, you should also stop updates (presumably using a timer). Every time you stop updates, you should fire a timer (around 60 seconds) to restart Core Location and get a new reading.

Is there Apple code which does all this? The LocateMe, TaggedLocations and Locations sample code don't seem to do it.

Popularity: 15.0 Answer #2699381, count #1, created: 2010-04-23 14:35:06.0

The LocateMe example has the code you need. You just need to create a second selector to fire. LocateMe calls the following in its setup method...

 [self performSelector:@selector(stopUpdatingLocation:) withObject:@"Timed Out" afterDelay:[[setupInfo objectForKey:kSetupInfoKeyTimeout] doubleValue]]; 

It says that after a certain amount of time (kSetupInfoKeyTimeout), please call the stopUpdatingLocation method with the argument NSString = "Timed Out". Inside the stopUpdatingLocation method, [locationManager stopUpdatingLocation] is called to tell Core Location to stop.

So, all you need to do is add another Selector like this...

[self performSelector:@selector(timeToRestartCoreLocation) withObject:nil afterDelay:60.0]; 

inside the stopUpdatingLocation method, which will call the timeToRestartCoreLocation method after 60 seconds. Then inside your timeToRestartCoreLocation method, call [locationManager startUpdatingLocation] to kick off CoreLocation again.

Title: Source code of System idle process Id: 2835629, Count: 30 Tags: Answers: 1 AcceptedAnswer: null Created: 2010-05-14 16:04:32.0 Body:

Just out of interest: what is the source code of the System Idle Process? Which instructions are executed? How is the CPU made to enter power-saving mode?

Popularity: 8.0 Answer #2838777, count #1, created: 2010-05-15 03:12:47.0

System Idle Process continuously executes KiIdleLoop, with one thread for each processor. You can see this using a process viewer such as Process Explorer. This function essentially checks the Deferred Procedure Call (DPC) list and executes any pending items (e.g. for timers and hardware components). It then calls power management (PoIdle) which calls the HAL (HalProcessorIdle) so "power saving mode" can be entered. This, on x86 systems, simply consists of enabling interrupts (sti) and then the hlt instruction.
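
Purely for illustration, the x86 sequence described above would look something like this in kernel-mode C. sti and hlt are privileged instructions, so this is not runnable from an ordinary user-space program:

 /* Illustration only: the idle sequence the HAL performs on x86. */
 static inline void idle_until_interrupt(void)
 {
     __asm__ volatile ("sti; hlt");   /* enable interrupts, halt until one arrives */
 }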

Title: Does iPhone automatically switch existing 3G connections to WiFi? Id: 2873717, Count: 31 Tags: Answers: 2 AcceptedAnswer: null Created: 2010-05-20 12:38:54.0 Body:

On the iPhone, if you are currently in a TCP connection with a remote peer using 3G. If the user moves to a place with Wifi connectivity, does the iPhone OS automatically modify your connection to go through WiFi instead (assuming remote peer is also accessible via Wifi)? I ask because the docs point out that WiFi is preferred over 3G due to lower power consumption.

Popularity: 15.0 Answer #7036490, count #1, created: 2011-08-12 06:42:08.0

Yes, it does, if it sees a Wifi network you have already connected to. But you can tell the iPhone to "forget" that Wifi connection; then it will keep 3G even when entering a zone where this Wifi connection is available.

For example, as soon as I go back home, the iPhone (iOS 4) switches on my Wifi network. If I go out, it goes to 3G, and when I'm in a zone with a known connection (Orange Wifi), it switches to it.

Answer #7164950, count #2, created: 2011-08-23 17:16:34.0

The answer is no, there's nothing automatic here. If you connected on WWAN while there was no wifi connectivity, you'll stay on that WWAN connection, even when wifi becomes available. I tested this on iOS 4.2, and I don't believe there's anything different in iOS 5. I'd even argue that it couldn't realistically work any other way, period.

You can use the Reachability API to check if wifi is available and reconnect yourself.

Title: Android Nexus One - Can I save energy with color scheme? Id: 2902382, Count: 32 Tags: Answers: 4 AcceptedAnswer: 2902895 Created: 2010-05-25 06:09:02.0 Body:

I'm wondering what color-scheme is more energy-saving for AMOLED display?

I've already decided to manage c-scheme according to ambient light, thanks to this post:

Somewhat-proof, the link posted by nickf: Ironic Sans: Ow My Eyes. If you read that in a well lit room, the black-on-white will be the most pleasant to read. If you read it in a dark room, the white-on-black will be nicer.

But if I want to save battery power, should I use bright content with dark background or vice versa?

Is it possible anyway (they say it's not working for simple LCD)?

Popularity: 28.0 Answer #2902405, count #1, created: 2010-05-25 06:14:13.0

Well, that wikipedia article you linked to says:

For example, our measurement shows that a commercial QVGA OLED display consumes 3 and 0.7 Watts showing black text on a white background and white text on a black background, respectively.

So according to that, a white-on-black scheme would use less power than a black-on-white scheme.

The AnandTech article you linked to is talking about regular LCD monitors, which is quite different technology to AMOLED.

I guess the best thing to do is give it a try: try on one colour scheme and see how long you can go between charges, then try on a different scheme.

Answer #2902895, count #2, created: 2010-05-25 07:51:37.0

Yes, you can. The best you can do is use a red on black color scheme. Blue is more expensive than green, green more than red. White is the worst :)

To give you an idea, a static blue wallpaper (for instance a jellyfish in an aquarium) consumes more battery than the 3D galaxy live wallpaper.

Answer #3411541, count #3, created: 2010-08-05 03:28:55.0

Black!! I Google in black on my phone at http://bGoog.com to make my battery last longer. Since using black backgrounds I recharge my phone a lot less! There's info on it at bGoog.com/about

Answer #3411627, count #4, created: 2010-08-05 03:52:48.0

The more black on your screen, the better. Black on black would save a whole lot of power on OLED screens, but is not too readable. So you find a balance between readability and power saving, with as much black as possible.

In order from least to most power:

  1. All black
  2. Single colour (eg red) text on black
  3. Compound colour (eg yellow, cyan, white) text on black
  4. Any background colour other than black

Note that none of this applies to LCD screens, only OLED. For LCD, the difference is negligible to the point you can forget about it. Sometimes, all-white even uses slightly less power, but it is nowhere near as much difference as with OLED.

Title: Can we optimize code to reduce power consumption? Id: 2905958, Count: 33 Tags: Answers: 9 AcceptedAnswer: 2906062 Created: 2010-05-25 15:13:25.0 Body:

Are there any techniques to optimize code in order to ensure lower power consumption? The architecture is ARM; the language is C.

Popularity: 37.0 Answer #2906000, count #1, created: 2010-05-25 15:19:16.0

Optimizing code to use less power is, effectively, just optimizing code. Regardless of whether your motives are monetary, social, political or the like, fewer CPU cycles = less energy used. What I'm trying to say is I think you can probably replace "power consumption" with "execution time", as they would, essentially, be directly proportional - and you therefore may have more success when not "scaring" people off with a power-related question. I may, however, stand corrected :)

Answer #2906019, count #2, created: 2010-05-25 15:21:13.0

If the processor is tuned to use less power when it needs less cycles, then simply making your code run more efficiently is the solution. Else, there's not much you can do unless the operating system exposes some sort of power management functionality.

Answer #2906045, count #3, created: 2010-05-25 15:24:22.0

Keep IO to a minimum.

Answer #2906062, count #4, created: 2010-05-25 15:26:07.0

From the ARM technical reference site:

The features of the ARM11 MPCore processor that improve energy efficiency include:

  • accurate branch and sub-routine return prediction, reducing the number of incorrect instruction fetch and decode operations
  • use of physically addressed caches, which reduces the number of cache flushes and refills, saving energy in the system
  • the use of MicroTLBs reduces the power consumed in translation and protection lookups each cycle
  • the caches use sequential access information to reduce the number of accesses to the tag RAMs and to unwanted data RAMs.

In the ARM11 MPCore processor extensive use is also made of gated clocks and gates to disable inputs to unused functional blocks. Only the logic actively in use to perform a calculation consumes any dynamic power.

Based on this information, I'd say that the processor does a lot of work for you to save power. Any power wastage would come from poorly written code that does more processing than necessary, which you wouldn't want anyway. If you're looking to save power, the overall design of your application will have more effect. Network access, screen rendering, and other power-hungry operations will be of more concern for power consumption.

Answer #2906067, count #5, created: 2010-05-25 15:26:29.0

On some ARM processors it's possible to reduce power consumption by putting the voltage regulator in standby mode.

Answer #2906080, count #6, created: 2010-05-25 15:28:02.0

Yes. Use a profiler and see what routines are using most of the CPU. On ARM you can use some JTAG connectors, if available (I used Lauterbach both for debugging and for profiling). The main problem is generally to put your processor, when in idle, in a low-consumption state (deep sleep). If you cannot reduce the CPU percentage used by much (for example from 80% to 50%) it won't make a big difference. Depending on what operating systems you are running the options may vary.
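
As a sketch of the idle part (dev_fd and process_data are placeholders, not a specific driver API): block in the kernel instead of spinning, so the core can drop into a low-power state between events.

 #include <poll.h>
 #include <unistd.h>

 extern int dev_fd;                            /* assumed: an already-open device or socket */
 extern void process_data(const char *buf, ssize_t n);

 void event_loop(void)
 {
     char buf[256];
     struct pollfd pfd = { .fd = dev_fd, .events = POLLIN };

     for (;;) {
         /* Bad for power: while (!data_ready()) {}  -- keeps the CPU at 100%. */
         /* Better: sleep in the kernel until the device actually has data.   */
         if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
             ssize_t n = read(dev_fd, buf, sizeof buf);
             if (n > 0)
                 process_data(buf, n);
         }
     }
 }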

Answer #2906085, count #7, created: 2010-05-25 15:28:42.0

If you are not running Windows XP+ or a newer version of Linux, you could run a background thread which does nothing but HLT.

This is how programs like CPUIdle reduce power consumption/heat.

Answer #2906103, count #8, created: 2010-05-25 15:30:29.0

The July 2010 edition of the Communications of the ACM has an article on energy-efficient algorithms which might interest you. I haven't read it yet so cannot impart any of its wisdom.

Answer #2909735, count #9, created: 2010-05-26 02:08:30.0

Try to stay in on-chip memory (cache) for idle loops, keep I/O to a minimum, and keep bit flipping to a minimum on busses. NV memory like PROMs and flash consumes more power to store zeros than ones (which is why they erase to ones; it is actually a zero, but the transistor(s) invert the bit before you see it: zeros are stored as ones and ones as zeros, which is also why they degrade to ones when they fail). I don't know about volatile memories; DRAM uses far fewer transistors than SRAM, but has to be refreshed.

For all of this to matter, though, you need to start with a low-power system, as the above may not be noticeable otherwise. Don't use anything from Intel, for example.

Title: how to calculate power consumption on an Android mobile that uses wifi? Id: 2960319, Count: 34 Tags: Answers: 3 AcceptedAnswer: null Created: 2010-06-02 18:12:11.0 Body:

I have implemented a routing protocol on an Android 1.6 mobile that uses a wireless (ad-hoc) network in order to exchange messages. Now I would like to evaluate it from an energy consumption point of view. The base would be to try to calculate the energy used to transmit a single packet. Does anybody have any idea how to do that? Software/hardware solutions are welcome! Thanks :)

Popularity: 36.0 Answer #2960685, count #1, created: 2010-06-02 18:55:03.0

I believe that there are a number of apps available that will track the power consumption of an application. You could use one of those to observe your application during an intense test.

Edit: One app to try would be PowerTutor. Note that, as it says, power consumption is likely to vary a lot depending on the actual hardware. You might want to test it out on a few different platforms to see the differences.

Answer #2976636, count #2, created: 2010-06-04 18:11:34.0

I don't believe that any application will give you any form of meaningful result. You really need to be looking at a hardware solution, and it will likely need to be home-brew.

The set-up I have used for power measurement on mobiles consists of the following:

  • Regulated power supply: this should be capable of delivering at least 3 amps.
  • A sampling ammeter. You will probably need to design this for yourself, using a small value precision resistor and an ADC to measure the voltage drop over the resistor (which will give you the current)
  • The ADC can be a proprietary acquisition card - these are readily available from sources like RS, depending on your location. It probably wants to have a sampling rate of around 1kHz for this application.
  • Software for capturing data from the ADC. National Instruments LabView is most often used for this sort of application. Most decent acquisition cards have LabView support, although you can use anything you like. They nearly all have C APIs as well.
  • A dummy battery to enable this configuration to be connected to the phone. The easiest way to do this is normally to trash a real battery (carefully, if it's Li-Ion!).

When you are designing the system, remember that the voltage drop across a small value resistor for a 6 or 9 volt supply will be very small, so your ADC needs to be pretty sensitive for you to get meaningful results.

Once you have all of this, you'll be able to observe device current over time. You will find that this varies far more than you may expect. The phone will be turning on and off all sorts of circuitry all of the time. In particular, you will see fairly large power peaks when the cellular network is being accessed.

After a bit of investigation, you'll be able to see when the WiFi is powered up, and the transmit bursts in particular.

Good luck.

Answer #14978184, count #3, created: 2013-02-20 11:00:28.0

Yes, PowerTutor is one of the most reliable applications, produced as the result of a PhD thesis at the University of Michigan. PowerTutor gives you the energy consumption of all running applications on Android. However, this application works more accurately on the HTC G1, HTC G2 and Nexus One, since it was developed for those products.

Title: How to do power save on a ARM-based Embedded Linux system? Id: 3092498, Count: 35 Tags: Answers: 2 AcceptedAnswer: 3145367 Created: 2010-06-22 11:09:49.0 Body:

I plan to develop a nice little application that will run on an arm-based embedded Linux platform; however, since that platform will be battery-powered, I'm searching for relevant information on how to handle power save.

It is kind of important to get decent battery time.

I think the Linux kernel implemented some support for this, but I can't find any documentation on this subject.

  • Any input on how to design my program and the system is welcome.

  • Any input on how the Linux kernel tries to solves this type of problem is also welcome.

Other questions:

  • How much does the program in user space need to do?

  • And do you need to modify the kernel?

  • What kernel system calls or APIs are good to know about?


Update:

It seems like the folks involved with the "Free Electrons" site have produced some nice presentations on this subject.

But maybe someone else has even more information on this subject?


Update:

It seems like Adam Shiemke's idea to go look at the MeeGo project may be the best tip so far.

It may be the best battery powered Embedded Linux project out there at this moment.

And Nokia is usually kind of good at this type of thing.


Update:

One has to be careful about Android since it has a "modified" Linux kernel in the bottom, and some of the things the folks at Google have done do not use baseline/normal Linux kernels. I think that some of their power management ideas could be troublesome to reuse for other projects.

Popularity: 50.0 Answer #3145367, count #1, created: 2010-06-29 22:53:27.0

I haven't actually done this, but I have experience with the two separately (Linux and embedded power management). There are two main Linux distributions that come to mind when thinking about power management, Android and MeeGo. MeeGo uses (as far as I can tell) an unmodified 2.6 kernel with some extras hanging on. I wasn't able to find a lot on exactly what their power management strategy is, although I suspect more will be coming out about it in the near future as the product approaches maturity.

There is much more information available on Android, however. They run a fairly heavily modified 2.6 kernel. You can see a good bit on the different strategies implemented in http://elinux.org/Android_Power_Management (as well as kernel drama). Some other links:

https://groups.google.com/group/android-kernel/browse_thread/thread/ee356c298276ad00/472613d15af746ea?lnk=raot&pli=1

http://www.ok-labs.com/blog/entry/context-switching-in-context/

I'm sure that you can find more links of this nature. Since both projects are open source, you can grab the kernel code, and probably get further information from people who actually know what they are talking about in forums and groups.

At the driver level, you need to make sure that your drivers can properly handle suspend and shut devices off that are not in use. Most devices aimed at the mobile market offer very fine-grained support to turn individual components off, and to tweak clock settings (remember, power is proportional to clock^2).

Hope this helps.

Answer #3145504, count #2, created: 2010-06-29 23:28:12.0

You can do quite a bit of power-saving without requiring any special support from the OS, assuming you are writing (or at least have the source code for) your application and drivers.

Your drivers need to be able to disable their associated devices and bring them back up without requiring a restart or introducing system instability. If your devices are connected to a PCI/PCIe bus, research which power states they support (D0 - D3) and what your driver needs to do to transition between these low-power modes. If you are selecting hardware devices to use, look for devices that adhere to the PCI Power Management Specification or have similar functionality (such as a sleep mode and a "wake up" interrupt signal).

When your device boots up, every device that has the ability to detect whether it is connected to anything needs to do so. If any ports or buses detect that they are not being used, power them down or put them to sleep. A port running at full power but sitting unused can waste more power than you might think it would. Depending on your particular hardware and use case, it might also be useful to have a background app that monitors device usage, identifies unused/idle resources, and acts appropriately (like a "screen saver" for your hardware).

Your application software should make sure to detect whether hardware devices are powered up before attempting to use them. If you need to access a device that might be placed in a low-power mode, your application needs to be able to handle a potentially lengthy delay in waiting for the device to wake up and respond. Your applications should also be considerate of a device's need to sleep. If you need to send a series of commands to a hardware device, try to buffer them up and send them out all at once instead of spacing them out and requiring multiple wakeup->send->sleep cycles.
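
A minimal sketch of that buffering idea (dev_fd, the buffer size and the commands are placeholders, not a real driver API): accumulate commands in RAM and push them out in one burst, so the device wakes up once instead of once per command.

 #include <stddef.h>
 #include <string.h>
 #include <unistd.h>

 #define CMD_BUF_SIZE 4096
 static unsigned char cmd_buf[CMD_BUF_SIZE];
 static size_t cmd_len;

 extern int dev_fd;   /* assumed: an already-open descriptor for the device */

 void queue_command(const void *cmd, size_t len)
 {
     /* Accumulate commands in RAM instead of waking the device each time. */
     if (cmd_len + len <= CMD_BUF_SIZE) {
         memcpy(cmd_buf + cmd_len, cmd, len);
         cmd_len += len;
     }
 }

 void flush_commands(void)
 {
     /* One wake-up -> one burst -> the device can go back to sleep. */
     size_t off = 0;
     while (off < cmd_len) {
         ssize_t n = write(dev_fd, cmd_buf + off, cmd_len - off);
         if (n <= 0)
             break;
         off += (size_t)n;
     }
     cmd_len = 0;
 }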

Don't be afraid to under-clock your system components slightly. Besides saving power, this can help them run cooler (which requires less power for cooling). I have seen some designs that use a CPU that is more powerful than necessary by a decent margin, which is then under-clocked by as much as 40% (bringing the performance down to the original level but at a fraction of the power cost). Also, don't be afraid to spend power to save power. That is, don't be afraid to use CPU time monitoring hardware devices for opportunities to disable/hibernate them (even if it will cause your CPU to use a bit more power). Most of the time, this tradeoff results in a net power savings.

Title: How to detect backlight is on? Id: 3156454, Count: 36 Tags: Answers: 1 AcceptedAnswer: 3156880 Created: 2010-07-01 09:17:51.0 Body:

My application requires resources and drains the battery while it is working, so I want a way to know when the user is not using the phone.

Is there a way to use the built-in power save mode on the BlackBerry, so that if the screen dims the application's work is suspended, and when the user wakes the device back up with a keypress or trackball movement the application's work resumes?

Thanks in advance.

Popularity: 4.0 Answer #3156880, count #1, created: 2010-07-01 10:21:03.0

Is there a way to use the built-in power save mode on the BlackBerry, so that if the screen dims the application's work is suspended, and when the user wakes the device back up with a keypress or trackball movement the application's work resumes?

This happens automatically, with no code changes required. Unless something is holding a WakeLock, the CPU will turn off sometime after the screen goes dark. The CPU will turn back on when the user presses the power button.

Title: Find out which apps are using Battery Life Id: 3239490, Count: 37 Tags: Answers: 1 AcceptedAnswer: 3239626 Created: 2010-07-13 16:58:19.0 Body:

My devices are having power issues. The battery is not lasting as long as we would like.

There are several components that I could guess are causing the battery issues.

Right now the best way I can see to find the culprit is to go through one by one and disable each of them, then conduct a test (that is about 6 hours long).

To get a true view of what is going on this would mean weeks of testing each part.

Is there a better way? Something that can measure power consumption on a windows mobile device? Maybe show battery drain against CPU cycles or something like that?

Any help is appreciated.

Popularity: 8.0 Answer #3239626, count #1, created: 2010-07-13 17:16:32.0

There's really very little that software is going to be able to do to show you what your power consumption is. Some batteries can provide information, but that assumes that you have a device that has one of those batteries and that the driver is making use of that ability. It's also not always terribly accurate, so even if you did have a device with such a battery and a driver that supported it, I'm still not sure I'd trust it.

Personally I'd solder some wires into the battery terminals and put in a meter to look at the amperage draw. I'd then start and stop different apps and see what's going on.

A different tack would be to run something like Kernel Tracker (it comes with the eval version of Platform Builder). It can show you every thread scheduled, and from that you can try to deduce which app is getting a lot of processor time, and thereby using more power. Keep in mind, though, that things like radios and the backlight probably draw way more power than the processor.

Title: Are either the IPad or IPhone capable of OpenCL? Id: 3258257, Count: 38 Tags: Answers: 4 AcceptedAnswer: 3259121 Created: 2010-07-15 17:35:01.0 Body:

With the push towards multimedia-enabled mobile devices this seems like a logical way to boost performance on these platforms, while keeping general-purpose software power efficient. I've been interested in the iPad hardware as a development platform for UI and data display/entry usage. But I am curious how much processing capability the device itself is capable of. OpenCL would make it a JUICY hardware platform to develop on, even though the licensing seems like it kinda stinks.

Popularity: 161.0 Answer #3258320, count #1, created: 2010-07-15 17:45:11.0

OpenCL? Not yet. A good way of guessing the next public frameworks in iOS is by looking at the Private Frameworks directory. If you see what you are looking for there, then there's a chance. If not, then wait for the next release and look again in the private stuff. I guess CoreImage is coming first because OpenCL is too low level ;) Anyway, this is just a guess.

Answer #3259121, count #2, created: 2010-07-15 19:16:16.0

OpenCL is not yet part of iOS.

However, the newer iPhones, iPod touches, and the iPad all have GPUs that support OpenGL ES 2.0. 2.0 lets you create your own programmable shaders to run on the GPU, which would let you do high-performance parallel calculations. While not as elegant as OpenCL, you might be able to solve many of the same problems.

Additionally, iOS 4.0 brought with it the Accelerate framework which gives you access to many common vector-based operations for high-performance computing on the CPU. See Session 202 - The Accelerate framework for iPhone OS in the WWDC 2010 videos for more on this.

Answer #5182480, count #3, created: 2011-03-03 15:03:13.0

http://www.macrumors.com/2011/01/14/ios-4-3-beta-hints-at-opencl-capable-sgx543-gpu-in-future-devices/

iPad2's GPU, PowerVR SGX543 is capable of OpenCL.

Let's wait and see which iOS release will bring OpenCL APIs to us.:)

Answer #7559053, count #4, created: 2011-09-26 17:45:52.0

Following from nacho4d:

There is indeed an OpenCL.framework in iOS5s private frameworks directory, so I would suppose iOS6 is the one to watch for OpenCL.

Actually, I've seen it in OpenGL-related crash logs for my iPad 1, although that could just be CPU (implementing parts of the graphics stack perhaps, like on OSX).

Title: hybrid algorithms in energy efficiency for wireless lan Id: 3412026, Count: 39 Tags: Answers: null AcceptedAnswer: null Created: 2010-08-05 05:38:44.0 Body:

I am looking for hybrid algorithms for energy efficiency in wireless LANs. I am looking for example code in C, C++, MATLAB or any other language that I can easily use.

I have already written the RSA and ECC algorithms, but I don't have any idea how to implement a hybrid algorithm. Would you help me with this problem?

RSA = Rivest Shamir Adelman ECC = Elliptic Curve Cryptography

Popularity: 3.0 Title: Using Java NIO for pipelined Http Id: 3538110, Count: 40 Tags: Answers: 2 AcceptedAnswer: 8643075 Created: 2010-08-21 15:54:29.0 Body:

Researching the web, I've found that pipelined HTTP is much faster and more power efficient (especially for mobile devices) than queued or parallel connections. The support from general libraries, however, seems to be small. Just recently has the widespread Apache HttpCore project gained support through its NIO module.

At least it says so on Wikipedia and a few places in the documentation. My problem is, that I have been unable to find any examples or tutorials on how to use this for sending simple piped requests. Neither the HttpCore NIO docs, nor Google codesearch has given me anything looking like Http pipelining.

Can you give me a simple example of how to use this module for sending two GETs in a pipe and handling their two answers?

Popularity: 20.0 Answer #5599722, count #1, created: 2011-04-08 19:09:36.0

I'm taking a serious look at this right now:

http://www.java2s.com/Code/Java/Network-Protocol/HttpgetwithCharBufferandByteBuffer.htm

Answer #8643075, count #2, created: 2011-12-27 09:35:06.0

I would wait till Android gets a proper implementation built in. If Google hasn't bothered using it, it might just not be worth all the trouble.

Title: Power efficient and Speed efficient architecture for Multimedia Applications Id: 3625568, Count: 41 Tags: Answers: 1 AcceptedAnswer: 4166229 Created: 2010-09-02 08:59:26.0 Body:

I am working on evaluating an embedded processor architecture which offers the features below:

  • 8 SIMD co-processing DSP-like cores,
  • Each core can do 8-way SIMD
  • Each core is an 8-execution-slot VLIW as well.

I want to run a high-end video encoder (H.264, 1080p, 60 fps) or a 3D video encoder on this processor/hardware. I am trying to perform architectural exploration and find:

  • What features should a processor have to help carry out multimedia (video/image) signal processing applications in a power/cycle/memory efficient way?

  • What peripherals, memory structures (cache memory or internal memory) and additional assembly instructions help in efficient execution of code for multimedia applications?

  • What are the most power efficient and fast processor architectures for multimedia (video/image) processing applications?

PS: It has to be low power as it is for portable applications.

Any pointers(papers/blogs) would be helpful.

thank you.

-AD.

Popularity: 5.0 Answer #4166229, count #1, created: 2010-11-12 15:36:43.0

I think that the "most power efficient and fast processor architectures for multimedia (video/image) processing" are special hardware cores that do a specific video/image encoding operation. E.g. the fastest MPEG-4 AVC encoder will be a hardware encoder, won't it?

Title: Collection with 10 elements (only) Id: 3632521, Count: 42 Tags: Answers: 2 AcceptedAnswer: 3637089 Created: 2010-09-03 01:44:13.0 Body:

I would like to make a score list with only 10 elements. Basically, a simple collection that adds a value: if the new value is higher than an existing one it is inserted and the last one drops out (everyone knows what it looks like :) ).

Secondly, I need a list of the 5 latest values that have been changed, something like a history panel.

All in all, both are very similar - a list with limited items.

Is there a neat pattern for these? Some cool snippet? I need to use Silverlight for WP7, and a low power consumption solution would be great. Should I make my own collection? Derive from one or implement an interface? Thanks in advance.

Popularity: 4.0 Answer #3632617, count #1, created: 2010-09-03 02:05:37.0

I think System.Collections.Generic.Queue<T> is exactly what you want.

Answer #3637089, count #2, created: 2010-09-03 15:12:14.0

I did something like this to limit it to 15. There seems to be no OrderBy in WP7 :/

 public void SaveScore(ScoreInfo scoreInfo)
 {
     var listOfScoreInfo = this.GetListOrDefault<ScoreInfo>(App.SCORE);
     bool isAdd = true;
     foreach (var info in listOfScoreInfo)
     {
         if (info.Name == scoreInfo.Name && info.Score == scoreInfo.Score)
             isAdd = false;
     }
     if (isAdd)
         listOfScoreInfo.Add(scoreInfo);
     listOfScoreInfo.Sort(scoreInfo.Compare);
     if (listOfScoreInfo.Count > 15)
     {
         listOfScoreInfo.RemoveAt(15);
     }
     this.AddOrUpdateValue(App.SCORE, listOfScoreInfo);
     this.Save();
 }
Title: Control LED Monitor programmatically Id: 3647400, Count: 43 Tags: Answers: 0 AcceptedAnswer: null Created: 2010-09-05 19:18:53.0 Body:

I am doing research that basically controls an LED monitor's backlight to minimize its power consumption. Currently I'm unable to find an LED monitor that provides an API to access and control the backlight (something PWM-related, I think). If anyone has done this before, please give me some help. Thanks a lot.

Popularity: 4.0 Title: How to measure power consumed by my algorithm? Id: 3655806, Count: 44 Tags: Answers: 2 AcceptedAnswer: 3656105 Created: 2010-09-07 04:34:55.0 Body:

I have an image processing algorithm running on an ARM-Cortex-A8/Ubuntu 9.01 platform and I have to measure the power consumed by my algorithm, does anyone know how to do this? Any tools available for this?

Thanks

Popularity: 8.0 Answer #3656105, count #1, created: 2010-09-07 05:56:28.0

Strictly speaking your algorithm doesn't consume power.

Presumably you have some hardware which can accurately measure the power usage of the device, so you should just be able to repeatedly run your code (on an otherwise idle device) on various test data sets and measure the cumulative power usage, and compare that with the idle power consumption of the device over the same time; the difference would be the amount of additional juice the device used running your code.

Like any kind of benchmark, you'll need to run it repeatedly in a loop to get accurate data.

As the data may change its performance characteristics, you'll need a corpus of different test data to simulate different use-cases. Talk to your QA team about it.
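
For instance, a bare-bones harness in C (run_algorithm, load_test_image and the data path are placeholders for your own code) that loops long enough for an external meter to average over a known wall-clock window:

 #include <stdio.h>
 #include <time.h>

 extern void run_algorithm(const void *image);
 extern const void *load_test_image(const char *path);

 int main(void)
 {
     const void *img = load_test_image("testdata/frame0.raw"); /* hypothetical path */
     struct timespec start, end;

     clock_gettime(CLOCK_MONOTONIC, &start);
     for (int i = 0; i < 10000; i++)          /* long enough for the meter to settle */
         run_algorithm(img);
     clock_gettime(CLOCK_MONOTONIC, &end);

     double secs = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) * 1e-9;
     printf("elapsed: %.2f s over 10000 runs\n", secs);
     /* Energy for the algorithm ~= (avg power while looping - idle power) * secs */
     return 0;
 }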

Answer #18175969, count #2, created: 2013-08-11 19:42:47.0

I think you can try PowerTOP and powerstat: measure once while the system is idle and once while running your program, and the difference might give you the necessary information.

http://www.hecticgeek.com/2012/02/powerstat-power-calculator-ubuntu-linux/

Thanks--

S Teja

Title: GPS LocationListener and phone sleeping Id: 3752347, Count: 45 Tags: Answers: 3 AcceptedAnswer: null Created: 2010-09-20 14:21:13.0 Body:

I've created a service which has a LocationListener in it. In order to keep the service running, it is set as a foreground service. I have some questions about phone power management and sleeping in these circumstances:

  1. Will phone go to sleep while such service is running?
  2. How can I save power in this situation?

Thanks in advance!

Popularity: 12.0 Answer #3756443, count #1, created: 2010-09-21 00:06:16.0

Will phone go to sleep while such service is running?

Yes.

Answer #3756536, count #2, created: 2010-09-21 00:38:24.0

How about a background Service that periodically gets launched using AlarmManager and goes back to sleep after persisting coordinates to a database or file?

Answer #8921216, count #3, created: 2012-01-19 04:58:44.0

If you don't want the service to sleep then you can keep the device awake.

Snippet:

private PowerManager.WakeLock wakeLock; //member variable 

somewhere in your service class:

 PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
 wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "some_tag");

When you're done running then you can do:

wakeLock.release(); 
Title: How to find out status of a usb device using c#? Id: 3862990, Count: 46 Tags: Answers: 2 AcceptedAnswer: null Created: 2010-10-05 10:57:34.0 Body:

I am creating a project related to the power consumed by our computer systems. My requirement is to find out the power consumed by a USB device (like a pen drive) in the system. Can anybody please help me?

Popularity: 7.0 Answer #3863583, count #1, created: 2010-10-05 12:27:55.0

I don't think you can, because the USB specification doesn't seem to mention anything about delivering those measurements. The only information I know you can get is whether the device is a low-power or high-power device. For USB 2.0 that is 100 mA or 500 mA with a voltage between 4.4 and 5.25 V. So a low-power device may consume from almost 0 to 5.25*0.1 = 0.525 W and a high-power device up to 5.25*0.5 = 2.625 W.

Unfortunately the WMI classes don't seem to give you even that information, but that could just be me looking in the wrong places.

 // using System.Management
 var USBDevices = new ManagementObjectSearcher(@"Select * From Win32_USBControllerDevice");
 foreach (var device in USBDevices.Get())
 {
     foreach (var prop in device.Properties)
     {
         Console.WriteLine(prop.Name + " : " + prop.Value);
     }
 }
Answer #3869624, count #2, created: 2010-10-06 05:08:13.0

You might have a look at http://www.sharpdevelop.net/OpenSource/SharpUSBLib/default.aspx - a USB library by the creators of SharpDevelop (at least it is hosted on the same server).

Title: Writing "Power" Efficient Code Id: 3866746, Count: 47 Tags: Answers: 4 AcceptedAnswer: 3866871 Created: 2010-10-05 18:48:06.0 Body:

Possible Duplicate:
Power Efficient Software Coding

Adobe announced at Google I/O that its next version of Flash, 10.1, is going to be more efficient for devices where power consumption matters.

This got me to thinking: how do you write code that uses less power? Are there any helpful resources regarding this topic?

My guess would be that it is a combination of:

  • reducing the complexity of your application
  • writing efficient code that is executed quickly (presumably because processing time = power consumed)
Popularity: 11.0 Answer #3866772, count #1, created: 2010-10-05 18:51:50.0

Seeing as though this is probably aimed towards embedded devices, I would venture to say that the best way to save power is to not be on, and to minimize how long the device is on. This means putting the processor to sleep and waking up only when work needs to be done. The best way I can think of to do this would be to make an application entirely interrupt-driven.
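
A minimal bare-metal-style sketch of that interrupt-driven shape (the ISR wiring and handle_event are placeholders, and WFI here is ARM-specific):

 #include <stdbool.h>

 static volatile bool event_pending;

 void timer_isr(void)                 /* assumed to be wired into the vector table */
 {
     event_pending = true;            /* keep the ISR short; defer work to main */
 }

 extern void handle_event(void);

 int main(void)
 {
     for (;;) {
         __asm__ volatile ("wfi");    /* core sleeps until the next interrupt */
         if (event_pending) {
             event_pending = false;
             handle_event();
         }
     }
 }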

Answer #3866789, count #2, created: 2010-10-05 18:54:02.0

In addition to Kevin's suggestion, I would think that minimizing Internet communications would help. This would include fetching data in bulk so more time can be spent asleep.

Answer #3866801, count #3, created: 2010-10-05 18:56:03.0

Also keep in mind that accessing devices like drives and wifi increases power consumption. Try to minimize access to such devices.

Answer #3866871, count #4, created: 2010-10-05 19:06:56.0

There's actually one much bigger way to reduce power consumption that hasn't been touched on.

Let's take a computer and divide all functions into two basic groups. Those implemented in hardware and those implemented in software.

If a function is implemented in hardware (that is- there is circuitry for which you can put the inputs on one set of wires and the outputs come out another set of wires) then the power consumption is equal to the power consumed in the total number of gates. The clock ticks one time (draining a little power) and the bus goes hot for the output (draining a little power).

If a function is implemented in software (that is- there is no single circuit which is used to implement the function) then it requires the use of multiple circuits, multiple clock cycles, and often-times lots of memory calls. Keep in mind that SRAM (used for processor registers) is made of D flip-flops which are constantly draining power so long as they are in use.

As a simple example, let's look at the H.264 encoder. H.264 is a video encoding used by QuickTime videos. It's also used in MPEG videos, many AVIs, and it's used by Skype. Because it's so common someone sat down and found a way to make a chip in hardware to which you feed the encoded file on one end and the red, green, and blue video channels come out the other end.

Before this chip existed (and before Flash 10.1) you had to decode this using software. Decoding it involves lots of sines and cosines. Sine and cosine are transcendental functions (that is- there is no way to write them in the four basic math operations without an infinite series). This means that the best you could do was run a loop 32-64 times, getting gradually more accurate, with each iteration of the loop adding, multiplying, and dividing. Each iteration of the loop also moves values in and out of registers (which- as you recall, uses power).

Flash used to decode video by mathematically decoding it in software. Now it just says "pass the video to the H.264 chip". Of course it also has to check for the existence of this chip and use software if it doesn't exist. This means Flash, as a whole, is now larger. But on any system (like HTC phones) with an H.264 chip, it now uses less power.

Apply this same logic for:

  • Multiplying (adding multiple times in software; see the small sketch after this list)
  • Modulus (an infinite series in software)
  • Comparing (subtracting and checking if negative in software)
  • Drawing (sines/cosines/nastiness in software. Easy to pass to a videocard)
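As a toy illustration of the first bullet above (not anything Flash or a real codec does), here is what a "software multiply" looks like next to the hardware one:

class MultiplyDemo {
    // Illustrative only; assumes b >= 0. Every pass touches registers and the
    // loop counter, where the hardware version is a single multiply instruction.
    static long softwareMultiply(long a, long b) {
        long result = 0;
        for (long i = 0; i < b; i++) {
            result += a;      // one add plus loop bookkeeping per iteration
        }
        return result;
    }

    static long hardwareMultiply(long a, long b) {
        return a * b;         // compiles to the CPU's multiply instruction
    }
}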
Title: ALSA: Power Saving Guidelines Id: 3971494, Count: 48 Tags: Answers: 2 AcceptedAnswer: 4137119 Created: 2010-10-19 18:33:22.0 Body:

Does anyone know of a set of power-saving guidelines for ALSA anywhere? For example...

  • What is the best state to put the PCM stream in when no sound is being played?
  • Is there anything that can be disabled in the lib that would save power?
  • What NOT to do?
Popularity: 4.0 Answer #3976989, count #1, created: 2010-10-20 10:49:42.0

If you are not playing sound, you should not be drawing significant power. The good question to ask yourself is: can I measure my power consumption? "Premature optimisation is the root of all evil" is also true when it comes to power consumption.

If you can't measure it, then you will probably optimize the wrong thing.

What you should aim for is finding anything that will keep your processor awake. Are you looping until some variable is set, or are you waiting for an interrupt?

How often do you write to your sound device? Can you increase the buffer sizes and reduce the number of writes? Is your hardware automatically taking care of this?

Answer #4137119, count #2, created: 2010-11-09 18:26:00.0

There do not appear to be any guidelines anywhere.

Title: Can I completely disable a PCI-slot in Linux? Id: 4117465, Count: 49 Tags: Answers: 1 AcceptedAnswer: 4136229 Created: 2010-11-07 11:19:37.0 Body:

Like many of you, I like to play a game every once in a while. Since I use Linux for my daily tasks (programming, writing papers, browsing, etc.), I solely exploit my graphics-card capabilities in Windows while gaming.

Lately I noticed my energy bills got really high and I would like to reduce the energy consumption of my computer. My graphics card uses 110 watts at idle, whereas a low-end Radeon HD5xxx only uses 5 watts. I think my computer is powered on 40 hours a week, of which only 3 hours are spent gaming. This means I waste 202 kWh a year (!).

I figured I could just buy a DVI splitter and a low-end Radeon-card, and disable the PCI-slot of the high-end card in Linux. I Googled a bit, but I'm not sure which search-terms to use, so I haven't found anything useful.

Too long, didn't read: Is it possible to cut off the power to a PCI slot using Linux?

Popularity: 12.0 Answer #4136229, count #1, created: 2010-11-09 16:50:34.0

No.

What you're asking isn't even a "Linux" question, but a motherboard question - is it electrically possible to do this?

The answer is still no.

The only chance you would have, would be to get the spec of the chip/card which is in the slot, and see if there is a bit you can set on it which would "disable" it, or put it into some "low power mode".

Title: Android power save mode listener? Id: 4139149, Count: 50 Tags: Answers: 1 AcceptedAnswer: 4139593 Created: 2010-11-09 22:04:19.0 Body:

How can an Android listener be created to perform a task just before entering power save mode? Also: what are some of the low power options that can be controlled by this task?

Popularity: 9.0 Answer #4139593, count #1, created: 2010-11-09 22:58:43.0

How can an Android listener be created to perform a task just before entering power save mode?

There is no broadcast Intent for this. The closest is ACTION_SCREEN_OFF. The device will likely fall asleep in the near future after you receive this broadcast. And, you can only listen for this broadcast using a BroadcastReceiver registered via registerReceiver() in an activity or service or other Context.
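A small sketch of listening for that broadcast (the activity name is made up; as noted above, this receiver must be registered in code, not in the manifest):

import android.app.Activity;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Bundle;

public class ScreenOffAwareActivity extends Activity {
    private final BroadcastReceiver screenOffReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            if (Intent.ACTION_SCREEN_OFF.equals(intent.getAction())) {
                // The device will likely sleep soon: persist state, stop animations, etc.
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        registerReceiver(screenOffReceiver, new IntentFilter(Intent.ACTION_SCREEN_OFF));
    }

    @Override
    protected void onDestroy() {
        unregisterReceiver(screenOffReceiver);   // avoid leaking the receiver
        super.onDestroy();
    }
}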

Also: what are some of the low power options that can be controlled by this task?

I have no idea what this means, sorry.

Title: What is the fastest race free method for polling a lockless queue? Id: 4235721, Count: 51 Tags: Answers: 1 AcceptedAnswer: null Created: 2010-11-21 00:12:00.0 Body:

Say we have a single-producer-thread single-consumer-thread lockless queue, and that the producer may go long periods without producing any data. It would be beneficial to let the consumer thread sleep when there is nothing in the queue (for the power savings and freeing up the CPU for other processes/threads). If the queue were not lockless, the straightforward way to solve this problem is to have the producing thread lock a mutex, do its work, signal a condition variable and unlock, and for the reading thread to lock the mutex, wait on the condition variable, do its reading, then unlock. But if we're using a lockless queue, using a mutex the exact same way would eliminate the performance we gain from using a lockless queue in the first place.

The naive solution is to have the producer after each insertion into the queue lock the mutex, signal the condition variable, then unlock the mutex, keeping the actual work (the insertion into the queue) completely outside the lock, and to have the consumer do the same, locking the mutex, waiting on the condition variable, unlocking it, pulling everything off the queue, then repeat, keeping the reading of the queue outside the lock. There's a race condition here though: between the reader pulling off the queue and going to sleep, the producer may have inserted an item into the queue. Now the reader will go to sleep, and may stay so indefinitely until the producer inserts another item and signals the condition variable again. This means you can occasionally end up with particular items seeming to take a very long time to travel through the queue. If your queue is always constantly active this may not be a problem, but if it were always active you could probably forget the condition variable entirely.

AFAICT the solution is for the producer to behave the same as if it were working with a regular needs-locking queue. It should lock the mutex, insert into the lockless queue, signal the condition variable, unlock. However, the consumer should behave differently. When it wakes, it should unlock the mutex immediately instead of waiting until it's read the queue. Then it should pull as much of the queue as it can and process it. Finally, only when the consumer is thinking of going to sleep, should it lock the mutex, check if there's any data, then if so unlock and process it or if not then wait on the condition variable. This way the mutex is contended less often than it would be with a lockfull queue, but there's no risk of going to sleep with data still left on the queue.

Is this the best way to do it? Are there alternatives?

Note: By 'fastest' I really mean 'fastest without dedicating a core to checking the queue over and over,' but that wouldn't fit in the title ;p

One alternative: Go with the naive solution, but have the consumer wait on the condition variable with a timeout corresponding to the maximum latency you are willing to tolerate for an item traveling through the queue. If the desired timeout is fairly short though, it may be below the minimum wait time for your OS or still consume too much CPU.

Popularity: 21.0 Answer #4403408, count #1, created: 2010-12-09 21:41:17.0

I'm assuming you're using the lock-free single-producer single-consumer queue from the Dr Dobbs article - or something similar - so I'll use the terminology from there.

In that case, your suggested answer in the paragraph that starts "AFAICT" is good, but I think it can be optimised slightly:

  • In the consumer - as you say, when the consumer has exhausted the queue and is considering sleeping (and only then), it locks the mutex, checks the queue again, and then either
    • releases the mutex and carries on working, if there was a new item in the queue
    • or blocks on the condition variable (releasing the mutex when it awakes to find a non-empty queue, naturally).
  • In the producer:
    • First take a copy of last, call it saved_last
    • Add the item new_item as usual, then take a copy of the divider pointer, call it saved_divider.
    • If the value of saved_divider is equal to new_item, the object you just inserted, then your object has already been consumed, and your work is done.
    • Otherwise, if the value of saved_divider is not equal to saved_last, then you don't need to wake up the consumer. This is because:
      • At a time strictly after you added your new object, you know that divider had not yet reached either new_item or saved_last
      • Since you started the insertion, last has only had those two values
      • The consumer only ever stops when divider is equal to last
      • Therefore the consumer must still be awake and will reach your new item before sleeping.
    • Otherwise lock the mutex, signal the condvar then release the mutex. (Obtaining the mutex here ensures you don't signal the condvar in the time between the consumer noticing the queue is empty, and actually blocking on the condvar.)

This ensures that, in the case where the consumer tends to remain busy, you avoid locking the mutex when you know the consumer is still awake (and not about to sleep). It also minimises the time when the mutex is held, to further reduce the possibility for contention.

The above explanation is quite wordy (because I wanted to include the explanation of why it works, rather than just what the algorithm is), but the code resulting from it should be quite simple.
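For what it's worth, here is a rough Java sketch of that protocol, using a generic lock-free queue (ConcurrentLinkedQueue) rather than the Dr Dobbs divider/last queue, so the producer-side "is the consumer still awake?" optimisation described above is omitted:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class SleepyConsumerQueue<T> {
    private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<T>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    // Producer: insert lock-free, then briefly take the mutex to signal.
    public void produce(T item) {
        queue.offer(item);              // lock-free insertion
        lock.lock();
        try {
            notEmpty.signal();          // wake the consumer if it is blocked
        } finally {
            lock.unlock();
        }
    }

    // Consumer: drain without the mutex; only take it when about to sleep.
    public T consume() throws InterruptedException {
        for (;;) {
            T item = queue.poll();      // lock-free read
            if (item != null) {
                return item;
            }
            lock.lock();
            try {
                // Re-check under the mutex to close the race between the empty
                // poll() above and blocking on the condition variable.
                item = queue.poll();
                if (item != null) {
                    return item;
                }
                notEmpty.await();
            } finally {
                lock.unlock();
            }
        }
    }
}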

Of course whether it's actually worth doing will depend on a lot of things, and I'd encourage you to measure if performance is critical for you. Good implementations of the mutex/condvar primitives internally use atomic operations for most cases, so they generally only make a kernel call (the most expensive bit!) if there's a need to - i.e. if there's a need to block, or there are definitely some threads waiting - so the time saved by not calling the mutex functions may only amount to the overhead of the library calls.

Title: What kind of bugs have you fixed due to energy inefficiency issues when you develop mobile apps Id: 4361967, Count: 52 Tags: Answers: 5 AcceptedAnswer: 4362448 Created: 2010-12-05 23:27:11.0 Body:

For those who write applications for mobile phones, what kind of bugs/problems have you fixed in order to improve energy efficiency, and how much the fix improves?

A follow-up question: is energy efficiency considered as important as features and avoiding functionality bugs when you write mobile apps?

Popularity: 10.0 Answer #4361987, count #1, created: 2010-12-05 23:31:36.0

Energy efficiency in mobile dev is analogous to memory constraints in embedded systems.

Specifically, I like GPS apps and so make sure that the GPS is only on for the bare minimum of time. Of course, when there are bugs that are introduced that keep the GPS turned on too long they go to the top of the list to get fixed.

So, the short answer is: Yes, energy efficiency is definitely as important as features.

Answer #4361997, count #2, created: 2010-12-05 23:33:25.0

EE is important especially if the application is running constantly in the background.

We had to replace polling methods with event based methods whenever possible. If it was not possible we reduced the polling frequency.

Also, reducing file reads/writes to a minimum reduces battery consumption considerably.

Answer #4362206, count #3, created: 2010-12-06 00:21:41.0
  1. Process images and calculations on the server for low-CPU phones rather than using the phone's CPU (not as applicable on iPhone and Android handsets)
  2. Draw to the screen only when necessary rather than endlessly
  3. Save state at all times so the user can re-enter the application where they left off if an interrupt causes your application to be placed into the background
  4. Avoid running in the background wherever possible: do you really need to, or can it wait until the application has focus?
  5. Avoid using fine-grained location where a coarse location would do (GPS vs. cellular location)
  6. Use push over pull wherever possible, to save polling the network
Answer #4362413, count #4, created: 2010-12-06 01:12:20.0

In my OpenGL-based live wallpapers, battery life is a significant issue.

Keep sensor use to a minimum; there are lots of different delay profiles, so use the delay that you require.

To maximize battery in an LWP I usually force a frame delay of 5ms by default. This appears to be enough time to let the CPU relax between frames and keep the usage reasonably low. You can also manage the timeout based on the current FPS and pin it to an FPS profile. E.g. the device could render at 60fps, but you just render at 30fps and sleep half the time.

For games you could do the same: just put an FPS limit in your engine and don't let it go above that (a sketch follows below).
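A minimal frame-limiter sketch of that approach; the 30fps target and the drawFrame() placeholder are illustrative choices, not any particular engine's API:

class FrameLimiterLoop implements Runnable {
    private static final long FRAME_BUDGET_NANOS = 1000000000L / 30;  // 30fps budget
    volatile boolean running = true;

    public void run() {
        while (running) {
            long frameStart = System.nanoTime();
            drawFrame();                              // render one frame
            long sleepMillis =
                (FRAME_BUDGET_NANOS - (System.nanoTime() - frameStart)) / 1000000L;
            if (sleepMillis > 0) {
                try {
                    Thread.sleep(sleepMillis);        // let the CPU relax until the next frame
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    private void drawFrame() {
        // placeholder for the actual drawing code
    }
}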

If you want to be hardcore, realize that the OLEDs used in many Android devices use more power to display light colors than dark ones. On an LCD there is a uniform backlight, but on an OLED a black pixel is effectively off and not using power. So the darker your screen, the longer your battery life should last. Something to consider in certain situations if you want to be really hardcore on the battery side of things.

Don't use the GPS, don't use 3G, and if you do, cache everything locally.

Answer #4362448, count #5, created: 2010-12-06 01:19:58.0

To answer the follow-up question first, very few customers notice any difference in energy efficiency or battery life from using a particular app. This is almost never mentioned in the App store reviews. I write power efficient code mostly because I don't want to run down my own device's batteries while testing and using my apps.

Some suggestions for iPhone apps:

  1. Write your app so that it runs well on the slowest device (iPhone 2G or 3G) with the slowest OS (4.x on a 3G). Then it can mostly be idle on the much faster current devices.

  2. In graphics routines, try not to redraw anything already drawn. Use a small CALayer or sub view for localized graphics updates/changes.

  3. Use async methods as much as possible so that your app isn't even running on the CPU most of the time.

  4. Use plain C data structures (instead of Foundation objects) and pack them so that your app's working set can stay completely resident in the very limited ARM CPU data cache, if possible.

  5. Don't do networking any more than necessary. Do the largest data transfers possible at one time so that the radios can turn off longer between your app's network use, instead of lots of continuous small transfers.

Title: A background sensor data collector in Android Id: 4416141, Count: 53 Tags: Answers: 2 AcceptedAnswer: null Created: 2010-12-11 09:53:09.0 Body:

I am now writing a program that collects sensor data, e.g. accelerometer values, for a whole day.

Currently I just use an Activity and run it for a whole day (I turn off automatic screen blanking), and I don't send any text messages or make phone calls during the day.

I've heard I can make this kind of long-running data collector run in the background using a Service. But after I checked the pedometer at http://code.google.com/p/pedometer/, I found that when the screen blacks out, the pedometer stops working. (But an application like a pedometer should work in any case, as long as the power is on.)

Although I don't care about the power cost of constantly sampling the accelerometer, I do want to black out the screen to save the screen's power and record more accelerometer data points.

I am thinking about two ways:

1. Using a Service. However, as the pedometer application showed, when the screen blacks out the service seems to stop working too! Maybe the code has bugs.

2. My application stays an Activity, but I turn the screen brightness down to 0 or totally black to save power.

My question is: for 1), does a Service have the ability to keep running even when the screen has been off for a long time? For 2), how do I change the screen brightness?

Thanks!

Popularity: 35.0 Answer #4416626, count #1, created: 2010-12-11 12:12:07.0

Concerning 1 - what you need is a remote service. This is nearly the same as a 'local' service (which is what the pedometer example uses), but it can run in the background even if no activity is bound to it. You can turn off the screen and the activity can even crash (in a bad case ;) ), but the service keeps running if you started it with startService(...) instead of bindService(...).

try getting through this and see if that helps.

Concerning 2 - you should really use (1) ;)

Answer #12495580, count #2, created: 2012-09-19 13:18:28.0

You do not need a remote service - this can be done with a local Service.

Use the command pattern instead of the binding pattern for the service i.e. use startService() / stopService() rather than bind() / unbind().

In your service onStartCommand() method, return Service.START_REDELIVER_INTENT or something similar, so that the service lives longer than the Activity.

Now the trick: to keep the service processing properly when the phone goes to sleep, you need to work with a PowerManager.WakeLock. These are quite difficult to get right, and I don't think a full explanation is needed in this answer.
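For completeness, a bare-bones sketch of that pattern; the class name and the wake-lock tag are made up, and error handling is omitted:

import android.app.Service;
import android.content.Context;
import android.content.Intent;
import android.os.IBinder;
import android.os.PowerManager;

public class CollectorService extends Service {
    private PowerManager.WakeLock wakeLock;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
        wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "MyCollector");
        wakeLock.acquire();                 // keep the CPU running while the screen is off
        // ... register sensor listeners / start collecting here ...
        return START_REDELIVER_INTENT;      // ask the system to redeliver the intent if killed
    }

    @Override
    public void onDestroy() {
        if (wakeLock != null && wakeLock.isHeld()) {
            wakeLock.release();             // never leak the wake lock
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;                        // command pattern: no binding
    }
}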

Here is some more info:


Apologies for the summary answer, but your question is quite broad in that it touches on some advanced topics.

Title: Estimating process energy usage on PCs (x86) Id: 4485153, Count: 54 Tags: Answers: 4 AcceptedAnswer: 5601187 Created: 2010-12-19 21:17:47.0 Body:

I'm trying to come up with a heuristic to estimate how much energy (say, in Joules) a process or a thread has consumed between two time points. This is on a PC (Linux/x86), not mobile, so the statistics will be used to compare the relative energy efficiency of computations that take similar wall-clock time.

The idea is to collect or sample hardware statistics such as cycle counter, p/c states or dynamic frequency, bus accesses, etc., and come up with a reasonable formula for energy usage between measurements. What I'm asking is whether this possible, and what this formula might look like.

Some challenges that come to mind: 1) Properly accounting for context switches to other processes (or threads).

2) Properly accounting for the energy used outside the CPU. If we assume negligible I/O, that means mostly RAM. How does allocation amount and/or access pattern affect energy usage? (That is, assuming I have a way to measure dynamic memory allocation to begin with, e.g., with a modified allocator.)

3) Using CPU time as an estimate is limited to coarse-grain and oft-wrong accounting, CPU energy usage only, and assumes fixed clock frequencies. It includes, but doesn't account well for, time spent waiting on RAM.

Popularity: 42.0 Answer #4485356, count #1, created: 2010-12-19 21:59:10.0

On Linux, try the PowerTOP utility. However, rather than computing absolute values in Joules, it focuses on relative power usage between various system components.

Answer #4485555, count #2, created: 2010-12-19 22:46:30.0

Intel's Energy Efficient Software Guidelines has a host of useful info, including a link to their own Application Energy Toolkit, which includes...

2) Application Energy Graphing Tool

The Application Energy Graphing Tool is an interactive tool that can measure the battery power consumption of an application over time, and log and graph the resulting data.

Application developers can use the Application Energy Graphing Tool to help them design applications that conserve battery power on mobile computer systems.

Answer #5599998, count #3, created: 2011-04-08 19:37:57.0

This is the topic of ongoing research. So don't expect any definite answers. Some publications you might find interesting are for example:

  • Chunling Hu, Daniel A. Jiménez and Ulrich Kremer, Efficient Program Power Behavior Characterization, Proceedings of the 2007 International Conference on High Performance Embedded Architectures & Compilers (HiPEAC-2007), pp. 183--197, January 2007. (pdf)

  • Adam Lewis, Soumik Ghosh, and N.-F. Tzeng, Run-time Energy Consumption Estimation Based on Workload in Server Systems, USENIX 2008, Workshop on Power Aware Computing and Systems (html pdf)

But you can easily find many more using Google Scholar and Citeseer.

Answer #5601187, count #4, created: 2011-04-08 21:57:03.0

You may be able to get a figure for the power consumption of your process, but it will only be correct in isolation. For example, if you ran two processes in parallel, you're unlikely to fit a straight line with good accuracy.

This is hard enough to do on embedded platforms with a complete break-out of every voltage rail, let alone on a PC where your one data point is the wattage from the outlet. Things you'll need to measure and bear in mind:

  • Base load ain't so base. A system idle for many seconds will be in a deeper sleep state than one which isn't. Do you measure 'deep' sleep or just idle? How do you know which you're measuring?
  • Load isn't always linear. Variable voltage: some components shift voltage up/down depending on load and frequency. Temperature: can go either way these days (not just thermal runaway).
  • Power supplies aren't the same efficiency at all loads. If you're measuring outlet wattage, you need to bear this in mind. For example, it could be 50% efficient below 100W, 90% from 100-300W and down to 80% 300W+.
  • Additional processes won't necessarily add linearly. For example, once DDR is out of idle, its base load increases, but additional processes won't make that any worse. This is even more unpredictable with multiple cores and variable frequencies.

The basic way to measure it is the obvious way: record number of watts in idle, record number of watts in use, subtract. You can try running at 50% duty cycle, 25%, 75% and so on, to draw a pretty graph (linear or otherwise). This will show up any non-linearity. Unfortunately conversion efficiency vs load for both CPU regulator and PSU will be the dominant cause. There's not much you can do to eliminate that without having a development version of the motherboard you're playing with (unlikely), or if you're lucky enough to have a PSU with a graph of efficiency vs load.

However, it's important to realize that these data points are only correct in isolation. You can do a pretty good job of modeling how these things will sum up in the system, but be very aware that it's only a good approximation at best. Think of it as being equivalent to looking at some C code for an audio codec and estimating how fast it'll run. You can get a good general idea, but expect to be wildly inaccurate when measured in reality.

Edit - Expanding a little as the above doesn't really answer how you might go about it.

Measuring power consumption: get yourself an accurate wattage meter. As I mentioned, unless you have a way to break out the individual voltage rails and measure current, the only measurement you can make is at the outlet. Alternatively, if you have access to the health monitoring status on the motherboard, and that has current (amps) reporting (rare), that can give you good accuracy and fast response times.

So, measure base wattage - pick whatever situation you think of as "base". Run your test, and measure "peak". Subtract, done. Yes, that's fairly obvious. If you have something where the difference is so small it's lost in the noise, you can try measuring energy usage over time instead (e.g. kWh). Try measuring an hour at idle vs an hour with your process running flat out, and see the total energy difference. Repeat similarly for all types of test you want to perform.

You will get noticeable wattage differences for heavy CPU, DDR and GPU users. You might notice the difference between L1 vs L2 vs DDR constrained algorithms (DDR uses much more power), if you're careful to note that the L1/L2 constrained algorithms are running faster - you need to account for energy used per "task" not continuous power. You probably won't notice hard disk access (it's actually just a watt or two and lost in the noise in a PC) other than the performance hit. One extra data point worth recording is how much "base" load increases if you have a task waking up every 100ms or so, using 1% of CPU. That's basically what non-deep-sleep idle looks like. (This is a hack and 100ms is a guess)

Beware that 1% may be different from 1% at another time, if you have a CPU with frequency changing policies enabled.

One final big note: it's of course energy you should be measuring, just as you titled the question. It's very easy to make the mistake of benchmarking power consumption of one task vs another and to conclude one is more expensive... if you forget about the relative performance of them. This always happens with bad tech journalists benchmarking hard disk vs SSD, for example.
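A toy calculation of that point, with invented numbers: compare tasks by energy above baseline, not by instantaneous wattage.

class EnergyNotPower {
    public static void main(String[] args) {
        double baseWatts = 60.0;                        // idle draw at the outlet
        double taskAWatts = 95.0, taskASeconds = 10.0;  // fast but power-hungry
        double taskBWatts = 70.0, taskBSeconds = 60.0;  // "low power" but slow

        double taskAJoules = (taskAWatts - baseWatts) * taskASeconds;  // 350 J
        double taskBJoules = (taskBWatts - baseWatts) * taskBSeconds;  // 600 J

        // Task B looks cheaper on a wattmeter, but costs more energy overall.
        System.out.println("Task A: " + taskAJoules + " J, Task B: " + taskBJoules + " J");
    }
}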

On embedded platforms with current monitoring across many rails, I've done measurements down to nanojoules per instruction. It's still difficult to account for energy usage by thread/process because there's a lot of load that's shared by many tasks, and it can increase/decrease outside of its timeslice. On a PC, I'm not sure you'll manage to get as fine grained as that :)

Title: Advantage and disadvantage of spanning tree with even distance Id: 4575486, Count: 55 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-01-01 19:53:54.0 Body:

It's New Year's Day and I still can't solve my problem with a spanning tree algorithm. I can't insert pictures yet, so I have to try to explain the environment with words.

There are 36 nodes and the distance to every node is even (uniform). The question is: if the distances are even, it doesn't matter which way a message is passed from the node with ID 1 (the root) to the last node with ID 36. Because the distances are even there's no time-saving, energy-saving or message-saving algorithm, right? I hope someone understands my question.

edited:

  1. Environment

    1 - 2 - 3 - 4 - 5 - 6
    |   |   |   |   |   |
    7   8   9   10  11  12
    |   |   |   |   |   |
    13  14  15  16  17  18
    |   |   |   |   |   |
    19  20  21  22  23  24
    |   |   |   |   |   |
    25  26  27  28  29  30
    |   |   |   |   |   |
    31  32  33  34  35  36

This is my choice of spanning tree. The node with ID 36 sends its information through 30, 24, 18, 12, 6, 5, 4, 3, 2, 1 (1 is the root), and then node 1 sends the information to the base station. Because there is no cost difference, it doesn't really matter which path I choose to send the information from node 36 to node 1; the cost will still be the same.

  1. My Spanning tree Algorithm

    • When the algorithm starts, only the root is marked.
    • The root sends a search message to its neighbours.
    • If a node is not marked, when it receives search messages from other nodes:
    • it marks itself
    • Selects the node with the lowest ID as its "parent" and replies "non-parent" to the other nodes
    • If the node is already marked, it replies "non-parent"
    • If a node is already marked and receives a parent message, it marks the sender as a child
  2. I can't show you guys the flowchart because I don't have the privilege to insert images.

  3. Pseudo Code (haven't done it)

  4. Conclusion - Here I should write down the advantages and disadvantages of my algorithm, but right now I can't think of any.

Popularity: 34.0 Answer #7760751, count #1, created: 2011-10-13 21:40:33.0

By "even" I think you mean "Irregardless of where I start, the distance to move one node horizontally in my diagram is always 1, and the distance to move vertically is always 6."

Your question then sounds like "Do all paths from the upper left to the lower right have the same total length?" If we restrict our attention to paths that, at each step, always move either down or to the right, then the answer is "yes".

To see this, note we need in total to make 5 hops down, and 5 hops to the right. Suppose we pick a path that does so (but not necessarily in that order.) Since all downward hops have the same cost, and all rightward hops have the same cost, we can find the total cost of the path by considering each hop in order, writing a 6 for each downward hop and 1 for each rightward hop, and add the list together.

For example, the cost of path RRDDRDRDDR is 1 + 1 + 6 + 6 + 1 + 6 + 1 + 6 + 6 + 1.

Now we can see something interesting. A different path with 5 down hops and 5 right hops will have the same list of 5 6s and 5 1s, just summed in a different order. We can now observe that addition is commutative, and conclude that these two sums must come out equal. That is, any path moving down and to the right has the same total length (35) as any other.
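If you want to convince yourself, a quick brute-force check (assuming the 6x6 grid with cost 1 per rightward hop and 6 per downward hop) enumerates every down/right path and finds only one possible total:

public class PathCostCheck {
    // Walk from (0,0) to (5,5) moving only right (cost 1) or down (cost 6),
    // collecting the total cost of every complete path.
    static void walk(int right, int down, int cost, java.util.Set<Integer> costs) {
        if (right == 5 && down == 5) { costs.add(cost); return; }
        if (right < 5) walk(right + 1, down, cost + 1, costs);
        if (down < 5)  walk(right, down + 1, cost + 6, costs);
    }

    public static void main(String[] args) {
        java.util.Set<Integer> costs = new java.util.HashSet<Integer>();
        walk(0, 0, 0, costs);
        System.out.println(costs);  // prints [35]
    }
}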

Given that, your spanning tree is as good as any other, assuming the underlying graph really is a grid.

Title: Does anyone know any Compiler which optimizes code for energy consumption for embedded devices? Id: 4746410, Count: 56 Tags: Answers: 2 AcceptedAnswer: null Created: 2011-01-20 11:17:18.0 Body:

It's a general view that faster code will consume less power because you can put the CPU into an idle state for more time. But when we talk about energy consumption, is the following a possibility:

Suppose there's an instruction sequence which executes in 1 ms, and during execution the average current consumption is, say, 40 mA, and your Vdd is 3.3 V,

so total energy consumed = V*I*t = 3.3 * 40*10^-3 * 1*10^-3 Joules = 132*10^-6 Joules,

and in another case there's an instruction sequence which executes in 2 ms, and during execution the average current consumption is 15 mA, and Vdd is 3.3 V,

so total energy consumed = V*I*t = 3.3 * 15*10^-3 * 2*10^-3 Joules = 99*10^-6 Joules.

So the question comes down to: is there any architecture which has different instruction sets for performing the same task with different current consumptions?

And if there is, then is there any compiler which takes this into account and generates code which is energy efficient?

Popularity: 19.0 Answer #4747669, count #1, created: 2011-01-20 13:26:20.0

There is none that I know of, but I think this should be possible using a compiler framework like LLVM, by adapting the instruction scheduler's weighting algorithm.

Answer #4748415, count #2, created: 2011-01-20 14:39:30.0

At the individual instruction level, things like shifting rather than multiplying would certainly lower current and therefore energy consumption, but I'm not sure I buy your example of taking twice as long but using half the current (for a given clockrate). Does replacing a multiply with a shift and add, which doubles the time, really take half the current? There's so much other stuff going on in a CPU (just clock distribution across the chip takes current) that I'd think the background current usage dominates.

Lowering the clock rate is probably the single biggest thing you can do to cut power consumption. And doing as much in parallel as you can is the easiest way to lower the clock rate. For instance, using DMA over explicit interrupts allows algorithmic processing to finish in fewer cycles. If your CPU has weird addressing modes or parallel instructions (I'm looking at you, TMS320) I'd be surprised if you couldn't halve the execution time of tight loops for well under double the current, giving a net energy savings. And on the Blackfin family of CPUs, lowering the clock allows you to lower the core voltage, dramatically decreasing power consumption. I imagine this is true on other embedded processors as well.

After clock rate, I bet that power consumption is dominated by external I/O access. In low power environments, things like cache misses hurt you twice - once in speed, once in going to external memory. So loop unrolling, for instance, might make things quite a bit worse, as would doubling the number of instructions you need for that multiply.

All of which is to say, creative system architecture will probably make much more of a power impact than telling the compiler to favor one set of instructions over another. But I have no numbers to back this up, I'd be very curious to see some.

Title: Constantly monitor a sensor in Android Id: 4752277, Count: 57 Tags: Answers: 2 AcceptedAnswer: 4752681 Created: 2011-01-20 20:34:50.0 Body:

I am trying to figure out the best way to monitor the accelerometer sensor with a polling rate of less than 0.25 milliseconds. I have implemented a UI option for the user to switch to a constant monitoring state and made the battery-drain ramifications clear. Would a remote service be better than a daemon thread, because of the way Android handles cleaning up memory and threads? The point is to make the accelerometer monitored as close to constantly as possible, battery drain be damned. And this monitoring needs to be long-running, maybe even more than 24 hours straight; again, I realize the power consumption consequences. Any suggested reading or code snippets will be appreciated.

Just a newbie looking for advice from the wisdom of the Android community. Thanks in advance,

-Steve

CLARIFICATION: I am trying to detect the instant there is a change in acceleration. My code discriminates by axis, but getting real time data from the accelerometer is my goal.

Popularity: 24.0 Answer #4752640, count #1, created: 2011-01-20 21:12:37.0

Using a specific thread to monitor and wait is the best solution, as it gives you flexibility over the wait period. This is quite efficient as it does not require any specific service.

class MonitorThread extends Thread {
    ...
    public void run() {
        for (;;) {
            long ms = 0;
            int nanos = 250000;
            ...
            // Do something or compute next delay to wait
            try {
                Thread.sleep(ms, nanos);
            } catch (InterruptedException ex) {
            }
        }
    }
}

See http://developer.android.com/reference/java/lang/Thread.html

You specify a very short delay (0.25 ms), so this will be CPU intensive. You can probably use the result of the accelerometer to increase or reduce this delay. For example, if you detect that there is no acceleration, increase the delay (it's up to you, but 100 ms seems reasonable, or even higher). As soon as you detect something, reduce the delay. All this depends on your application.
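A sketch of that adaptive-delay idea; the threshold and the 100 ms cap are arbitrary illustration values, not Android constants:

class AdaptiveDelay {
    private static final float MOTION_THRESHOLD = 0.5f;   // made-up units
    private long delayMillis = 1;                          // poll fast while motion is detected

    long nextDelay(float accelerationMagnitude) {
        if (accelerationMagnitude > MOTION_THRESHOLD) {
            delayMillis = 1;                               // motion: tighten the loop
        } else {
            delayMillis = Math.min(delayMillis * 2, 100);  // idle: back off up to ~100 ms
        }
        return delayMillis;
    }
}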

Answer #4752681, count #2, created: 2011-01-20 21:16:45.0

We did this using Android Services - they can be started from an activity but remain running in the background. That's probably what you're looking for!
Some Howtos:

  1. http://developerlife.com/tutorials/?p=356
  2. http://developer.android.com/guide/topics/fundamentals.html#procthread
Title: Powering down an ethernet PHY Id: 4847651, Count: 58 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-01-31 04:31:57.0 Body:

I am running embedded linux on an OMAP ARM (OMAP-L138). The ethernet controller on this is connected to an external PHY chip. Everything is working fine, except in some circumstances, I would like to save power and power down the PHY (but not suspend the whole system).

I know Linux can suspend the PHY easily, as when I put the whole system in a suspend to ram state, the PHY does indeed power down.

However, what I want to be able to do is to turn the PHY on and off via a user-space application, turning it on and off as I wish.

How do I achieve this? I am fairly new to linux, and I can write userspace applications in C to open device drivers and access them.

The PHY is connected via a MII interface, but I don't see a mii under /dev/? (i.e. for accessing the i2c driver, I have been doing fd = open( "/dev/i2c-0", O_RDWR );) Where is the mii driver kept? How can I access it? If only I could read and write a few registers to the PHY chip via the mii driver, then I think it would be easily achievable.

Thanks.

Popularity: 9.0 Answer #4891861, count #1, created: 2011-02-03 21:19:46.0

Find the source code in whatever driver is running the PHY (either by looking in the active kernel config, looking in the kernel messages, guessing, or grepping) and read through it.

See if it supports this. See if it supports a way to tell it to. If so, learn to use it.

If not, and you know from data sheets that the hardware supports it, add a mechanism, either as part of an existing power control scheme or just freehanded on its own. A node in sysfs seems to be the currently in-vogue generic interface for telling the kernel simple on/off option settings, with /proc being the slightly older way.

This is also one of the areas where there is one (or a few) "right" solutions that would be acceptable for getting your code upstream, and a lot of more controversial solutions that you can probably get working for your own purposes quite quickly, especially if they use mechanisms you are already familiar with. It's a judgment call based on the purpose and future of your work.

Title: Is there any battery power consumption benefit if Network Location Provider (vs. GPS) is used? Id: 4926963, Count: 59 Tags: Answers: 2 AcceptedAnswer: null Created: 2011-02-07 21:42:00.0 Body:

Is there any real battery power consumption benefit if Network Location Provider (vs. GPS) is used?

I believe there should be some benefit, but since I develop on emulator I can not prove my assumption. Does anyone have the evidence that Network Location Provider consumes less battery power than GPS Location Provider? If yes, could you tell how significant it is?

Thanks!

Popularity: 25.0 Answer #4927117, count #1, created: 2011-02-07 21:58:20.0

The difference is likely extremely significant. GPS hardware uses—relative to other components of the phone—a lot of power, which is why phones often get warm when their GPS is active. The network location service, on the other hand, just relies on the cell towers to which the phone is connected anyway (and possibly local wifi networks, though I'm not sure about that), and thus should cause little if any extra power consumption.

Answer #6693668, count #2, created: 2011-07-14 13:10:49.0

I have experienced significant battery drain while having GPS location turned on vs having it turned off on my Android phone. I always leave it off, except when I actually need it for traveling as a GPS. But then my phone won't last all day, like it will without using GPS.

Title: get instant energy consumption Id: 4946600, Count: 60 Tags: Answers: 2 AcceptedAnswer: null Created: 2011-02-09 15:03:00.0 Body:

I am looking to get the instantaneous energy consumption, in shell or C++.

Any ideas?

Thanks

Popularity: 5.0 Answer #4946730, count #1, created: 2011-02-09 15:12:37.0

Your question could do with a bit more detail, but if I understand you correctly a program named Joulemeter does this the following way:

Joulemeter estimates the energy usage of a VM, computer, or software by measuring the hardware resources (CPU, disk, memory, screen etc) being used and converting the resource usage to actual power usage based on automatically learned realistic power models.

That is one way to go. If you're just doing this for your own project, I guess you could throw together some hardware that measured from the wall socket and gave you the data that way. Maybe something like that exists already.

Answer #4946947, count #2, created: 2011-02-09 15:31:48.0

Well, if you have a Laptop you could use the answer presented for this similar question:

/usr/sbin/system_profiler SPPowerDataType | grep Wattage 
Title: iphone - how to disable network power saving mode Id: 5006693, Count: 61 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-02-15 16:50:57.0 Body:

Whilst making a Game Center GKMatch game, I've come to realise the iPhone networking hardware will go into power save mode if I don't transmit/receive anything within 100 ms (or something like that). Coming out of power save mode can take 500 ms, which is bad. To prevent this from happening I just send something periodically. But I don't REALLY want to do that, because it uses up people's monthly 3G data allowance.

Is there some way to programmatically disable power saving for networking?

Popularity: 12.0 Answer #5006728, count #1, created: 2011-02-15 16:53:37.0

I haven't tried this, but can you send something to localhost (127.0.0.1) periodically, to keep the connection up, without going over WWAN or wi-fi?

Title: What is the lowest power consume way to print over bluetooth on Android? Id: 5134257, Count: 62 Tags: Answers: 1 AcceptedAnswer: 5134996 Created: 2011-02-27 16:13:03.0 Body:

I'm developing an app that needs to print over Bluetooth to a Bixolon thermal printer. I have already got it working, so if you need help printing over Bluetooth, I would be happy to help you.

The dilemma: the final user of this app will print an invoice every 3-5 minutes for a period of 4-5 hours daily, so I need the lowest power consumption possible.

I'm stuck between these two possible approaches:

  • Connect to the printer every time the user needs to use it. It will take around 1.5-2 seconds to print, with the BT ID saved in the database.

  • Connect to the device just one time and leave the connection open at all times.

What is the best way?

I appreciate your help thank you! :)

Popularity: 2.0 Answer #5134996, count #1, created: 2011-02-27 18:23:59.0

I think you should not worry so much about the power consumption, because Bluetooth was designed to be a low-power replacement for classic cables. But you should read this article: Bluetooth and power consumption: issues and answers; I think it's relevant for your case.

I think the first approach you mention is not good, because first of all it will be frustrating for the user to wait for re-authentication every time, and second it will consume more energy because of the reconnection overhead.

The second approach is more economical.

Title: How can I sense if a phone is in standby or sleep mode (for Nokia)? Id: 5151872, Count: 63 Tags: Answers: 2 AcceptedAnswer: 5177400 Created: 2011-03-01 08:08:47.0 Body:

I need to sense a phone's status. If it is in sleep or standby mode I have to change my app's running code (to reduce power consumption). Is this supported by the Java ME API? (for Nokias)

Popularity: 6.0 Answer #5151951, count #1, created: 2011-03-01 08:19:07.0

From what I remember, MIDlet.pauseApp() gets called when the phone enters sleep/standby mode. Not sure about that, but it shouldn't be hard to check if this works.

Answer #5177400, count #2, created: 2011-03-03 06:23:40.0

AFAIK it's difficult to achieve with J2ME, but see this existing post, where AlexR described one solution. You can try it like that; maybe it helps you.

Title: Investigation of optimal sleep time calculation in game loop Id: 5274619, Count: 64 Tags: Answers: 5 AcceptedAnswer: 5275254 Created: 2011-03-11 15:11:21.0 Body:

When programming animations and little games I've come to know the incredible importance of Thread.sleep(n); I rely on this method to tell the operating system when my application won't need any CPU, and to make my program progress at a predictable speed.

My problem is that the JRE uses different implementations of this functionality on different operating systems. On UNIX-based (or influenced) OSes such as Ubuntu and OS X, the underlying JRE implementation uses a well-functioning and precise system for distributing CPU time to different applications, which keeps my 2D game smooth and lag-free. However, on Windows 7 and older Microsoft systems, the CPU-time distribution seems to work differently: you usually get your CPU time back after the given amount of sleep, varying by about 1-2 ms from the target, but you get occasional bursts of an extra 10-20 ms of sleep time. This causes my game to lag once every few seconds when it happens. I've noticed this problem exists in most Java games I've tried on Windows, Minecraft being a notable example.

Now, I've been looking around on the Internet to find a solution to this problem. I've seen a lot of people using only Thread.yield(); instead of Thread.sleep(n);, which works flawlessly at the cost of the currently used CPU core getting full load, no matter how much CPU your game actually needs. This is not ideal for playing your game on laptops or high energy consumption workstations, and it's an unnecessary trade-off on Macs and Linux systems.

Looking around further I found a commonly used method of correcting sleep time inconsistencies called "spin-sleep", where you only order sleep for 1 ms at a time and check for consistency using the System.nanoTime(); method, which is very accurate even on Microsoft systems. This helps for the normal 1-2 ms of sleep inconsistency, but it won't help against the occasional bursts of +10-20 ms of sleep inconsistency, since this often results in more time spent than one cycle of my loop should take all together.

After tons of looking I found this cryptic article by Andy Malakov, which was very helpful in improving my loop: http://andy-malakov.blogspot.com/2010/06/alternative-to-threadsleep.html

Based on his article I wrote this sleep method:

// Variables for calculating optimal sleep time. In nanoseconds (1 ms = 10^6 ns).
private long timeBefore = 0L;
private long timeSleepEnd, timeLeft;
// The estimated game update rate.
private double timeUpdateRate;
// The time one game loop cycle should take in order to reach the max FPS.
private long timeLoop;

private void sleep() throws InterruptedException {
    // Skip first game loop cycle.
    if (timeBefore != 0L) {
        // Calculate optimal game loop sleep time.
        timeLeft = timeLoop - (System.nanoTime() - timeBefore);
        // If all necessary calculations took LESS time than the loop budget, the max update rate was reached.
        if (timeLeft > 0 && isUpdateRateLimited) {
            // Determine when to stop sleeping.
            timeSleepEnd = System.nanoTime() + timeLeft;
            // Sleep, yield or keep the thread busy until there is no time left to sleep.
            do {
                if (timeLeft > SLEEP_PRECISION) {
                    Thread.sleep(1); // Sleep for approximately 1 millisecond.
                } else if (timeLeft > SPIN_YIELD_PRECISION) {
                    Thread.yield(); // Yield the thread.
                }
                if (Thread.interrupted()) {
                    throw new InterruptedException();
                }
                timeLeft = timeSleepEnd - System.nanoTime();
            } while (timeLeft > 0);
        }
        // Save the calculated update rate.
        timeUpdateRate = 1000000000D / (double) (System.nanoTime() - timeBefore);
    }
    // Starting point for time measurement.
    timeBefore = System.nanoTime();
}

SLEEP_PRECISION I usually put to about 2 ms, and SPIN_YIELD_PRECISION to about 10 000 ns for best performance on my Windows 7 machine.

After tons of hard work, this is the absolute best I can come up with. So, since I still care about improving the accuracy of this sleep method, and I'm still not satisfied with the performance, I would like to appeal to all of you java game hackers and animators out there for suggestions on a better solution for the Windows platform. Could I use a platform-specific way on Windows to make it better? I don't care about having a little platform specific code in my applications, as long as the majority of the code is OS independent.

I would also like to know if anyone knows about Microsoft and Oracle working on a better implementation of the Thread.sleep(n); method, or what Oracle's future plans are for improving their environment as the basis of applications requiring high timing accuracy, such as music software and games?

Thank you all for reading my lengthy question/article. I hope some people might find my research helpful!

Popularity: 62.0 Answer #5274692, count #1, created: 2011-03-11 15:16:25.0

Thread.sleep says your app needs no more time. This means that in a worst-case scenario you'll have to wait for an entire thread slice (40 ms or so).

Now in bad cases, when a driver or something takes up more time, it could be that you have to wait 120 ms (3*40 ms), so Thread.sleep is not the way to go. Go another way, like registering a 1 ms callback and starting the draw code every X callbacks.

(This is on Windows; I'd use the multimedia timer APIs to get those 1 ms resolution callbacks.)

Answer #5275254, count #2, created: 2011-03-11 16:00:16.0

You could use a cyclic timer associated with a mutex. This is IMHO the most efficient way of doing what you want. But then you should think about skipping frames in case the computer lags (you can do it with another non-blocking mutex in the timer code).

Edit: Some pseudo-code to clarify

Timer code:

While(true):
    if acquireIfPossible(mutexSkipRender):
        release(mutexSkipRender)
        release(mutexRender)

Sleep code:

acquire(mutexSkipRender)
acquire(mutexRender)
release(mutexSkipRender)

Starting values:

mutexSkipRender = 1
mutexRender = 0

Edit: corrected initialization values.

The following code works pretty well on Windows (it loops at exactly 50 fps with a precision to the millisecond):

import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.Semaphore;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        final Semaphore mutexRefresh = new Semaphore(0);
        final Semaphore mutexRefreshing = new Semaphore(1);
        int refresh = 0;
        Timer timRefresh = new Timer();
        timRefresh.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                if (mutexRefreshing.tryAcquire()) {
                    mutexRefreshing.release();
                    mutexRefresh.release();
                }
            }
        }, 0, 1000 / 50); // The timer is started and configured for 50fps

        Date startDate = new Date();
        while (true) { // Refreshing loop
            mutexRefresh.acquire();
            mutexRefreshing.acquire();

            // Refresh
            refresh += 1;
            if (refresh % 50 == 0) {
                Date endDate = new Date();
                System.out.println(String.valueOf(50.0 * 1000 / (endDate.getTime() - startDate.getTime())) + " fps.");
                startDate = new Date();
            }

            mutexRefreshing.release();
        }
    }
}
Answer #5275382, count #3, created: 2011-03-11 16:11:15.0

Timing stuff is notoriously bad on Windows. This article is a good place to start. Not sure if you care, but also note that there can be worse problems (especially with System.nanoTime) on virtual systems as well (when Windows is the guest operating system).

Answer #5278429, count #4, created: 2011-03-11 21:12:29.0

Your options are limited, and they depend on what exactly you want to do. Your code snippet mentions the max FPS, but the max FPS would require that you never sleep at all, so I'm not entirely sure what you intend with that. None of that sleep or yield checking is going to make any difference in most of the problem situations however - if some other app needs to run now and the OS doesn't want to switch back soon, it doesn't matter which one of those you call, you'll get control back when the OS decides to do so, which will almost certainly be more than 1ms in the future. However, the OS can certainly be coaxed into making switches more often - Win32 has the timeBeginPeriod call for precisely this purpose, which you may be able to use somehow. But there is a good reason for not switching too often - it's less efficient.

The best thing to do, although somewhat more complex, is usually to go for a game loop that doesn't require real-time updates, but instead performs logic updates at fixed intervals (eg. 20x a second) and renders whenever possible (perhaps with arbitrary short sleeps to free up CPU for other apps, if not running in full-screen). By buffering a past logic state as well as the current one you can interpolate between them to make the rendering appear as smooth as if you were doing logic updates each time. For more information on this approach, you can see the Fix Your Timestep article.

I would also like to know if there is anyone who knows about Microsoft and Oracle working out a better implementation of the Thread.sleep(n); method, or what's Oracle's future plans are on improving their environment as the basis of applications requiring high timing accuracy, such as music software and games?

No, this won't be happening. Remember, sleep is just a method saying how long you want your program to be asleep for. It is not a specification for when it will or should wake up, and never will be. By definition, any system with sleep and yield functionality is a multitasking system, where the requirements of other tasks have to be considered, and the operating system always gets the final call on the scheduling of this. The alternative wouldn't work reliably, because if a program could somehow demand to be reactivated at a precise time of its choosing it could starve other processes of CPU power. (eg. A program that spawned a background thread and had both threads performing 1ms of work and calling sleep(1) at the end could take turns to hog a CPU core.) Thus, for a user-space program, sleep (and functionality like it) will always be a lower bound, never an upper bound. To do better than that requires the OS itself to allow certain apps to pretty much own the scheduling, and this is not a desirable feature in operating systems for consumer hardware (while being a common and useful feature for industrial applications).

Answer #8965341, count #5, created: 2012-01-22 22:35:45.0

Thread.sleep is inaccurate and makes the animation jittery most of the time.

If you replace it completely with Thread.yield you'll get a solid FPS without lag or jitter, however the CPU usage increases greatly. I moved to Thread.yield a long time ago.

This problem has been discussed on Java Game Development forums for years.

Title: When should recurring software timers fire in relation to their previous timeout? Id: 5413034, Count: 65 Tags: Answers: 4 AcceptedAnswer: 5415705 Created: 2011-03-23 23:42:16.0 Body:

I think this is one of those "vi vs. emacs" type of questions, but I will ask anyway as I would like to hear people's opinions.

Oftentimes in an embedded system, the microcontroller has a hardware timer peripheral that provides a timing base for a software timer subsystem. This subsystem allows the developer to create an arbitrary (constrained by system resources) number of timers that can be used to generate and manage events in the system. The way the software timers are typically managed is that the hardware timer is set up to generate an interrupt at a fixed interval (or sometimes only when the next active timer will expire). In the interrupt handler, a callback function is called to do things specific to that timer. As always, these callback routines should be very short since they run in interrupt context.

Let's say I create a timer that fires every 1ms, and its callback routine takes 100us to execute, and this is the only thing of interest happening in the system. When should the timer subsystem schedule the next handling of this software timer? Should it be 1ms from when the interrupt occurred, or 1ms from when the callback is completed?

To make things more interesting, say the hardware developer comes along and says that in certain modes of operation, the CPU speed needs to be reduced to 20% of maximum to save power. Now the callback routine takes 500us instead of 100us, but the timer's interval is still 1ms. Assume that this increased latency in the callback has no negative effect on the system in this standby mode. Again, when should the timer subsystem schedule the next handling of this software time? T+1ms or T+500us+1ms?

Or perhaps in both cases it should split the difference and be scheduled at T+(execution_time/2)+1ms?

Popularity: 9.0 Answer #5414414, count #1, created: 2011-03-24 03:26:37.0

I would have the hardware timer fire every 1ms. I've never heard of a hardware timer taking such a quick routine into account. Especially since you would have to recalculate every time there was a software change. Or figure out what to do when the CPU changes clock speeds. Or figure out what to do if you decide to upgrade/downgrade the CPU you're using.

Answer #5415705, count #2, created: 2011-03-24 06:50:04.0

In a real-time OS both timers and delays are synchronised to the system tick, so if the event processing takes less than one timer tick, and starts on a timer tick boundary, there would be no scheduling difference between using a timer or a delay.

If on the other hand the processing took more than one tick, you would require a timer event to ensure deterministic jitter free timing.

In most cases determinism is important or essential, and makes system behaviour more predictable. If timing were incremental from the end of processing, variability in the processing (either static, through code changes, or at run time, through differing execution paths) might lead to variable behaviour and untested corner cases that are hard to debug or may cause system failure.

Answer #5427445, count #3, created: 2011-03-25 01:02:52.0

Adding another couple of reasons to what is at this point the consensus answer (the timer should fire every 1ms):

  • If the timer fires every 1ms, and what you really want is a 1ms gap between executions, you can reset the timer at the exit of your callback function to fire 1ms from that point.

  • However, if the timer fires 1ms after the callback function exits, and you want the other behavior, you are kind of stuck.

Further, it's far less complicated in the hardware to fire every 1ms. To do that, it just generates events and resets, and there's no feedback from the software back to the timer except at the point of setup. If the timer is leaving 1ms gaps, there needs to be some way for the software to signal to the timer that it's exiting the callback.

And you should certainly not "split the difference". That's doing the wrong thing for everyone, and it's even more obnoxious to work around if someone wants to make it do something else.

Answer #5461812, count #4, created: 2011-03-28 16:06:11.0

My inclination is to have the default behavior be to have a routine start at intervals that are as nearly uniform as practical, and to have a routine which is running late try to "catch up", within limits. Sometimes a good pattern can be something like:

/* Assume a 32,768Hz interrupt, and that we want foo() to execute 1024x/second */
typedef unsigned short ui;   /* Use whatever size int works out best */

ui current_ticks;            /* 32768Hz ticks */
ui next_scheduled_event;
ui next_event;

void interrupt_handler(void)
{
  ui delta;
  current_ticks++;
  ...
  if ((ui)(current_ticks - next_event) < 32768)      /* next_event has been reached */
  {
    delta = (ui)(current_ticks - next_scheduled_event);
    if (delta > EVENT_INTERVAL*EVENT_MAX_BACKLOG)    /* Too far behind -- don't even try to catch up */
    {
      delta = EVENT_INTERVAL*EVENT_MAX_BACKLOG;
      next_scheduled_event = current_ticks - delta;
    }
    next_scheduled_event += EVENT_INTERVAL;
    next_event = next_scheduled_event;
    foo();
    /* See how much time there is before the next event */
    delta = (ui)(next_event - current_ticks - EVENT_MIN_SPACING);
    if (delta > 32768)                               /* next_event is less than EVENT_MIN_SPACING away */
      next_event = current_ticks + EVENT_MIN_SPACING;
  }
}

This code (untested) will run foo() at a uniform rate if it can, but will always allow EVENT_MIN_SPACING between executions. If it is sometimes unable to run at the desired speed, it will run a few times with EVENT_MIN_SPACING between executions until it has "caught up". If it gets too far behind, its attempts to play "catch up" will be limited.

Title: DeactivateDevice vs. IOCTL_BUS_DEACTIVATE_CHILD Id: 5598995, Count: 66 Tags: Answers: 1 AcceptedAnswer: 5600791 Created: 2011-04-08 18:01:40.0 Body:

I am trying to understand the pros and cons of using DeactivateDevice vs. IOCTL_BUS_DEACTIVATE_CHILD, to unload a device driver, in terms of power consumption. If I would like to check the power consumption of the device when the driver was 1)loaded and 2)unloaded, which one would give me the most appropriate value in the latter case? Please suggest and let me know if I am missing any more info needed to answer my question. TIA.

EDIT: Also, it might be helpful in evaluating the answer for the above question if we know how the above two ways to unload a driver are fundamentally different.

Popularity: 2.0 Answer #5600791, count #1, created: 2011-04-08 21:08:09.0

It depends on what you want to do. IOCTL_BUS_ACTIVATE_CHILD is for bus drivers only (USB, PCI, etc...). Upper-level client drivers will use ActivateDeviceEx.

There is no equivalent wrapper function in the DDK for IOCTL_BUS_ACTIVATE_CHILD.

see: http://blogs.msdn.com/b/ce_base/archive/2007/04/19/how-bus-drivers-work.aspx

You can also use SetDevicePower to change the power state of a given physical device. (like the WiFi, BT, screen, etc...)

-PaulH

Title: Android is there a way to set default timeout for an Activity and detect when this has been reached? Id: 5644118, Count: 67 Tags: Answers: null AcceptedAnswer: null Created: 2011-04-13 03:35:58.0 Body:

Need to detect when my activity has not seen any clicks for more than say 20 seconds. Is there a default timeout on this that can be set and when it expires can the event be caught. I guess this is really asking if onPause has a setting that effects it? Or could it just get called whenever Android thinks it needs to be paused. I need to save power after 20 seconds so I really need a way to detect this. Thanks
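For what it's worth, there is no built-in per-Activity inactivity timeout; a common workaround is to keep your own timer and reset it in onUserInteraction(), which Android calls for every touch or key event delivered to the Activity. A minimal sketch (the 20-second constant and the idle action are placeholders):

import android.app.Activity;
import android.os.Handler;

public class IdleAwareActivity extends Activity {
    private static final long IDLE_TIMEOUT_MS = 20 * 1000;
    private final Handler handler = new Handler();
    private final Runnable onIdle = new Runnable() {
        @Override
        public void run() {
            // 20 s with no interaction: dim the UI, stop updates, save power, etc.
        }
    };

    @Override
    public void onUserInteraction() {
        super.onUserInteraction();
        handler.removeCallbacks(onIdle);          // any click/touch resets the timer
        handler.postDelayed(onIdle, IDLE_TIMEOUT_MS);
    }

    @Override
    protected void onResume() {
        super.onResume();
        handler.postDelayed(onIdle, IDLE_TIMEOUT_MS);
    }

    @Override
    protected void onPause() {
        super.onPause();                          // onPause can still be called whenever
        handler.removeCallbacks(onIdle);          // the system decides to pause you
    }
}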

Popularity: 7.0 Title: Manually controlling framerate of Cocos2d-iPhone game strategy Id: 5680384, Count: 68 Tags: Answers: 3 AcceptedAnswer: 5684317 Created: 2011-04-15 17:38:51.0 Body:

Most game developers should have encountered the "low framerate" issue at least once when developing games. But I, on the other hand, am making a sudoku-like puzzle game where low framerate is not a problem since it does not have constantly moving sprites/elements, and in fact I plan to decrease the framerate so that the game will take less CPU time and hence reduce the power consumption of the iDevices; all this just as a courtesy to the players :)

I already know that I can control the framerate in Cocos2d-iphone by modifying animationInterval:

[[CCDirector sharedDirector] setAnimationInterval:1.0/60]; 

But I'm having troubles on the strategy on when to lower the framerate and when to revert to the normal framerate (60Hz). At this point I only managed to define the strategy for touch event, that is:

Start with lower framerate. On ccTouchBegan, revert to normal framerate. On ccTouchEnded, switch to lower framerate. Make sure multitouch is handled accordingly.

I'm left with two more conditions:

  1. Handling CCActions on CCNodes: as long as some CCNodes have running actions, revert to normal framerate.

  2. Particle system: as long as there exist some particle systems that are emitting particles, revert to normal framerate.

Basically I need to be able to detect if there are actions that are still running on any sprites/layers/scene, also if some particle systems are still emitting particles. I prefer to not having the checking done on individual objects, and I'd rather have a simple [SomeClass isActionRunning]. I imagine that I might be able to do this by checking the list of "scheduled" object but I'm not sure how. I'd appreciate if you could also suggest some other conditions where I need to revert to normal framerate.

Popularity: 12.0 Answer #5681550, count #1, created: 2011-04-15 19:41:24.0

Though I know it's not a very clean way, you can hack the CCScheduler class and check whether there are any objects in scheduledMethods. I guess you also have to check whether the objects there are yours, since cocos2d itself schedules some classes.

Answer #5684317, count #2, created: 2011-04-16 03:42:28.0

Hmm, I would recommend that you set it to 30 fps. Yes, that's the rate at which the screen refreshes, but the most important thing is how you code the game: it must be efficient. Running extra checks to find out whether something is still running may itself eat up slightly more processing power.

Answer #15696410, count #3, created: 2013-03-29 02:23:51.0

This might be what you are looking for, at least regarding actions.

When you run an action you can call a block at the end of your action, in which you reset the frame rate.

[[CCDirector sharedDirector] setAnimationInterval:1.0/60]; // set your fast frame rate
[self runAction:[CCSequence actions:
    [CCMoveBy actionWithDuration:0.5f position:ccp(100,100)], // Do whatever action it is you want to do
    [CCCallBlock actionWithBlock:^{
        [[CCDirector sharedDirector] setAnimationInterval:1.0/30]; // Revert to slow frame rate... could also check within the block for other actions
    }],
    nil]];
Title: Windows CE device powering off Randomly Id: 5701909, Count: 69 Tags: Answers: 2 AcceptedAnswer: 5703711 Created: 2011-04-18 11:03:17.0 Body:

I have a touch screen device that is running Windows CE. After 30 seconds the screen goes off to save power and will come back on if you touch it.

The problem is that, randomly, when the screen goes off the device will not come back on simply by touching the screen. I have done a bunch of tests and there is no noticeable pattern to when this happens.

It appears to be performing the same action as when you press the suspend button from the main menu.

I have done some research and found there are 4 power saving settings in the registry and I think I need to disable one to stop the device from "suspending". I never want the device to turn off except for the screen going off, it is always connected to power.

Does anyone know how I can do this or why it is randomly suspending ?

And the entire device is in Chinese So really precise instructions would be appreciated. My application runs on top of the CE.

Popularity: 7.0 Answer #5703711, count #1, created: 2011-04-18 13:36:19.0

I know you're after precise instructions, but it's not that simple. The device OEM defined and implemented the power management system for the device; Microsoft only provided the structure for it. The OEM could have implemented power management in any way they saw fit, and in fact they could have completely ignored the Microsoft-provided framework (it wouldn't be the first time an OEM did that). Really you need to get hold of the OEM and ask them how to prevent the behavior you're seeing or to get something different.

Barring that, you could always play around with the registry entries, but again, there's no guarantee any of them will work. You might look at adjusting power state or the activity timer registry entries.

Playing with the power manager control panel applet might also help (it's probably labelled 电源管理, i.e. Power Management).

Answer #6146897, count #2, created: 2011-05-27 01:27:05.0

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\Timeouts] "BattSuspend"=dword:0

Title: Is it ok to update a widget frequently for a short period of time? Id: 5765212, Count: 70 Tags: Answers: 2 AcceptedAnswer: 5765281 Created: 2011-04-23 15:53:34.0 Body:

in numerous places, it is mentioned that app widgets should not get updated often, to minimize power consumption.

But, let's consider that an application is doing something important (such as audio recording) for a short period of time, say 30min.

In this case, is it acceptable to update the widget every second from a service?

How can it be that this would consume so much power?

Please consider that this is different from a widget which would update very often during the whole day.

And in my case, such frequent updates would be meant to allow the user to monitor that the operation is being performed continuously and correctly. It's not for fancy visual effects and such.

Popularity: 9.0 Answer #5765281, count #1, created: 2011-04-23 16:04:57.0

I don't see a problem with doing this; if you're keeping the phone awake with a long-running background task (audio recording in this case), then the phone can't sleep anyway. I wouldn't expect updating the widget to have a significant impact on battery use in this case.

Of course, the best thing to do is to run some tests on a real device, and compare battery use with and without widget updates, and make widget update interval a user preference.

Answer #5765304, count #2, created: 2011-04-23 16:08:55.0

The main reason widgets shouldn't update constantly is because of the battery consumption used to get the latest data from a server. Since the device will be on anyway, and the update is local to your data, it shouldn't have an impact that is noticeable.

If you were hitting a server instead of local data every second for that long, you would notice a significant draw on the battery.
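To illustrate the "local update from an already-running service" case both answers describe, a rough sketch could look like the following; RecorderWidgetProvider, R.layout.widget and R.id.status_text are hypothetical names for this example.

import android.app.Service;
import android.appwidget.AppWidgetManager;
import android.content.ComponentName;
import android.content.Intent;
import android.os.Handler;
import android.os.IBinder;
import android.widget.RemoteViews;

public class RecordingService extends Service {
    private final Handler handler = new Handler();
    private int seconds;
    private final Runnable tick = new Runnable() {
        @Override
        public void run() {
            seconds++;
            RemoteViews views = new RemoteViews(getPackageName(), R.layout.widget);
            views.setTextViewText(R.id.status_text, "Recording: " + seconds + " s");
            AppWidgetManager mgr = AppWidgetManager.getInstance(RecordingService.this);
            mgr.updateAppWidget(new ComponentName(RecordingService.this, RecorderWidgetProvider.class), views);
            handler.postDelayed(this, 1000);   // only while the recording service is alive
        }
    };

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        handler.post(tick);
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        handler.removeCallbacks(tick);         // stop updating as soon as recording ends
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}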

Title: Detecting USB power state Id: 5884820, Count: 71 Tags: Answers: 2 AcceptedAnswer: null Created: 2011-05-04 14:17:21.0 Body:

Windows has the option of powering down certain peripherals, such as USB ports, to save power (this behavior can be enabled/disabled via Device Manager). The power down happens under various conditions such as when the lid of a laptop is closed. This is causing a problem for me as I have a GUI which talks to hardware attached to the USB port and communications are severed every time the laptop lid is closed. Is there a way to programmatically detect this power-down (standby?) event before it happens and more gracefully shut down my USB device? Is there a way to programmatically configure each of the system’s USB ports to disable this behavior?

Right now I'm looking at SystemEvents.PowerModeChanged, is this the right event to detect this?

Popularity: 17.0 Answer #5885138, count #1, created: 2011-05-04 14:36:11.0

It sounds like you want

  1. WM_POWERBROADCAST (http://msdn.microsoft.com/en-us/library/aa373247(v=vs.85).aspx)
  2. RegisterPowerSettingNotification (http://msdn.microsoft.com/en-us/library/aa373196.aspx)

You first need to call RegisterPowerSettingNotification then WM_POWERBROADCAST messages will be received by your application.

This page has a c# implementation of a power management class using these window messages. http://www.koders.com/csharp/fid00BAA34B0CAA3E320F9F5A44610A015973BF28ED.aspx?s=nativemethods#L175

Answer #7345733, count #2, created: 2011-09-08 09:14:22.0

As mentioned by the previous posters RegisterPowerSettingNotification is what you want. To clarify, you can reference Winforms (System.Windows.Forms.dll) from other types of .NET applications (console, etc). You can get access to a Window handle (in order to receive messages) by subclassing a Winform (the Forms class) and overriding its WndProc.

MSDN has a very good article of doing just that, along with example code.

Title: Control access to files based on DB values with PHP/Apache Id: 6030750, Count: 72 Tags: Answers: 4 AcceptedAnswer: 6030935 Created: 2011-05-17 12:26:31.0 Body:

What I want

I'm making a system where, when a user uploads an image, it goes to the folder images, where there are two copies of the image, a thumb and a bigger one.

Now, when a user uploads an image it's done alongside insertion of a row in a MySQL database, where the image gets an id, and the row holds a value that determines if the images is available to the public or not (1 for only thumb available, 2 for both) along with the owner(admin) of the image.

What I want to do is: if a user tries to reach the image by going to the image's URL, and the value in the database says that it should not be publicly available, and the session of that user is not the owner of the image (admin not logged in), it should either say image not found or access denied.

What I imagine

If it could be done with PHP, I imagine something like this:

//User tries to go to URL "images/IMAGE-ID_thumb.jpg"
if($img['access'] >= 1 || $_SESSION['admin_ID'] == $img['owner_ID']) {
    //Show image thumb
} else {
    //Access denied
}

I could maybe hide the images in a .htaccess-protected folder, make an image_get.php that accepts URL variables, sees if the image is available, and loads the image if so. I just want to avoid having to de/re-compile the image and other server power consuming processes.

My question is, can I control the image like this? Is there a way to make a PHP script appear as an image it loads internally? Or is there maybe some other/better way than through PHP? (Apache maybe?) Any suggestions are much appreciated.

Popularity: 18.0 Answer #6030935, count #1, created: 2011-05-17 12:43:52.0
//get_image.php
//Do session/mysql checks as outlined in your code above.
header("Content-Disposition: filename=$imagename.$filetype");
header("Content-type: $mimetype");
readfile(PROTECTED_IMAGE_FOLDER . "$imagename.$filetype");
Answer #6030960, count #2, created: 2011-05-17 12:46:09.0

You can redirect all requests for certain filetypes (or filenames, anything you can regex) with an '.htaccess' file in your DocumentRoot:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -s [OR]
RewriteCond %{REQUEST_FILENAME} -l [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]
RewriteRule \.(gif|jpg|jpeg|png)$ imageHandler.php [NC,L]

You can determine the original request from the $_SERVER superglobal, and use the appropriate headers to alert the browser you're returning an image. Then you read the image from disk (with file_get_contents() or similar) and write it to the browser with echo, in the same way you'd output HTML.

Answer #6031001, count #3, created: 2011-05-17 12:49:48.0

"mod_rewrite" all attempts to access image via a fake path name to PHP file with the image file name as a query param and then do your magic there. Storing the images outside of the public folder should prevent any unwanted public access. Alternatively store the actual image in the DB itself.

Answer #6031172, count #4, created: 2011-05-17 13:05:29.0
mod_rewrite:

RewriteRule ^(.*)_(full|thumb).(jpg|png|gif)$ image.php?i=$1&type=$2&ext=$3

<img src="http://example.com/image-1_thumb.png" /> = <img src="image.php?i=image-1&type=thumb&ext=jpg" />

<?php
## image.php
## Do DB checks on session vs. owner etc.
/*
$_GET
    [i]    => image-1
    [type] => thumb
    [ext]  => jpg
*/
echo (file_exists('hiddenimagesfolder/images/'.$_GET['i'].'_'.$_GET['type'].'.'.$_GET['ext']))
    ? header("Content-Type:image/png").readfile('hiddenimagesfolder/images/'.$_GET['i'].'_'.$_GET['type'].'.'.$_GET['ext'])
    : false;
?>
Title: Android Power Profiler Id: 6036632, Count: 73 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-05-17 20:22:21.0 Body:

I need to perform power measurements for android applications. I tried "powertutor" and it gives the power consumption per every application. Yet, I don't know how accurate its readings are. Does anyone know how accurate it is?

Also, I have used DDMS to profile the Android application; I can obtain the processes as well as memory info about them. Is there a way that I can know the power consumption per process in Android (some rough estimation), or is it impossible?

I really need to perform "power" profiling for android applications but I don't know how.

Popularity: 21.0 Answer #6362166, count #1, created: 2011-06-15 18:02:43.0

In my academic research for measuring power consumption on Android, we use a power supply hooked up to the phone's battery terminals that outputs the voltage and current to a PC. Measure without the app to get a baseline and then compare against measurements with the app running. It's not extremely accurate, but it's the best way we know how.

Title: per process power consumption in Android Id: 6051807, Count: 74 Tags: Answers: 2 AcceptedAnswer: 15014368 Created: 2011-05-18 22:34:32.0 Body:

Is there a way to see the power consumption of an Android process? I have a rooted HTC Hero, and I have developed some native programs in C. I want to see the power consumption of these programs. So, I want a way to measure the power consumption at the process level and not at the application level, as e.g. the PowerTutor application does.

Is there an API that can help me develop an application that can do this thing? Can I use /proc/ stats etc.?

any ideas on this?

Popularity: 32.0 Answer #6051842, count #1, created: 2011-05-18 22:39:25.0

As said here: Android Battery usage profiling

There is a private API, PowerProfile, for retrieving battery consumption on a subsystem level (see http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/2.2_r1.1/com/android/internal/os/PowerProfile.java). Also take a look at the code for the fuel gauge you find in Android's settings on how they calculate power consumption: http://google.com/codesearch/p?hl=en#ohAXAHj6Njg/src/com/android/settings/fuelgauge/PowerUsageSummary.java
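For reference, reading that private API is usually done via reflection, roughly as below. Since PowerProfile is internal and undocumented, the class name, constructor signature and the "cpu.active" key are assumptions that may change between Android versions.

import android.content.Context;
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

public final class PowerProfilePeek {
    // Returns the platform's estimate of active-CPU current draw in mA.
    public static double cpuActiveMilliAmps(Context context) throws Exception {
        Class<?> cls = Class.forName("com.android.internal.os.PowerProfile");
        Constructor<?> ctor = cls.getConstructor(Context.class);
        Object profile = ctor.newInstance(context);
        Method getAveragePower = cls.getMethod("getAveragePower", String.class);
        return (Double) getAveragePower.invoke(profile, "cpu.active");
    }
}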

Answer #15014368, count #2, created: 2013-02-21 23:43:05.0

AppScope is an application energy metering framework for Android smartphones using kernel activity monitoring. AppScope provides an accurate and detailed energy estimation solution at both the per-application and the per-process level.

The current release supports HTC Nexus One only and can be found here.

For more information and technical details about the AppScope framework you can read the AppScope paper presented at USENIX ATC '12, here.

Title: What is the relation between CPU utilization and energy consumption? Id: 6128960, Count: 75 Tags: Answers: 2 AcceptedAnswer: 6209953 Created: 2011-05-25 18:19:11.0 Body:

What is the function that describes the relation between CPU utilization and consumption of energy (electricity/heat wise).

I wonder if it's linear/sub-linear/exp etc..

I am writing a program that decreases the CPU utilization/load of other programs and my main concern is how much do I benefit energy wise..

Moreover, my server is mostly being used as a web-server or DB in a data-center (headless).

In case the data center need more power for cooling I need to consider that as well.. I also need to know what is the effect of CPU utilization on the entire machine power consumption ..

Popularity: 34.0 Answer #6129136, count #1, created: 2011-05-25 18:32:29.0

For the CPU alone linear would be the most likely.
It gets complicated with CPUs that can reduce the clock speed under low load (like laptops) but for a server it's probably a good approximation.
Remember though that the CPU isn't the only component - you have to multiply by the percentage of power the CPU is using compared to the entire system.

Answer #6209953, count #2, created: 2011-06-02 03:04:07.0

Here you can find a short ppt answering your questions, and providing additional info.

Although there is no Copyright notice in the ppt, the work is probably protected, so I will copy here only three graphs relevant to your main question and follow-ups in comments.


(The three graphs from the linked presentation are not reproduced here.)


HTH!

Title: Is it possible to force the app from going into power save mode? Id: 6213686, Count: 76 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-06-02 11:02:48.0 Body:

How can I keep the device from going into power save mode while my app is running on Android?

Popularity: 5.0 Answer #6214500, count #1, created: 2011-06-02 12:28:53.0

You want a wake lock. See:

http://developer.android.com/reference/android/os/PowerManager.WakeLock.html
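A minimal usage sketch (assuming the manifest declares the WAKE_LOCK permission; doWork is a placeholder for whatever must keep running):

import android.content.Context;
import android.os.PowerManager;

public class KeepAwake {
    public static void runWithoutSleeping(Context context, Runnable doWork) {
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        PowerManager.WakeLock lock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "MyApp");
        lock.acquire();                 // CPU stays on even if the screen turns off
        try {
            doWork.run();
        } finally {
            lock.release();             // always release, or the battery will suffer
        }
    }
}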

Title: Possibilities to reduce power consumption with cocos2d apps Id: 6253187, Count: 77 Tags: Answers: 3 AcceptedAnswer: 6350748 Created: 2011-06-06 14:02:21.0 Body:

I made a board game which includes just some little animations. I reduced the fps from 60 to 30 to reduce the processor load, but the device still gets very warm. Another application made without cocos2d does not heat it up so much. Are there any methods to calm the iPhone down? The device state is as follows:

  • Wifi is always enabled
  • The app uses gamecenter
  • GPS is inactive
  • fps is always on 30
  • I use cocos2d-iphone as engine
Popularity: 13.0 Answer #6254246, count #1, created: 2011-06-06 15:21:23.0

I've noticed that a sequence of short animations in cocos2d takes a lot of processor time. I tried making tips at the level which pulse in size: 0.1 seconds pulsing up, 0.15 down and 0.2 staying, and I put it all in a repeat-forever sequence. Everything was terribly slow. Then I just made the animation manually, and the device calmed down and the fps increased back to 60.

Answer #6350748, count #2, created: 2011-06-14 22:11:21.0

It might be worth experimenting with different director types, e.g. kCCDirectorTypeNSTimer, and seeing if that helps at all. Those will have the biggest effect on the main loop of the game.

You should also spend some time with Instruments if you've not already, as that will show you where the CPU is spending its time and give you some hints on where you could ease things up.

Answer #6350829, count #3, created: 2011-06-14 22:19:17.0

When showing menus or dialogs that do not require animation, you can actually lower your framerate even further.

Title: Multiple bluetooth socket connect/close operations reboot android phones Id: 6380887, Count: 78 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-06-17 02:55:21.0 Body:

We are using android phones to communicate with sensors via bluetooth. The phone needs to connect the sensor periodically to collect physiological data and between two connections the sensors can be switched off automatically to save power.

Now the problem is: after around 500 times, the system reboots. We then wrote a small piece of test program to simulate the whole process. The small test program, too, crashes the android phone.

Can anybody please help me on this ? Thanks! Here is the small test program.

package zhb.test.MhubTestBtConnect; import java.io.IOException; import java.util.Date; import java.util.UUID; import android.app.Activity; import android.bluetooth.BluetoothAdapter; import android.bluetooth.BluetoothDevice; import android.bluetooth.BluetoothSocket; import android.os.Bundle; import android.os.Handler; import android.os.Looper; import android.os.Message; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.EditText; import android.widget.TextView; public class MhubTestBtConnect extends Activity implements OnClickListener{ /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Button btnStart = (Button) findViewById(R.id.btn_start_test); btnStart.setOnClickListener(this); Button btnStop = (Button) findViewById(R.id.btn_stop_test); btnStop.setOnClickListener(this); fBeginView = (TextView) findViewById(R.id.begin_time); fBeginView.setText("off line"); fEndView = (TextView) findViewById(R.id.end_time); fEndView.setText("off line"); fShowView = (TextView) findViewById(R.id.show_status); fShowView.setText("off line"); fShowRunTimesView = (TextView) findViewById(R.id.show_times_status); fShowRunTimesView.setText("off line"); fInputMac = (EditText) findViewById(R.id.edit_add_mac); fInputMac.setText("00:19:5D:24:CB:A9"); fDisconnectGap = (EditText) findViewById(R.id.edit_disconnect_gap); fDisconnectGap.setText("500"); fconnectGap = (EditText) findViewById(R.id.edit_connect_gap); fconnectGap.setText("1000"); fRunning = false; } @Override public void onClick(View v) { switch ( v.getId() ) { case R.id.btn_start_test: start(); break; case R.id.btn_stop_test: stop(); break; default: break; } } private synchronized void start() { if ( fRunning == false ) { fRunning = true; fMac = fInputMac.getText().toString().toUpperCase(); fConnectRunnable = new ConnectRunnable(); fBeginView.setText(new Date().toLocaleString()); fConnectTimes = 0; fRunOkTimes = 0; new Thread(fConnectRunnable).start(); } } private synchronized void stop() { if ( fRunning == true ) { fRunning = false; fConnectRunnable.cancel(); try { Thread.sleep(500); } catch (InterruptedException exception) { exception.printStackTrace(); } fEndView.setText(new Date().toLocaleString()); } } private void connect() { Log.d(TAG,"---------- run "+fConnectTimes++ +" times"); String UUID_STRING = "00001101-0000-1000-8000-00805F9B34FB"; // may throws exception UUID uuid = UUID.fromString(UUID_STRING); // get adapter BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter(); // get remote device BluetoothDevice btDevice = adapter.getRemoteDevice(fMac); if ( btDevice == null ) { Log.e(TAG,"can't get remote device from given MAC : " + fMac); return; } int runResult = 0; // may throws exception fBtSocket = null; try { fBtSocket = btDevice.createRfcommSocketToServiceRecord(uuid); if ( adapter.isDiscovering() == true ) { adapter.cancelDiscovery(); Log.d(TAG,"cancel discover"); } fBtSocket.connect(); runResult = 1; Log.d(TAG,"connect socket OK"); Log.d(TAG,"---------- run OK "+fRunOkTimes++ +" times"); } catch (IOException exception) { //adapter.cancelDiscovery(); Log.d(TAG,"connect socket error",exception); if ( fBtSocket != null ) { try { fBtSocket.close(); } catch (IOException exception1) { Log.d(TAG,"close socket error",exception1); } } else { Log.d(TAG,"create socket NULL"); } btDevice = null; //update ui Message msgRun = new 
Message(); msgRun.what = R.id.show_status; msgRun.arg1 = SV_START_RUN; msgRun.arg2 = runResult; fMessHandler.sendMessage(msgRun); } //Note: You should always ensure that the device is not performing device discovery when you call connect(). //If discovery is in progress, then the connection attempt will be significantly slowed and is more likely to fail. //adapter.cancelDiscovery(); } private void close() { //update ui Message msgShut = new Message(); msgShut.what = R.id.show_status; msgShut.arg1 = SV_SHUT_DOWN; fMessHandler.sendMessage(msgShut); if ( fBtSocket != null ) { Log.d(TAG,"close socket"); try { fBtSocket.close(); } catch (IOException exception1) { Log.d(TAG,"close socket error",exception1); } fBtSocket = null; } } private Handler fMessHandler = new Handler() { public void handleMessage(android.os.Message msg) { switch ( msg.what ) { case R.id.show_status: if ( msg.arg1 == SV_START_RUN ) { if ( msg.arg2 == 1) fShowView.setText("run OK"); else fShowView.setText("run Failed"); fShowRunTimesView.setText("run "+fConnectTimes+", OK "+fRunOkTimes); } else if ( msg.arg1 == SV_SHUT_DOWN ) fShowView.setText("shut down ..."); break; default: break; } }; }; private long fConnectTimes; private long fRunOkTimes; private static final String TAG = "..MhubTestBtConnect"; private BluetoothSocket fBtSocket; private ConnectRunnable fConnectRunnable; private String fMac; private boolean fRunning; private TextView fBeginView; private TextView fEndView; private TextView fShowView; private TextView fShowRunTimesView; private EditText fInputMac; private EditText fDisconnectGap; private EditText fconnectGap; private static final int SV_START_RUN = 1; private static final int SV_SHUT_DOWN = 2; private class ConnectRunnable implements Runnable { public ConnectRunnable() { fCancelled = false; } @Override public void run() { if (Looper.myLooper() == null) { Looper.prepare(); } long afterConnectSleep = 500; long afterCloseSleep = 1000; try { afterCloseSleep = Integer.parseInt(fconnectGap.getText().toString()); afterConnectSleep = Integer.parseInt(fDisconnectGap.getText().toString()); } catch(Exception exception) { afterConnectSleep = 500; afterCloseSleep = 1000; } while ( fCancelled == false ) { connect(); try { Thread.sleep(afterConnectSleep); } catch (InterruptedException exception) { exception.printStackTrace(); } close(); try { Thread.sleep(afterCloseSleep); } catch (InterruptedException exception) { exception.printStackTrace(); } } } public void cancel() { fCancelled = true; } private boolean fCancelled; } } 
Popularity: 19.0 Answer #6381221, count #1, created: 2011-06-17 04:06:46.0

This is a bug I reported a while back regarding failed bluetooth connects (512 to be exact) and a memory leak leading to a "ReferenceTable overflow". I'll dig up the link when I'm back at my PC =)

Link: http://code.google.com/p/android/issues/detail?id=8676

Solution: avoid failed bluetooth connects by performing a Bluetooth discovery first to see if the device is in range. If so, cancel discovery and connect to it.
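A rough sketch of that workaround, where connectTo() stands in for the existing RFCOMM connect code from the question (discovery needs the BLUETOOTH_ADMIN permission):

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

public class DiscoverThenConnect extends BroadcastReceiver {
    private final String targetMac;

    public DiscoverThenConnect(Context context, String targetMac) {
        this.targetMac = targetMac;
        context.registerReceiver(this, new IntentFilter(BluetoothDevice.ACTION_FOUND));
        BluetoothAdapter.getDefaultAdapter().startDiscovery();
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        BluetoothDevice found = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
        if (found != null && targetMac.equals(found.getAddress())) {
            // Connecting during discovery is slow and fragile, so stop it first.
            BluetoothAdapter.getDefaultAdapter().cancelDiscovery();
            context.unregisterReceiver(this);
            connectTo(found);   // placeholder for the RFCOMM connect code above
        }
    }

    private void connectTo(BluetoothDevice device) {
        // createRfcommSocketToServiceRecord(...) + connect() as in the question
    }
}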

Title: Android wifi script problems Id: 6487692, Count: 79 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-06-27 00:36:35.0 Body:

I have a problem that has been nagging me to an extreme extend in the past few days. I would like to write an Android sh script that does the following (to help me sync music, pics etc.):

1) Turn on wifi (wifi is off by default to save power)

2) Check if my wifi connection is in range (lets call it myWifi)

3) If myWifi is not in range, disable wifi, if it is in range, connect and start some synch software

Now, to enable / disable wifi, I use the following command, which requires root:

svc wifi enable / disable

To scan for avaible wifi connections, I use the following command:

iwlist eth0 scan

The strange thing is that iwlist eth0 scan will only work if I am NOT logged in as root (I am very curious as to why this is the case, if anyone knows anything). Running it while root will return:

eth0: Interface doesn't support scanning : Invalid argument

but running it while not logged in as root, will give me the info I need. I have tried different approaches to get around this problem. The most obvious one is logging in as the standard user in the Android system right before invoking the iwlist command:

su -c app_1

However, any command that involves su will return permission denied even when invoking it as root, and since sudo does not exist in Android, I feel pretty lost here. I did also try a workaround involving splitting the script into two parts, and trying to run the first as root and the second as non-root (the default user in Android is app_1), but this will only delay the problem...

If anyone has an answer to how to either get around this user problem, or how to use iwlist eth0 scan (or another command that does the same) while logged in as root, I would be very grateful.

Thank you.

Popularity: 20.0 Answer #7153157, count #1, created: 2011-08-22 20:23:24.0

According to man iwlist normal users can only see some left-over scanning results. To initiate a new scan as root you first need to start up your interface (after starting wifi):

ifconfig wlan0 up 
Title: How to measure memory bandwidth currently being used on Linux? Id: 6503385, Count: 80 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-06-28 08:01:37.0 Body:

I'm writing a small Linux application which logs the computer's power consumption along with CPU utilisation and disk utilisation. I'd like to add the ability to log memory bandwidth currently being used so I can see how well that correlates with a power consumption.

I understand that I can get information about the amount of memory currently allocated from /proc/meminfo but, of course, that doesn't tell me how much bandwidth is being used at present. Does anyone know how I could measure memory bandwidth currently in use?

edit I'd like this to work primarily on the x86 and x86-64 platforms

Popularity: 30.0 Answer #6503455, count #1, created: 2011-06-28 08:08:54.0

It's highly CPU-dependent, but you'll need to be able to get access to the CPU's performance registers. You may be able to do this via oprofile. Note that not all CPUs have a performance register (or combination of registers) which can be used to calculate the memory bandwidth usage, however.

Title: Simulating Poisson Waiting Times - Java Id: 6527345, Count: 81 Tags: Answers: 2 AcceptedAnswer: 6527433 Created: 2011-06-29 21:15:20.0 Body:

I need to simulate Poisson wait times. I've found many examples of simulating the number of arrivals, but I need to simulate the wait time for one arrival, given an average wait time.

I keep finding code like this:

public int getPoisson(double lambda) {
    double L = Math.exp(-lambda);
    double p = 1.0;
    int k = 0;
    do {
        k++;
        p *= rand.nextDouble();
    } while (p > L);
    return k - 1;
}

but that is for number of arrivals, not arrival times.

Efficiency is preferred to accuracy, more because of power consumption than time. The language I am working in is Java, and it would be best if the algorithm only used methods available in the Random class, but this is not required.

Thank you for reading this, I've searched online but can only find simulations for number of arrivals. I wish I paid more attention in stats.

Popularity: 22.0 Answer #6527433, count #1, created: 2011-06-29 21:23:37.0

Time between arrivals follows an exponential distribution, and you can generate a random variable X ~ Exp(lambda) with the formula:

t = -ln(U)/lambda (where U ~ Uniform[0,1])

More info on generating exponential variable.

Note that time between arrival also matches time until first arrival, because exponential distribution is memoryless.
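A minimal Java version of that formula, using only java.util.Random as the question prefers (lambda here is the arrival rate, i.e. 1 / mean wait time):

import java.util.Random;

public class ExponentialWait {
    private static final Random rand = new Random();

    public static double nextWaitTime(double lambda) {
        // 1 - nextDouble() lies in (0, 1], so Math.log() never sees 0.
        return -Math.log(1.0 - rand.nextDouble()) / lambda;
    }

    public static void main(String[] args) {
        double meanWait = 5.0;          // e.g. an average of 5 seconds between arrivals
        double lambda = 1.0 / meanWait;
        System.out.println(nextWaitTime(lambda));
    }
}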

Answer #15307412, count #2, created: 2013-03-09 05:09:48.0

If you want to simulate earthquakes, or lightning or critters appearing on a screen, the usual method is to assume a Poisson Distribution with an average arrival rate λ.

The easier thing to do is to simulate inter-arrivals:

With a Poisson process, an arrival becomes more likely as time passes; this corresponds to the cumulative distribution of the inter-arrival time. The expected value of a Poisson-distributed random variable is equal to λ and so is its variance. The simplest way is to 'sample' that distribution, whose tail has the exponential form e^(-λt), which gives t = -ln(U)/λ. You choose a uniform random number U and plug it into the formula to get the time that should pass before the next event. Unfortunately, because U usually belongs to [0,1[ that could cause issues with the log, so it's easier to avoid them by using t = -ln(1-U)/λ.

Sample code can be found at the link below.

http://stackoverflow.com/a/5615564/1650437

Title: Sending a String array through a tigase server from one Android to another using XMPP protocol Id: 6553595, Count: 82 Tags: Answers: 2 AcceptedAnswer: 6572493 Created: 2011-07-01 21:57:15.0 Body:

I'm currently trying to use the Smack libraries and the Tigase server to send a String array from one Android to another using the XMPP protocol (I'm developing in Java with the Eclipse IDE).

Is the String array an Item, or something else? I might also be able to send it as a file, but I think that might be more energy consuming (for the device's battery). Is there a preferable way to accomplish this task?

I'm asking this firstly because there is no organized source from which I can try to find my answers independently, and secondly because it's a pretty basic task which might take me several hours to figure out, as opposed to someone who might have done something like this before.

I'd be happy to receive information sources, if you don't know the answer to this particular question but you know where you would find it...

Popularity: 4.0 Answer #6553757, count #1, created: 2011-07-01 22:20:58.0

I cannot help you with Smack library, however why don't you use Tigase's JaXMPP2 instead? https://projects.tigase.org/projects/jaxmpp2 This is Java library which has been created specifically to be compatible with Android, GWT and standalone Java applications. So kind of portable Java library. This way all the software you use comes from one vendor and I am sure in such a case nice guys from Tigase would be happy to help you out.

Answer #6572493, count #2, created: 2011-07-04 13:59:48.0

This is pretty simple to do.

The simplest approach would be to simply create a chat between the two users and send the data as the message body. Since your content is just a string array it can be easily sent as a comma delimited list of strings that you can easily marshall/unmarshall at each end.

The fact that you are using tigase is irrelevant in this case as it is basic XMPP and will work with any server.
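A plain-Java sketch of the marshalling half of that approach (the Smack chat call itself is omitted; this naive form assumes the strings themselves contain no commas):

public final class StringArrayCodec {
    public static String marshall(String[] values) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(values[i]);
        }
        return sb.toString();        // send this as the chat message body
    }

    public static String[] unmarshall(String body) {
        return body.split(",", -1);  // -1 keeps trailing empty strings
    }
}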

Title: Save battery power consumed by gps services in android Id: 6567346, Count: 83 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-07-04 04:40:35.0 Body:

I am developing an application which has two GPS services. One is location tracking, which sends location updates to the server every 2 minutes; the other is cyberseatbelt, which checks the speed of the device whenever the location updates.

With these two services running, my mobile displays the battery consumption as 77%. Without these two services, no battery consumption is displayed.

Is there any solution to save battery power on device while keeping the desired functionality?

Popularity: 18.0 Answer #6567410, count #1, created: 2011-07-04 04:54:37.0

You have to tell us what you have done that makes the battery consumption so high. (It is not even clear whether the consumption reaches 77% after 1 minute, 15 minutes or 1 hour.)

How are you accessing GPS? Are you running a handler/thread to periodically poll the GPS? If yes, this is the wrong approach. You can ask Android to inform you of location changes.

GPS services are usually memory hungry. Do you need GPS services, or are you just looking for location updates? Android comes with a good startup doco for location-based services: http://developer.android.com/guide/topics/location/obtaining-user-location.html Try following the steps in this doco to find the best user location.

Make sure to stop listening for updates at the appropriate time. Users will not be happy that one app tries to drain the battery even when it is not running.

Try making the app a background task, i.e. a Service or BroadcastReceiver.
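As a rough sketch of asking Android to inform you of location changes instead of polling, something like the following; the 2-minute and 50-metre thresholds are just example values to tune against the battery budget:

import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class TrackingHelper implements LocationListener {
    private static final long TWO_MINUTES_MS = 2 * 60 * 1000;

    public void start(Context context) {
        LocationManager lm = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        // Let the framework throttle GPS use: at most one update per 2 minutes / 50 m.
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, TWO_MINUTES_MS, 50, this);
    }

    public void stop(Context context) {
        LocationManager lm = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        lm.removeUpdates(this);   // stop listening as soon as tracking is no longer needed
    }

    @Override public void onLocationChanged(Location location) {
        // send the location (and speed) to the server here
    }
    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
    @Override public void onProviderEnabled(String provider) { }
    @Override public void onProviderDisabled(String provider) { }
}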

Title: Android Live Wallpaper practices for performance and battery saving? Id: 6573925, Count: 84 Tags: Answers: 1 AcceptedAnswer: 6574540 Created: 2011-07-04 16:24:12.0 Body:

It's easy to find many articles discussing the implementation of Live Wallpapers for beginners, which addresses major questions involving Surfaces and so on.

But what about the professional development of Live Wallpapers? What are best practices for structuring the code the right way, to ensure good performance, low power consumption (to save battery power) and best fit different devices?

If possible, some code samples covering these issues would be great.

Popularity: 35.0 Answer #6574540, count #1, created: 2011-07-04 17:39:31.0

Power consumption...
1) The most important thing, by far, is that your wallpaper should switch itself off when it is not visible. The cube example handles this correctly, removing runnable callbacks in onDestroy(), onSurfaceDestroyed(), and onVisibilityChanged() (when visible == false).
2) Beyond that, the largest determinant of power drain will be your frame rate. A 24 fps animation will drain much more juice than a clock that just updates at 1 fps to make its sweep-second hand tick. There's no way around this, except to educate the user, so that expectations are reasonable. An action game will kill your battery whether it's an app or a live wallpaper.
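A bare-bones sketch of point 1, structured the same way as the cube example: all drawing is driven by a posted Runnable, and the callbacks are removed the moment the wallpaper is no longer visible (the actual drawing code is omitted).

import android.os.Handler;
import android.service.wallpaper.WallpaperService;
import android.view.SurfaceHolder;

public class FrugalWallpaper extends WallpaperService {
    @Override
    public Engine onCreateEngine() {
        return new FrugalEngine();
    }

    private class FrugalEngine extends Engine {
        private final Handler handler = new Handler();
        private boolean visible;
        private final Runnable drawFrame = new Runnable() {
            @Override
            public void run() {
                // draw one frame to getSurfaceHolder() here ...
                if (visible) {
                    handler.postDelayed(this, 1000);   // 1 fps is plenty for a clock-style wallpaper
                }
            }
        };

        @Override
        public void onVisibilityChanged(boolean isVisible) {
            visible = isVisible;
            handler.removeCallbacks(drawFrame);
            if (isVisible) {
                handler.post(drawFrame);
            }
        }

        @Override
        public void onSurfaceDestroyed(SurfaceHolder holder) {
            visible = false;
            handler.removeCallbacks(drawFrame);
            super.onSurfaceDestroyed(holder);
        }

        @Override
        public void onDestroy() {
            handler.removeCallbacks(drawFrame);
            super.onDestroy();
        }
    }
}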

Performance...
Drawing to a canvas has the benefit of simplicity, but for a very sophisticated wallpaper you will need to use OpenGL. There's GLWallpaperService, and AndEngine. The stock wallpapers are rigged to use RenderScript. And there was some talk about extending libGDX to handle wallpaper.

Best Fit...
Well, it's just like the rest of Android: you need to design your artwork in terms of scalable proportions, query the device, and adjust accordingly. For a simple wallpaper, it's usually enough to scale your artwork in onSurfaceChanged(), where you are given the width and the height as parameters. In some cases you may want to examine the full DisplayMetrics.

Useful links...
Code for stock wallpapers: http://android.git.kernel.org/?p=platform/packages/wallpapers/Basic.git;a=tree
Accessing DisplayMetrics ... search for DisplayMetrics here: http://www.codeproject.com/KB/android/AndroidLiveWallpaper.aspx
Moonblink is smart: http://code.google.com/p/moonblink/wiki/Substrate

Title: Turning "Discoverable" on Id: 6586356, Count: 85 Tags: Answers: 2 AcceptedAnswer: 6589898 Created: 2011-07-05 17:13:05.0 Body:

Does turning "Discoverable" on under Bluetooth setting increase power consumption? When it's switched on, does it constantly put the phone in inquiry scanning? If not constant, how often does it inquiry scan?

Popularity: 7.0 Answer #6586409, count #1, created: 2011-07-05 17:18:31.0

http://forums.macrumors.com/showthread.php?t=727711

This is a thread all about bluetooth and power drain. From the two minutes I took looking for it, it seems that, as one would logically think, having it on discoverable mode does drain it faster.

How much? I'm sure that you can find that information in that thread.

Answer #6589898, count #2, created: 2011-07-05 23:12:18.0

Yes, it will drain power. Most devices will allow themselves to be discoverable only for a few minutes or seconds, for this reason and also for privacy reasons. The duty cycle of the scan is configurable by the Bluetooth stack; by default it scans for a few milliseconds and rests for a few milliseconds, and the actual periodicity also depends on whether the device is in other connections, etc. Some of the newer chips have a low-power scan which checks for the presence of RF activity before doing a full scan.

Title: Explicitly Managing WiFi Power Consumption in Android Id: 6602241, Count: 86 Tags: Answers: 1 AcceptedAnswer: 6603587 Created: 2011-07-06 20:05:53.0 Body:

Background

I'm developing a research application that runs on Android phones. In short, the application runs so long as the phone is on and periodically takes information from many components and sensors on the phone. The application is to disturb the user as little as possible. That being said, it's draining the battery far too quickly and forces the user to recharge every day. This simply won't do.

To try and figure out how to improve the situation, a colleague also working on the application let the application run for a long period of time and noticed that the biggest battery hog is WiFi. My current idea is to manually shut off WiFi when it's not in use in an attempt to save power. AFAIK, Android uses PSM for WiFi to accomplish this to some end, but it doesn't seem to be enough.

Problem

Is there a way to "ramp up" Android's PSM? Or, if there is not as this question suggests, is there any way that I can safely turn WiFi on and off without adversely affecting the user? I.e., is there a way to tell which applications are using WiFi and turn it off when none are? Do standard applications - such as the web browser and email clients - use WiFi locks to prevent WiFi from being turned off when they are working?

Any advice on where to start in solving this problem are greatly appreciated. Information on how Android's PSM works, how long it takes for it to take effect, or any information relevant to the problem are very welcome.

Thanks for your time!

Popularity: 7.0 Answer #6603587, count #1, created: 2011-07-06 21:59:10.0

Is there a way to "ramp up" Android's PSM?

Not via the Android SDK.

I.e., is there a way to tell which applications are using WiFi and turn it off when none are?

The OS does this already.

Do standard applications - such as the web browser and email clients - use WiFi locks to prevent WiFi from being turned off when they are working?

Some probably do. You are welcome to search the Android source code and find out. Of course, bear in mind that there are no "standard applications" -- I presume you are thinking of the ones that are part of the Android open source project.

Any advice on where to start in solving this problem are greatly appreciated.

Find out where in your own code you are being inefficient, specifically here:

the application runs so long as the phone is on and periodically takes information from many components and sensors on the phone.

If the device behaves fine when your code is not running, and the device does not behave fine when your code is running, then the problem lies in your code. Conversely, if the device does not behave fine even when your code is not running, then something else is afoot (device defect, firmware defect, rogue application, etc.), but it probably has nothing to do with StackOverflow.
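If you do end up experimenting with toggling Wi-Fi yourself, as the question contemplates, the SDK calls involved look roughly like this; note that the switch is device-wide, needs the CHANGE_WIFI_STATE permission, and will cut connectivity for every other app while Wi-Fi is off:

import android.content.Context;
import android.net.wifi.WifiManager;

public class WifiToggler {
    public static void setWifi(Context context, boolean enabled) {
        WifiManager wifi = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
        if (wifi.isWifiEnabled() != enabled) {
            wifi.setWifiEnabled(enabled);   // global switch, affects all applications
        }
    }
}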

Title: Bluetooth + Android + scatternet topology Id: 6829909, Count: 87 Tags: Answers: 2 AcceptedAnswer: null Created: 2011-07-26 12:20:49.0 Body:

I am having some conceptual troubles with Bluetooth and Android. As I have been researching, Bluetooth permits up to 7 slave devices connected to the same master device, building a piconet network. BT also permits a master device to act as a slave device for another piconet, building a scatternet network, where all devices could be connected to each other using an upper protocol.

My questions are about Bluetooth behaviour and Android.

1) How can you know in Android that your device is acting as a master or a slave, or is a master acting as an slave for another piconet, or is slave connected to two masters forming a scatternet?

2) Bluetooth only lets 7 active slaves per master, previously selected during the Inquiry scan, and up to 255 slave devices on "park" mode (synchronized listening the master but not active). How can you connect to this "park" devices? Only if one of the active ones disconnect because a broken link for instance so one "park" can connect?

3) Can you configure on Android (or BlueZ through Android) the device to work on different modes as "sniff" or "hold" for power consumptions purposes?

4) Does Android API gives the possibility of broadcasting to all slaves of the same piconet? If it does, i hope it gives the possibility of sending custom data information.

Thank you very much in advance for your time helping me.

Popularity: 18.0 Answer #6835943, count #1, created: 2011-07-26 20:08:21.0

To add - there is one more possibility for scatternet - a Device acting as slave in more than one piconet.

1) How can you know in Android that your device is acting as a master or a slave, or is a master acting as an slave for another piconet, or is slave connected to two masters forming a scatternet?

you cannot - I don't think there is a public API - and the idea is that for applications it should not matter.

2) Bluetooth only lets 7 active slaves per master, previously selected during the Inquiry scan, and up to 255 slave devices on "park" mode (synchronized listening the master but not active). How can you connect to this "park" devices? Only if one of the active ones disconnect because a broken link for instance so one "park" can connect?

Basically, Bluetooth can connect to and be in active connection with up to 7 devices. An active device can then be put in park mode, and a master can have a large number of devices (more than 255, even) in park mode, so a device moves to park from the connected-active state and vice versa. But again, at any one point you can have only 7 active devices, so the master device can manage a large number of devices by keeping up to 7 active and the rest parked, and keep switching them between active and parked modes.

Having said all this, don't use park mode any more - it's deprecated in the Bluetooth spec and also prone to many interoperability problems.

3) Can you configure on Android (or BlueZ through Android) the device to work on different modes as "sniff" or "hold" for power consumptions purposes?

Nothing at the application API exists for this - but typically devices switch to sniff on inactivity (controlled by the underlying Bluetooth stack's policy management algorithm). Again, hold is rarely used - sniff is the mode typically used for power saving in Bluetooth.

4) Does Android API gives the possibility of broadcasting to all slaves of the same piconet? If it does, i hope it gives the possibility of sending custom data information.

There are again no APIs for broadcast - but yes, with Bluetooth it is possible to broadcast to all active and even parked devices. Yes, it can also send custom data.

But unfortunately there are no APIs for developers to exploit and use a lot of these functionalities provided by the Bluetooth technology.

Answer #18338206, count #2, created: 2013-08-20 14:51:15.0

Beddernet for Android is an open-source framework that allows you to communicate with a large number of devices.

Take a look at this; maybe it can be helpful in your situation.

https://code.google.com/p/beddernet/

Title: Power optimization in Linux Id: 6912859, Count: 88 Tags: Answers: 2 AcceptedAnswer: 6913421 Created: 2011-08-02 13:29:30.0 Body:

I am working on a traffic surveillance project which performs various image processing tasks with a number of visual sensors and a computing platform. My basic task in the project is the power optimization/management. I am using a ZOTAC-IONITX computing platform (Intel ATOM CPU + NVIDIA ION GPU). The problems that I am currently facing are:

I am unable to model the power consumption of various components e.g., processor, GPU, hard drive, memory etc, since there seems to be no way to measure the power consumption of individual system components. Since I don't have a power consumption model, I cannot come up with a power optimization algorithm. I am currently working on Linux.

I would really appreciate any suggestions in this regard.

Popularity: 8.0 Answer #6912959, count #1, created: 2011-08-02 13:37:21.0

Can you measure total power input under various controlled conditions? Simulate the variables you can manage such as disk drive operation?

Answer #6913421, count #2, created: 2011-08-02 14:08:19.0

ACPI is designed to handle not only full system suspend/wakeup, but should work on a per-device base too. This should help you with testing the effect on overall system power consumption.

But first look at general recommendations for power management like this one for Gentoo and try the generic solutions, that others have done before.

You may already get what you want this way. After all, ACPI is often referred to as complicated, and the fact that there is little to find about selective suspend other than for USB (external) devices most likely indicates that it is not a great, or at least not an easy, way to go. Depending on your expertise (in hardware and GNU/Linux) you could still succeed, since a Linux OS tends to operate close to the hardware and is a powerful base for tricky computing operations in general.

But as Ben Voight said before, x86 is in general not the preferred platform for power-efficient applications, and you had better look for alternatives, if this is allowed within your project task at all.

Title: How would a multithreaded program be more energy efficient? Id: 6925572, Count: 89 Tags: Answers: 6 AcceptedAnswer: 6925732 Created: 2011-08-03 11:14:23.0 Body:

In its Energy-Efficient Software Guidelines Intel suggests that programs are designed multithreaded for better energy efficiency.

I don't get it. Suppose I have a quad-core processor that can switch off unused cores. Suppose my code is perfectly parallelizable (synchronization overhead is negligible).

If I use only one core I burn one core for one hour, if I use four cores I burn four cores for 15 minutes - the same amount of core-hours either way. Where's the saving?

Popularity: 11.0 Answer #6925608, count #1, created: 2011-08-03 11:17:12.0

During that one hour, the one core isn't the only thing you keep running.

Answer #6925611, count #2, created: 2011-08-03 11:17:21.0

If a program is multithreaded that doesn't mean that it would use more cores. It just means that more tasks are dealt with in the same time so the overall processor time is shorter.

Answer #6925671, count #3, created: 2011-08-03 11:22:12.0

You burn 4 times energy with 4 cores but you do 4 times more work too! If, as you said, the synchro is negligible and the work is parallelizable, you'll spend 4 times less time.

Using multiple threads can save energy when you have i/o waits. One thread can wait while other threads can perform other computations; instead of having your application idle.

Answer #6925732, count #4, created: 2011-08-03 11:28:19.0

I suspect it has to do with a non-linear relation between CPU utilization and power consumption. So if you can spread 100% CPU utilization over 4 CPUs each will have 25% utilization - and say 12% consumption.

This is especially true when dynamic CPU scaling is used: according to Wikipedia, the power drain of a CPU is P = C·V²·F. When a CPU is running faster it requires higher voltages, and that 'to the power of 2' becomes crucial. Furthermore, the voltage will be a function of F (i.e. V can be written as a function of F), giving something like P = C·F²·F = C·F³. Thus by spreading the load over 4 CPUs, each running at a lower frequency, you can mitigate the cost for the same work.

We can make F a function of L (load) at 100% of one core (as it would be in your OS), so:

F = 1000 + L/100 * 500 = 1000 + 5L
p = C((1000 + 5L)^2)(1000 + 5L) = C(1000 + 5L)^3

Now that we can relate load (L) to the power consumption we can see the characteristics of the power consumption given everything on one core:

p = C(1000 + 5L)^3
  = C(1000000000 + 15000000L + 75000L^2 + 125L^3)

Or spread over 4 cores:

p = 4C(1000 + (5/4)L)^3
  = C(4000000000 + 15000000L + 18750L^2 + 7.8125L^3)

Notice the factors in front of the L^2 and L^3.

Answer #6925766, count #5, created: 2011-08-03 11:31:24.0

A CPU is one part of a computer; the computer also has fans, a motherboard, hard drives, a graphics card, RAM, etc. Let's call all of that the BASE. If you're doing scientific computing (i.e., a compute cluster) you are powering many computers. If you are powering hundreds of BASEs anyway, why not allow those BASEs to have multiple physical CPUs on them, so those CPUs can share the resources of the BASE, physical and logical.

Now Intel's marketing blurb probably also depends on the fact that these days each CPU package contains multiple cores. Powering multiple physical CPUs is different from powering a single physical CPU with multiple cores.

So if the amount of work done per unit of power is the benchmark in question, then for modern CPUs performing highly parallel tasks, yes, you get more bang for your buck compared with the previous generation of processors: not only can you get more cores per CPU, it is also common to get BASEs which can take multiple CPUs.

One may easily assert that one top-end system can now house the processing power of 8-16 single-CPU, single-core machines of the past (assuming, in this hypothetical case, that each core on the new system and the older-generation system has the same processing power).

Answer #6933070, count #6, created: 2011-08-03 20:45:38.0

There are 3 reasons, two of which have already been pointed out:

  1. More overall time means that other (non-CPU) components need to run longer, even if the net calculation for the CPU remains the same
  2. More threads mean more things are done at the same time (because stalls are used for something useful), again the overall real time is reduced.
  3. The CPU power consumption for running the same calculations on one core is not the same. Intel CPUs have built-in clock boosting for single-core usage (I forget the marketing buzzword for it). A higher clock means disproportionately more power consumption and disproportionately more heat, which in turn requires the fan to spin faster, too.

So in summary, you consume more power with the CPU and more power for cooling the CPU for a longer time, and you run other components for a longer time, too.

As a 4th reason, one could add (note that this is only an assumption!) that Intel CPUs are hyperthreaded, and since hyperthreaded cores share some resources, running two threads at once is more efficient than running one thread for twice as long.

Title: powerTutor, Android application battery consumption measurement Id: 6941251, Count: 90 Tags: Answers: null AcceptedAnswer: null Created: 2011-08-04 12:16:54.0 Body:

For my study I need to measure the battery power consumption of my application. After searching, I found the PowerTutor app. My question is: is using this app enough for measuring the battery consumption, or do I need more information?

Thank you

Popularity: 6.0 Title: Setting airplane mode does not completely work Id: 7086698, Count: 91 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-08-17 00:24:58.0 Body:

I've written the code below to put the phone into airplane mode to save power. The device is being used as a WiFi hotspot to relay data from some sensors in a village in Indonesia. The sensors send their data at the same time, so I just need to come out of airplane mode for five minutes at midnight and then re-enter airplane mode.

The problem is that the cellular radio is not shut off and the airplane icon does not appear. Though the phone reports its status as airplane mode on, it is still possible to call it. Other widgets in the marketplace seem to fare no better; I've tried "Airplane Mode Wi-Fi Tool", and it too can neither get the airplane icon to appear nor disable the cell radio. When watching LogCat while using the device settings to enter airplane mode, I can see that much more is happening than when trying it from the program.

If I load my program on a Droid, this code works as expected. AIRPLANE_MODE_RADIOS is set to cell, bluetooth, wifi.

The offending device is a Samsung Galaxy 5, I5500 tested with:

-Froyo 2.2 build FROYO.UYJP2 -Froyo 2.2.1 build FROYO.UYJPE

One interesting side note: if I programmatically set airplane mode and then power cycle the device, it comes up in full airplane mode, rejects incoming calls etc.

Do others have similar stories with this or other devices? Is there a way to specifically turn off cell only?

public static void setAirplaneMode(Context context, boolean status) {
    boolean isAM = Settings.System.getInt(context.getContentResolver(),
            Settings.System.AIRPLANE_MODE_ON, 0) != 0;
    String radios = Settings.System.getString(context.getContentResolver(),
            Settings.System.AIRPLANE_MODE_RADIOS);
    // This line reports all radios affected, but the annunciator does not seem
    // to agree: it does not show the airplane icon.
    Wake.logger("Airplane mode is: " + isAM + " changing to " + status
            + " For radios: " + radios, false);
    // It appears airplane mode should only be toggled. Don't reset to the current state.
    if (isAM && !status) {
        Settings.System.putInt(context.getContentResolver(),
                Settings.System.AIRPLANE_MODE_ON, 0);
        Intent intent = new Intent(Intent.ACTION_AIRPLANE_MODE_CHANGED);
        intent.putExtra("state", 0);
        context.sendBroadcast(intent);
        return;
    }
    if (!isAM && status) {
        Settings.System.putInt(context.getContentResolver(),
                Settings.System.AIRPLANE_MODE_ON, 1);
        Intent intent = new Intent(Intent.ACTION_AIRPLANE_MODE_CHANGED);
        intent.putExtra("state", 1);
        context.sendBroadcast(intent);
        return;
    }
}
Popularity: 14.0 Answer #7101488, count #1, created: 2011-08-18 01:42:04.0

Classic bit twister error. The extra data argument in the broadcast intent needed to be true/false, not 1/0. Ugh!!!

 intent.putExtra("state", true); //Not 1!! 

One phone worked another didn't. Now both do.

Title: Disable Linux Scheduler to meter power consumption of specific machine code instructions Id: 7098813, Count: 92 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-08-17 20:11:06.0 Body:

How can I run a program in Linux that executes a shift instruction about 10,000 times without being interrupted by the scheduler? I want to do this in order to examine the power consumption of the native shift instruction.

Popularity: 3.0 Answer #7099885, count #1, created: 2011-08-17 21:41:44.0

The scheduler is not going to interrupt your task unless something else needs to run. Hardware interrupts (e.g. timers) will still happen and will interrupt it briefly, but normally not for very long.

However, I'm really not sure how measuring the power of a particular instruction is relevant; modern CPUs don't really work like that - they don't ever run just one instruction at once, and no "real" program uses just a single type of instruction.

I don't think the impact of hardware interrupts is going to be very much, especially if you're running a "tickless" kernel (which is usually enabled by default on newer systems).

Title: Rails Minimizing Database Load Id: 7262546, Count: 93 Tags: Answers: 2 AcceptedAnswer: 7262851 Created: 2011-08-31 19:47:14.0 Body:

I am relatively new to Rails. I understand that Rails lets you work with your database values with great ease, but I am a little in the dark about which kinds of approach are more energy efficient on the database and which are not.

Here is a case in point. I have a model Appointment which belongs_to :user. In my code I can sometimes say process_user @appointment.user. When I write that, does it run a separate SELECT query on the database to retrieve that user? Is it more efficient to write process_user @appointment.user_id, where user_id is an attribute on the appointment, and then use the user_id value to perform my evaluation-related tasks, as long as I don't need the whole user object @appointment.user?

Frankly, from a peace-of-mind point of view, I just love being able to use process_user @appointment.user because it reads better, looks nicer and works better when preparing logic. Is it a performance-efficient way?

Popularity: 6.0 Answer #7262798, count #1, created: 2011-08-31 20:09:35.0

You can eagerly load your associated users with your Appointment models:

Appointment.all(:include => :user) 

...which will join in the users or do a separate lookup for all the associated users in a single query.

This will then load the user association in advance (eagerly) so the user attribute is already populated with the object when you reference it, instead of having to stop and execute a separate query to look it up one by one (N+1 queries).

Answer #7262851, count #2, created: 2011-08-31 20:14:33.0

You are perfectly fine using code like process_user @appointment.user, as ActiveRecord tries its best to minimize the number of database queries. Of course it does not handle all situations perfectly, but your example is a very basic one: there would probably be no immediate database query, and the object would only be loaded when its attributes are accessed.

If you notice performance problems in a large-scale running application and you can track them down to ActiveRecord using profiling, it is probably time to optimize. Trying to pre-optimize from the very beginning would go against Rails' philosophy and will only result in ugly (and possibly even slower) code. Remember that the real performance bottlenecks are often in places where you would never expect them.

EDIT: As Winfield pointed out, optimizing the number of queries does not usually mean managing foreign keys or similar internals yourself. There are quite a number of flags and options for the DB access methods that allow you to control how your database is queried.

Title: Is using hardware performance counters a good idea Id: 7312597, Count: 94 Tags: Answers: 2 AcceptedAnswer: 7312635 Created: 2011-09-05 20:47:14.0 Body:

Say I want to implement software that uses hardware performance counters, such as those for counting retired stores. Note that alternative solutions without the performance counters are possible but might be somewhat less efficient. However, I can sacrifice a bit of performance for portability and power efficiency. Also note that the performance counters will be kept on the whole time.

How power-hungry are hardware performance counters? Secondly, are there popular platforms or processors, single- or multi-core, which don't have performance counters? If so, could you kindly name them.

Popularity: 7.0 Answer #7312620, count #1, created: 2011-09-05 20:49:45.0

It's been some time since I used performance counters, but maybe PAPI can help you with some of this.

Answer #7312635, count #2, created: 2011-09-05 20:51:15.0

PAPI is a common profiling library that you compile into your code to access the hardware counters. From my own experience, it doesn't have a noticeable effect on performance.

Although I don't know for sure, I would assume that it will not increase power consumption, because the hardware counters are always enabled in the hardware; it's just a matter of reading them.

As far as I know, there are no modern non-embedded processors that don't have performance counters. I may be wrong. Someone care to correct me?

Title: Android webkit and battery consumption Id: 7327632, Count: 95 Tags: Answers: 2 AcceptedAnswer: null Created: 2011-09-07 00:24:22.0 Body:

I've been working on an Android WebKit application, which has a considerably sophisticated UI (plenty of icons, CSS, JS and HTML5 pages). However, the application drains the device battery. I installed some tools to measure power consumption, and it's pretty clear that the Android WebKit (not the application engine) really demands CPU to render HTML content, and it also impacts battery consumption (oh, and memory as well). I'd like to know if anyone has ever had problems with WebKit vs. performance (power and CPU consumption). In addition, is there any Android WebKit component (instead of WebView) with better performance?

--Raul

Popularity: 6.0 Answer #7327670, count #1, created: 2011-09-07 00:31:07.0

I do find that the browser does consume a high amount of CPU but only during rendering. Once the page is loaded usage drops to 0.00%.

Have you run top on the device and made sure this is where the CPU usage is coming from?

Answer #7331058, count #2, created: 2011-09-07 08:44:30.0

It's very smart to optimize whatever your WebView loads to be friendly. A mobile device is not a small laptop. If your JavaScript, especially with a framework like jQuery, does a lot of animations and fade effects, or if you keep pinging with asynchronous connections, your device will drain its battery.

If you need to use WebView extensively, make sure to balance JavaScript actions (which use CPU) against user activity (if the user is active, run the actions more often; if not, don't keep certain actions in a loop). It's also smart to optimize your CSS and web content in general so that the least amount of processing power is required to load the page. Even compressing your JavaScript, CSS and returned HTML is a good way to go about it. One concrete measure on the Android side is sketched below.
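
A minimal sketch of one such measure (assuming API level 11+ and a WebView owned directly by the Activity; the class name is illustrative): suspend the WebView's timers and rendering whenever the user leaves the screen, so JavaScript loops stop burning CPU and battery in the background.

import android.app.Activity;
import android.os.Bundle;
import android.webkit.WebView;

public class BrowserActivity extends Activity {
    private WebView webView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        webView = new WebView(this);
        setContentView(webView);
        webView.getSettings().setJavaScriptEnabled(true);
        webView.loadUrl("http://example.com/"); // placeholder URL
    }

    @Override
    protected void onPause() {
        super.onPause();
        webView.onPause();       // stops rendering/JS for this WebView (API 11+)
        webView.pauseTimers();   // pauses timers for all WebViews in the process
    }

    @Override
    protected void onResume() {
        super.onResume();
        webView.resumeTimers();
        webView.onResume();
    }
}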

Title: Battery is low. Charging current not enough...Is there intent before this message is shown? Id: 7404185, Count: 96 Tags: Answers: 2 AcceptedAnswer: 7404482 Created: 2011-09-13 15:09:56.0 Body:

I have message on my device, and it says the following:

Battery is low. Charging current not enough for device power consumption. Please switch to AC adapter.

Is there any intent fired before this message is shown?

And how do they calculate if this message will be shown or not?

And the thing I do not understand is how the power supply is not enough?

It is so weird -- my phone is plugged in to my PC, and instead of being charged, the battery goes empty

NOTE: The message is shown only in the case where the device is plugged in to a USB port of the PC.

I had an HTC Desire for a year and this message was never shown; but now, with the Sensation, this message occurs very often.

Does anyone know how can I catch this intent -- if it is fired at all?

Screen Capture: Battery is low. Charging current not enough for device power consumption. Please switch to AC adapter.

Popularity: 66.0 Answer #7404264, count #1, created: 2011-09-13 15:14:31.0

Some devices (tablets?) consume more power than what can be provided via USB port.

You could try to detect this by detecting both BatteryManager.BATTERY_PLUGGED_USB and BATTERY_STATUS_DISCHARGING.

Battery low can be detected by registering to ACTION_BATTERY_LOW broadcast.

Answer #7404482, count #2, created: 2011-09-13 15:29:32.0

As far as I know there is no explicit intent. You can try listening for the sticky BATTERY_CHANGED broadcast intent. This should contain at least the BATTERY_PLUGGED_USB info from the BatteryManager, probably in combination with BATTERY_STATUS_DISCHARGING.
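
A minimal sketch of that detection (Android, standard BatteryManager extras; the helper class name is arbitrary):

import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.BatteryManager;

public final class UsbPowerCheck {
    // Returns true if the device is plugged into USB power but still discharging,
    // which is roughly the condition under which the warning above appears.
    public static boolean isUsbPowerInsufficient(Context context) {
        IntentFilter filter = new IntentFilter(Intent.ACTION_BATTERY_CHANGED);
        // Passing null as the receiver simply returns the current sticky intent.
        Intent battery = context.registerReceiver(null, filter);
        if (battery == null) {
            return false;
        }
        int plugged = battery.getIntExtra(BatteryManager.EXTRA_PLUGGED, -1);
        int status = battery.getIntExtra(BatteryManager.EXTRA_STATUS, -1);
        return plugged == BatteryManager.BATTERY_PLUGGED_USB
                && status == BatteryManager.BATTERY_STATUS_DISCHARGING;
    }
}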

This happens because the USB ports of a computer supply a limited current: for USB 2.0 ports it's either 500 mA (high power) or 100 mA (low power¹). That's not enough to power a device with a big screen and a decent CPU along with other hardware (GPS, for example, is expensive current-wise). Normal dedicated chargers usually provide around 1000 mA (1 A).

This occurs more often with the Sensation (it sometimes happens with my Desire too) since it has a bigger screen (4.3" vs. the Desire's 3.7") and a faster CPU. The screen is also an LCD; some of the Desire models have an OLED display instead (maybe you got one of those). The OLED drains far less battery while displaying dark, bluish content; if you have a lot of white/reddish content instead, the LCD consumes less (when comparing similar sizes, of course) - so this might also be a factor.

You can try to avoid this message by turning off the screen and sending the device to standby for a few minutes. This should charge it at least a bit, since the power consumption is far smaller.

¹ Low power: that's the case when you have your device on a USB hub without its own dedicated power supply.

Title: what is the different of busy loop with Sleep(0) and pause instruction? Id: 7488196, Count: 97 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-09-20 16:02:38.0 Body:

I would like to wait on an event in my app which is supposed to happen immediately, so I don't want to put my thread to sleep and wake it up later; I wonder what the differences are between using Sleep(0) and the hardware pause instruction.

I cannot see any difference in CPU utilization for the following program. My question isn't about power-saving considerations.

#include <iostream>
#include <windows.h>
using namespace std;

bool t = false;

int main()
{
    while (t == false)
    {
        __asm { pause };
        //Sleep(0);
    }
}
Popularity: 6.0 Answer #7488352, count #1, created: 2011-09-20 16:13:10.0

Sleep is a system call, which allows the OS to reschedule the CPU time to any other process, if available, before allowing the caller to continue (even if the parameter is 0).

__asm {pause}; is not portable.

Well, Sleep isn't portable either, but at the system-library level rather than at the CPU level.

Title: Obstacle avoidance using 2 fixed cameras on a robot Id: 7541489, Count: 98 Tags: Answers: 2 AcceptedAnswer: null Created: 2011-09-24 19:50:44.0 Body:

I will be starting work on a robotics project which involves a mobile robot with 2 cameras (1.3 MP) mounted at a fixed distance of 0.5 m from each other. I also have a few ultrasonic sensors, but they have only a 10-metre range and my environment is rather large (as an example, take a large warehouse with many pillars, boxes, walls, etc.). My main task is to identify obstacles and also find a roughly "best" route that the robot must take in order to navigate in a "rough" environment (the ground floor is not smooth at all). All the image processing is done not on the robot, but on a computer with an NVIDIA GT425 and 2 GB of RAM.

My questions are :

  1. Should I mount the cameras on a rotating support, so that they take pictures over a wider angle?

  2. Is it possible to create a reasonable 3D reconstruction based on only 2 views at such a small distance from each other? If so, to what degree can I use this for obstacle avoidance and constructing a best route?

  3. If a roughly accurate 3D representation of the environment can be made, how can it be used to create a map of the environment? (Consider the following example: the robot must sweep a fairly large area, and it would be energy efficient if it did not go through the same place (or course) twice; however, when a 3D reconstruction is made from one direction, how can it tell that it has already been there if it comes from the opposite direction?)

I have found this response to a similar question, but I am still concerned about the accuracy of the 3D reconstruction (for example, a couple of boxes situated at 100 m, considering the small resolution and the distance between the cameras).

I am just starting to gather information for this project, so if you have worked on something similar please give me some guidelines (and some links :D) on how I should approach this specific task.

Thanks in advance, Tamash

Popularity: 14.0 Answer #7572051, count #1, created: 2011-09-27 15:53:39.0

If you want to do obstacle avoidance, it is probably easiest to use the ultrasonic sensors. If the robot is moving at speeds suitable for a human environment then their range of 10m gives you ample time to stop the robot. Keep in mind that no system will guarantee that you don't accidentally hit something.

(2) Is it possible to create a reasonable 3D reconstruction based on only 2 views at such a small distance from each other? If so, to what degree can I use this for obstacle avoidance and constructing a best route?

Yes, this is possible. Have a look at ROS and their vSLAM. http://www.ros.org/wiki/vslam and http://www.ros.org/wiki/slam_gmapping would be two of many possible resources.

however, when a 3D reconstruction is made from one direction, how can it tell that it has already been there if it comes from the opposite direction

Well, you are trying to find your position given a measurement and a map. That should be possible, and it wouldn't matter from which direction the map was created. However, there is the loop closure problem. Because you are creating a 3D map at the same time as you are trying to find your way around, you don't know whether you are at a new place or at a place you have seen before.

CONCLUSION This is a difficult task!

Actually, it's more than one. First you have simple obstacle avoidance (i.e. Don't drive into things.). Then you want to do simultaneous localisation and mapping (SLAM, read Wikipedia on that) and finally you want to do path planning (i.e. sweeping the floor without covering area twice).

I hope that helps?

Answer #12335551, count #2, created: 2012-09-09 00:21:39.0
  1. I'd say no if you mean each eye rotating independently. You won't get the accuracy you need for the stereo correspondence, and it will make calibration a nightmare. But if you want the whole "head" of the robot to pivot, that may be doable - you should have good encoders on the joints, though.

  2. If you use ROS, there are some tools which help you turn the two stereo images into a 3D point cloud: http://www.ros.org/wiki/stereo_image_proc. There is a trade-off between your baseline (the distance between the cameras) and your resolution at different ranges: a large baseline means greater resolution at large distances, but it also has a large minimum distance. I don't think I would expect more than a few centimetres of accuracy from a static stereo rig, and this accuracy only gets worse when you compound the robot's location uncertainty.

    2.5. For mapping and obstacle avoidance, the first thing I would try to do is segment out the ground plane. The ground plane goes to mapping, and everything above it is an obstacle. Check out PCL for some point-cloud processing functions: http://pointclouds.org/

  3. If you can't simply put a planar laser on the robot, like a SICK or Hokuyo, then I might try to convert the 3D point cloud into a pseudo laser scan and then use some off-the-shelf SLAM instead of trying to do visual SLAM. I think you'll have better results.

Other thoughts: now that the Microsoft Kinect has been released, it is usually easier (and cheaper) to simply use that to get a 3D point cloud instead of doing actual stereo.

This project sounds a lot like the DARPA LAGR program. (learning applied to ground robots). That program is over, but you may be able to track down papers published from it.

Title: Why does increasing timer resolution via timeBeginPeriod impact power consumption? Id: 7590475, Count: 99 Tags: Answers: 2 AcceptedAnswer: 7590614 Created: 2011-09-28 22:36:02.0 Body:

I am currently writing an application in C# where I need to fire a timer approximately every 5 milliseconds. From some research, it appears the best way to do this involves P/Invoking timeBeginPeriod(...) to change the resolution of the system timer. It works well enough in my sample code.

I found an interesting warning about using this function on Larry Osterman's MSDN Blog in this entry:

Adam: calling timeBeginPeriod increases the accuracy of GetTickCount as well.

using timeBeginPeriod is a hideously bad idea in general - we've been actively removing all of the uses of it in Windows because of the power consumption consequences associated with using it.

There are better ways of ensuring that your thread runs in a timely fashion.

Does anyone know exactly why this occurs, or what those "better ways" (which are unspecified in the thread) might be? How much extra power draw are we talking about?

Popularity: 29.0 Answer #7590614, count #1, created: 2011-09-28 22:54:58.0

Because it causes more CPU usage. A good explanation is at Timers, Timer Resolution, and Development of Efficient Code.

Answer #7591340, count #2, created: 2011-09-29 01:02:10.0

Changing the system timer resolution does impact power usage, mainly because lots of developers do not understand Windows timers. You see lots of code with sleep or timer values of less than 15 ms. It also changes the behaviour of system tasks, which can result in more power usage.

Change the system timer to 1 ms and suddenly all this code that was only waking up every 15 ms starts to wake up much more often, and the CPU usage goes up.

However, from the user's perspective, the programs that have misused the timers can become more responsive - even the OS in the case of WinXP - so there is a trade-off.

I have a small program that changes the system timer so you can experiment and test the power usage for yourself. There are also a number of links and some more background at http://www.lucashale.com/timer-resolution/

Title: iPhone 4 profile power consumption (with instruments) Id: 7715148, Count: 100 Tags: Answers: 1 AcceptedAnswer: 7795536 Created: 2011-10-10 15:29:47.0 Body:

I have an app that I added a lot of animation to. The app also uses "iPhone sleep preventer" to play silent audio. Since then, I have noticed that the battery consumption increased by up to 4 times! I'd like to find a method to profile the power consumption (I think I saw an option in Instruments) so I can find and eliminate the offending method(s).

Where would I start looking for information like this? Currently I leave the phone on the desk for ~3 hours to record the power drain over time. Is there a better method to predict when the phone will run out of power if running my app continuously?

An extra side question: is the battery percentage displayed in the status bar linear, or is there some non-linearity towards the end of the battery's life?

Edit: I found a "power" preset in Xcode > Product > Profile > CPU > Energy Diagnostics. It doesn't seem to work perfectly, as the power consumption level is always 0/20, but it does tell me how much of the CPU time is spent on app foreground, graphics and music!

Now, I don't know how CPU power is managed: is running the CPU at 75% more power-consuming than, let's say, 30%? Intuitively it feels like it should be...

Thank you!

Popularity: 75.0 Answer #7795536, count #1, created: 2011-10-17 14:50:55.0

I'm no expert. In fact, I am only starting to power-profile an iPhone today, and came across your question here in the hope of learning.

So I will share what I've found in the meantime. In the iOS Developer Library I found the following:

  1. Connect the device to your development system.
  2. Launch Xcode or Instruments.
  3. On the device, choose Settings > Developer and turn on power logging.
  4. Disconnect the device and perform the desired tests.
  5. Reconnect the device.
  6. In Instruments, open the Energy Diagnostics template.
  7. Choose File > Import Energy Diagnostics from Device.

And you have a report of CPU and energy usage during the time of the log. You can find these steps and much more info in this section of the iOS Dev. Library.

I am still a bit new to this subject, so if you find anything that you think is meaningful, please post that info here.

Edit: The apple dev lib suffered some changes. Updated link

Title: Power Consumption of an Application Id: 7784585, Count: 101 Tags: Answers: 3 AcceptedAnswer: null Created: 2011-10-16 13:30:56.0 Body:

Is there any way to find out the power consumed by an application? For example, if I have some ten user apps running on my laptop, I would like to know how much power each application is consuming in a Linux environment.

Popularity: 8.0 Answer #7785810, count #1, created: 2011-10-16 16:54:42.0

That's an interesting question, and it does not have an easy answer that I've heard of.

Presuming that you have a way of metering the minute-to-minute consumption of the machine, you can get a crude approximation by examining the amount of CPU time used - either by watching things in top, or by examining the output of time(1). Compare the machine's total power consumption in various states of idleness and load with the amount of work done by each process; with enough statistics you should have a solvable system... possibly even an over-constrained one that calls for some kind of best-fit solution.


The only way that occurs to me to do it to high precision would be to use

  1. An instrumented virtual machine that accumulates statistics on which parts of the CPU were activated. (Do such things exist at this time?!?)
  2. The manufacturer's documentation for the chip and board you are running on, to total up the power implied.

which would be a horribly complicated mess.

Sorting out which bits were needed just to provide the environment and which could be unambiguously attributed to the program won't be easy.


I have to ask...why?

Answer #7787176, count #2, created: 2011-10-16 20:41:34.0

The PowerTop tool might have something for you; look up the section "Power usage". If the tool itself is not what you want, you can investigate where the tool retrieves its information and evaluate it in the way you want.

Answer #7787213, count #3, created: 2011-10-16 20:48:02.0

I don't know if there's really a "good way" to do this. But here's a suggestion for a generic approach that would work regardless of operating system: Remove the battery from your laptop, and hook up its power adapter to a high-precision current meter. Note the draw when no "normal" applications are running. Then run each application on its own and note the differences in current draw.

Title: How to control the Screen Power Saver mode? Id: 7835315, Count: 102 Tags: Answers: 2 AcceptedAnswer: 7864373 Created: 2011-10-20 11:34:17.0 Body:

In my application I want to add this feature:

After the application is opened and until it is closed, I don't want to allow the device to go into its power-saver mode, even if the user isn't doing anything on the screen (i.e. it is idle). How can I manage this? After my application is closed, the device should be allowed to enter power-saver mode again.

Edit: I found the solution to this query.

Popularity: 9.0 Answer #7858351, count #1, created: 2011-10-22 08:36:00.0

add this code

getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON); 
Answer #7864373, count #2, created: 2011-10-23 04:35:20.0

Use this keyword for your layout:

FLAG_KEEP_SCREEN_ON 
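
For completeness, a minimal sketch of how that flag is typically applied and cleared (the class name is illustrative; setting android:keepScreenOn="true" on a view in the layout XML achieves the same effect):

import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

public class KeepAwakeActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Keeps the screen on (and out of the screen power saver) only while this
        // activity's window is in the foreground; the flag is released automatically
        // when the activity goes away, so no wake lock or permission is needed.
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
    }

    // If the "stay awake" requirement ends before the activity does, clear it explicitly:
    private void allowScreenTimeoutAgain() {
        getWindow().clearFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
    }
}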
Title: how much battery power consumption is allowable for an iphone app? Id: 7835843, Count: 103 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-10-20 12:21:42.0 Body:

How much battery power consumption is allowable for an iPhone application? My application's CPU activity is around 80%; will they accept my application in the App Store? Could anyone please tell me what would be acceptable memory usage, CPU activity and power consumption for an iPhone application?

Popularity: 4.0 Answer #7835966, count #1, created: 2011-10-20 12:33:16.0

The iOS App Developer library says:

Apps Must Be Tuned for Performance 

For iOS apps, performance means more than just writing fast code. It often means writing better code so that your user interface remains responsive to user input, your app does not degrade battery life significantly, and your app does not impact other system resources.

I suggest you read this: Performance considerations for iOS apps. It clearly says that power consumption on mobile devices is always an issue; see the link for details.

Cheers!

Title: Manually pairing Bluetooth Decives in Android? Id: 7900386, Count: 104 Tags: Answers: 1 AcceptedAnswer: 7909377 Created: 2011-10-26 08:36:00.0 Body:

I was reading this http://developer.android.com/guide/topics/wireless/bluetooth.html#QueryingPairedDevices

which is a lot of help on how to pair and connect to a Bluetooth device.

I have a situation where I have several BT devices that are always in non-discoverable mode. I know the MAC and the PIN of these devices. Is there a way in Android to manually add devices to the paired list so I can just connect as a client? I understand this guide is written largely for v3; I think I will need to do this on 2.0/2.1 - has anybody done this before?

Basically, the devices I want to connect to are power-saving modules. I used pre-built BT modules to monitor daylight, another one humidity, etc.; they report every 3 hours or when interrupted and run off a single battery for months. So turning off discovery on the server saves immense power and prevents other people from trying to connect and wasting battery.

Popularity: 22.0 Answer #7909377, count #1, created: 2011-10-26 21:51:42.0

Not sure what you mean by "manually": Do you mean "manually" as in GUI/user interaction, or "manually" as "I do it in my own application code"?

Some suggestions though:

If you can make your BT devices discoverable at all, you could do it this way:

  1. Make your BT device discoverable
  2. Let Android search for and find the device and then initiate a connection
  3. Android will ask for the PIN for pairing with the device; enter the PIN.
  4. Once pairing was successful, Android stores the pairing information for future use, so that you can
  5. Make your BT device invisible again.

From then on your app should be able to connect to the BT device at any time without further pairing operations.

If the said is not an option for you, maybe you want to go another way:

In current Android versions there are various API routines implemented which are neither documented nor exposed in the normal SDK. A hack-style solution may be to use some of these "hidden" ("@hide"...) APIs, either via reflection or via modification of your SDK installation.

But be aware that this is always a hack and it may work on a specific device with a specific version of Android and is likely to break your app on another device and/or any other Android version.

Having said that, here comes some reference:

Example of how to access "hidden" bluetooth API.

Then, have a look at the source code for android.bluetooth.BluetoothDevice, e.g. here.

In there, public boolean createBond(){...} may do what you want.
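
A minimal sketch of that reflection route (createBond() is hidden on the Android 2.x versions in question, so behaviour may vary across devices and versions; the helper class is illustrative, and the PIN may still have to be supplied via the system dialog or the hidden setPin() method):

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import java.lang.reflect.Method;

public final class ManualPairing {
    // Attempts to pair with a non-discoverable device whose MAC address is already
    // known, by invoking the hidden createBond() method via reflection.
    // Returns true if the bonding process was started.
    public static boolean pairByMac(String macAddress) throws Exception {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothDevice device = adapter.getRemoteDevice(macAddress);
        Method createBond = device.getClass().getMethod("createBond");
        return (Boolean) createBond.invoke(device);
    }
}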

Title: How to estimate power consumption of an Android app? Is it linear? Id: 8074811, Count: 105 Tags: Answers: 2 AcceptedAnswer: 8074844 Created: 2011-11-10 03:54:18.0 Body:

I did a few experiments and find that I can't estimate the power consumption of an app.

e.g.: I find it is 100 mW when I just run my app, and 20 mW when I do nothing, so I think 80 mW is consumed by my app. But it is 200 mW when I run another app B together with my app, and 160 mW when I just run app B - so does my app only consume 40 mW? Which one is correct?

In my eyes it is related to the CPU load, or something else I don't know about. So we cannot estimate power consumption by subtraction, because it's not linear. I want to know how to estimate power consumption correctly.

Any advice is welcome.

Popularity: 17.0 Answer #8074844, count #1, created: 2011-11-10 03:58:59.0

I'm not sure how fine-grained a power consumption figure you're looking for, but I can tell you a good rule of thumb for keeping power usage to a minimum: only use the radios when necessary. The bulk of your app's power usage is going to come from using the GPS and network radios. If you can keep those to a minimum, your app is just going to take dainty little sips of power like it's drinking tea at a cricket match.

Answer #8074872, count #2, created: 2011-11-10 04:03:22.0

Have a look at PowerTutor

PowerTutor is an application for Google phones that displays the power consumed by major system components, such as the CPU, network interface, display and GPS receiver, and by different applications.


Title: How do I use Android PowerProfile private API? Id: 8100506, Count: 106 Tags: Answers: 2 AcceptedAnswer: 8490020 Created: 2011-11-11 22:00:26.0 Body:

I downloaded the source code from the link below and added it to my project.

http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/2.2_r1.1/com/android/internal/os/PowerProfile.java

I am getting an error: it cannot find the R file, as shown below.

 int id = com.android.internal.R.xml.power_profile; 

I also cannot import

import com.android.internal.util.XmlUtils; 

I basically want to measure the power consumption of Android devices.

Thanks in advance.

Popularity: 17.0 Answer #8490020, count #1, created: 2011-12-13 13:26:20.0

Yes, you can access the internal API, that is, com.android.internal.os.PowerProfile.

Just take a look at this link, and follow the step-by-step process.

Answer #16276198, count #2, created: 2013-04-29 10:18:37.0

You could use

int id = Resources.getSystem().getIdentifier("power_profile", "xml", "android"); 

But be aware of what FoamyGuy commented.
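
A minimal sketch of the reflection route (PowerProfile is a hidden API, so it can change between Android versions; the key names are assumptions taken from typical power_profile.xml files):

import android.content.Context;

public final class PowerProfileReader {
    private static final String POWER_PROFILE_CLASS = "com.android.internal.os.PowerProfile";

    // Reads an average-power entry (in mA) from the hidden PowerProfile class.
    // Keys such as "screen.full", "wifi.active" or "gps.on" come from the device's
    // power_profile.xml and vary between devices and Android versions.
    public static double getAveragePower(Context context, String key) throws Exception {
        Object profile = Class.forName(POWER_PROFILE_CLASS)
                .getConstructor(Context.class)
                .newInstance(context);
        return (Double) profile.getClass()
                .getMethod("getAveragePower", String.class)
                .invoke(profile, key);
    }
}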

Title: How do I get a power consumption from Linux kernel Id: 8102396, Count: 107 Tags: Answers: null AcceptedAnswer: null Created: 2011-11-12 03:42:35.0 Body:

I was wondering if there is a Linux command with which I can get the power usage from the Linux kernel. For example, I can get the CPU usage from /proc/stat. I was wondering if there is a similar interface from which I could get the power consumption, or numbers from which I could calculate it.

I basically want to measure the power consumption of an Android device. I know I can get the battery level of an Android device, but I want to measure POWER consumption, not just the battery level.

I tried PowerProfile, a private API, but I was not able to get it running in my project.

Thanks in advance.

Popularity: 4.0 Title: Did anyone find that sensor sampling rate can be lower again after the phone was quiet a period in Android? Id: 8134564, Count: 108 Tags: Answers: null AcceptedAnswer: null Created: 2011-11-15 10:06:46.0 Body:

In Android, the sensor sampling rate drops when the phone is lying quietly on the desk. Not only does it drop, it then drops again!

This figure shows the power consumption of the sensor. My phone is a Moto XT701, and normally the sensor sampling rate is 42 Hz (which means I am shaking it, as during t1 and t6).

T1: I am shaking phone (run status)

T2: put the phone on the desk and keep quiet, at this time sensor sampling rate is lower. (idle status)

T3: the phone is still on the desk but sensor doesn’t read data any more. (suspend status)

T4: I shake the phone again, but you can see the sensor only wakes up at about 62 s.

T5: put the phone on the desk again.

T6: I shake phone again.

T7: phone on the desk.

T8: phone still on the desk same as T3.

You can see that the CPU may be asleep during T3, and it is hard to wake it up again.

My question is: how does it go from idle status to suspend status? According to the figure above, you can't say it goes to suspend status after idling for a fixed period (because T2 <> T7). So I want to know how this works in terms of CPU scheduling. It would be best to point to some source code, but you can also give your guess.

Any advice is welcome.

Popularity: 3.0 Title: Use Android phone as ARM dev board? Id: 8328981, Count: 109 Tags: Answers: 2 AcceptedAnswer: 8343082 Created: 2011-11-30 16:03:00.0 Body:

Is it possible to have similar control over peripherals in Android as we have when writing device drivers?

I'm searching for a way to turn off all peripherals on an Android phone (e.g. display, Wi-Fi, GPS) to put the phone into a power-saving mode (1-5 mA).

Basically, I want to develop an application that wakes up from time to time, gets coordinates, exchanges information with a server (Wi-Fi or GSM) and then goes back to sleep; there is no need for the display.

Is that possible using a regular Android phone?

Popularity: 13.0 Answer #8343082, count #1, created: 2011-12-01 14:40:05.0

You can only program the ARM on Android in user mode. You don't have access to instructions such as MRC in ARM assembly, which require privileged modes. Perhaps there are custom ROMs you can install on your phone to do that, but I doubt it.

This web site has the closest description to what you're looking for.

Answer #8343145, count #2, created: 2011-12-01 14:44:26.0

I would recommend asking on CyanogenMod dev forums, or other such Android hacking sites and see what they have to say. I would not be surprised if the boot loader or something can be convinced to sleep for a long time and then wake the device up on a timer, sort of like an alarm clock. (I recall some phones used to have this functionality, to wake up on a timer.)

Title: In need of a Linux memory-utilization tool Id: 8377163, Count: 110 Tags: Answers: 3 AcceptedAnswer: null Created: 2011-12-04 17:32:12.0 Body:

I'm working on a thesis right now where I have to measure the usage and power consumption of VM's.

To do so, I create a VM, then log into it and start a lookbusy process which utilizes the allocated memory to the max.

However, I noticed that the real memory usage (of the host system) starts dropping after a couple of minutes of VM utilization.

When I log back in into the VM it shows full utilization, though.

Let's say, my VM has 2GB assigned. When I start the utilization, the VM as well as the Host show both that 2GB are under load.

After a short while, however, the host's memory usage starts decreasing and stops at about 400 MB, although the VM is still working at the max.

I assume it has something to do with only the needed memory pages being used, instead of the whole allocated memory.

This is why I now need your help - I need a tool that would let me utilize the allocated memory, but also keep the real host memory utilized, so that I can measure the power consumption of the host under such load.

Lookbusy would in fact do the job, if the memory usage didn't start dropping after 1-2 minutes. The measurements need to last for days!

Popularity: 6.0 Answer #8377338, count #1, created: 2011-12-04 17:54:52.0

If you want to measure memory consumption of a particular process (of process id 1234 for instance), the /proc/1234/ directory is relevant (or /proc/self/ from inside the process itself). In particular the stat, statm, status and maps pseudo-files there. For instance, cat /proc/self/maps or cat /proc/self/status gives you information about the cat process itself.

I'm not sure I understand what you mean by "measuring the VM"; AFAIK virtual machines like e.g. QEMU eat their specified memory.

I am also curious of how you measure power consumption.

Answer #8378668, count #2, created: 2011-12-04 21:17:46.0

You didn't offer many details... but, depending on the usage, you may be seeing the effects of ksm (kernel samepage merge). Check to see if ksm is enabled.

Answer #8378702, count #3, created: 2011-12-04 21:23:18.0

"Balloon memory" can also be an issue, for example with VMWare:

http://www.virtualinsanity.com/index.php/2010/02/19/performance-troubleshooting-vmware-vsphere-memory/

PS: If you want to GENERATE excessive memory use, and if your VM (guest OS) is Linux, then you can always use "memtester":

http://linux.die.net/man/8/memtester

Title: Power consumption measurement for WiFi Id: 8380906, Count: 111 Tags: Answers: 3 AcceptedAnswer: null Created: 2011-12-05 04:06:36.0 Body:

I want to measure how much battery is consumed while switching on WiFi, scanning and connecting to an AP. I tried doing it with the API for the battery level, but that API gives the battery level on a scale of 100, which is not granular enough to find the power consumed in turning on the WiFi and scanning only once.

Is there a way out to measure this?

Popularity: 12.0 Answer #8381357, count #1, created: 2011-12-05 05:28:34.0

You should read the value in /sys/class/power_supply/battery/batt_current.

That's the current consumption of the phone in mA. It's updated by the OS every minute or so.
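
A minimal sketch of reading that file from an app (the path, unit and update interval vary by device; many phones expose current_now, often in microamps, instead):

import java.io.BufferedReader;
import java.io.FileReader;

public final class BatteryCurrentReader {
    // Path as suggested above; other devices expose e.g.
    // /sys/class/power_supply/battery/current_now instead.
    private static final String BATT_CURRENT =
            "/sys/class/power_supply/battery/batt_current";

    // Returns the reported current draw, or -1 if the file is missing or unreadable.
    public static long readCurrent() {
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(BATT_CURRENT));
            String line = reader.readLine();
            return line == null ? -1 : Long.parseLong(line.trim());
        } catch (Exception e) {
            return -1;
        } finally {
            if (reader != null) {
                try { reader.close(); } catch (Exception ignored) {}
            }
        }
    }
}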

Answer #8381498, count #2, created: 2011-12-05 05:52:47.0

I don't think there is any such API to do this for WiFi. You'll have to use alternatives.

Answer #13223135, count #3, created: 2012-11-04 21:31:10.0

Check out PowerTutor. In my experience it is the best application out there to measure power consumption. It is not 100% accurate, but it is good enough for estimations. You can break the consumption down by 3G, CPU, WiFi, etc.

Title: GPS Windows Mobile 6 (using Microsoft.WindowsMobile.Samples.Location;) Id: 8523049, Count: 112 Tags: Answers: 1 AcceptedAnswer: 8526685 Created: 2011-12-15 16:08:49.0 Body:

Well guys, I am part of a team (I don't have the project yet; I am new).

They built an application using GPS; the problem is that it sometimes fails... why? They think the GPS fails because users have the device in "energy saving" mode; the device then hibernates after 5 minutes if they don't use it.

The GPS sometimes returns bad coordinates (for example, a coordinate showing the user is at "sea" or in "Japan"). I repeat, my colleagues think the problem is that the device is in "energy saving" mode. How can I change this configuration in C# while the application is running (and perhaps restore the old configuration when the application is closed)?

I am using this library.

using Microsoft.WindowsMobile.Samples.Location;

I can't check whether it works because right now I don't have a device and my computer doesn't have GPS. Do you have any idea how I can test the application? I am using

 `"Windows Mobile 6.0 SDK"` 
Popularity: 18.0 Answer #8526685, count #1, created: 2011-12-15 21:13:46.0

You can install networking functionality while using the Microsoft Device Emulator. This blog explains how to set up network functionality on Windows 7; follow the official documentation if you're not using Windows 7.

You can emulate GPS functionality using the FakeGPS program supplied by Microsoft. If you want anything close to real data, you'll need a text file containing raw NMEA to feed into FakeGPS. You can simply record the bytes passed through the GPS COM port to a file to generate an NMEA file. If FakeGPS doesn't like your file, remove the non-standard NMEA lines and try again.

You should do some research on Windows Mobile power management first to understand the problem. The quickest hack is to simply call SystemIdleTimerReset() more frequently than the battery idle timeout (use SystemParametersInfo() with SPI_GETBATTERYIDLETIMEOUT) to prevent the device from sleeping. This will decrease the battery life of the device! There are other, more elegant solutions available, such as using the Power Management API.

Title: C2DM Push chat application Id: 8552417, Count: 113 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-12-18 14:38:55.0 Body:

Currently I am trying to develop a simple chat application for Android. I want to create something that works like WhatsApp. Because it should be real-time, I think C2DM is the best way to send a notification when a user sends a message to another user.

Because I have read that C2DM is built for notifications only, and not for messages, I have to find a way to deliver those messages to the other contact.

Now, I have an application server that can send C2DM notifications. But what's the best and most energy-saving way to send and receive the text messages?

I have read about polling, but I don't think it's that energy-saving. I have read something about XMPP, but how can I combine that with C2DM?

Popularity: 14.0 Answer #8552731, count #1, created: 2011-12-18 15:28:42.0
  1. User A sends a message to User B; that is, it sends a message to your application server.
  2. Your app server receives a message from A to B. It sends a C2DM notification to B, telling it that there is new data.
  3. User B receives the C2DM notification of new data, connects to your app server and retrieves the message from User A.

This mechanism only pushes data, there is no polling.
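
A rough sketch of the receiving side of step 3 (the "from_user" extra and the server fetch are illustrative, not part of the C2DM API; the actual message is pulled from your own application server):

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.util.Log;

public class C2dmReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if ("com.google.android.c2dm.intent.RECEIVE".equals(intent.getAction())) {
            // The push payload is kept tiny on purpose; it only identifies the sender.
            String sender = intent.getStringExtra("from_user"); // hypothetical extra set by your server
            Log.d("C2DM", "New message pending from " + sender);
            // Kick off a background fetch (e.g. an IntentService) that pulls the
            // full chat message from your application server; long-running work
            // must not be done directly inside onReceive().
        }
    }
}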

Title: Power measurement for Nexus S Id: 8558964, Count: 114 Tags: Answers: 1 AcceptedAnswer: null Created: 2011-12-19 08:55:36.0 Body:

I am looking to measure the power consumed by an application running on a Nexus S.

I can read the current VOLTAGE value from the path /sys/class/power_supply/battery/voltage_now

However, I also wanted to measure the current, but I could not find a file which displays the current value, as on other phones where there is a file named current_now or battery_charge.

Is there any other way to measure power? Is there any way I can compile another kernel/ROM for this phone which can show me the current drawn from the battery? If yes, which kernel should I compile?

Popularity: 6.0 Answer #8621431, count #1, created: 2011-12-23 23:10:30.0

You could take a look at PowerTutor by the guys at UMich.
Here is an article by the IEEE about PowerTutor.
Although not a direct answer, this should assist you with your problem.

Title: android turn GPS On & Off Programmatically in 2.2? Id: 8600731, Count: 115 Tags: Answers: 3 AcceptedAnswer: 8601574 Created: 2011-12-22 07:39:56.0 Body:

I am using GPS in my application and I want to turn GPS on and off programmatically to save power. How can I do it? :(

This is the code for turning it off; I need the turn-on, please.

private void turnGPSOnOff() {
    String provider = Settings.Secure.getString(getContentResolver(),
            Settings.Secure.LOCATION_PROVIDERS_ALLOWED);
    if (!provider.contains("gps")) {
        final Intent poke = new Intent();
        poke.setClassName("com.android.settings",
                "com.android.settings.widget.SettingsAppWidgetProvider");
        poke.addCategory(Intent.CATEGORY_ALTERNATIVE);
        poke.setData(Uri.parse("3"));
        sendBroadcast(poke);
        //Toast.makeText(this, "Your GPS is Enabled", Toast.LENGTH_SHORT).show();
    }
}
Popularity: 63.0 Answer #8601574, count #1, created: 2011-12-22 09:08:59.0

You can turn on the gps using

LocationManager locationManager = (LocationManager)getSystemService(Context.LOCATION_SERVICE); LocationListener locationListener = new CTLocationListener(); locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 1.0f, locationListener); 

and turn it off using

locationManager.removeUpdates(locationListener); 

Or you might also find this other thread on GPS useful: Enable GPS programmatically like Tasker

Answer #8655582, count #2, created: 2011-12-28 11:54:26.0
private void turnGPSOn() {
    String provider = Settings.Secure.getString(getContentResolver(),
            Settings.Secure.LOCATION_PROVIDERS_ALLOWED);
    if (provider.contains("gps")) { // for turn on
        final Intent poke = new Intent();
        poke.setClassName("com.android.settings",
                "com.android.settings.widget.SettingsAppWidgetProvider");
        poke.addCategory(Intent.CATEGORY_ALTERNATIVE);
        poke.setData(Uri.parse("3"));
        sendBroadcast(poke);
        Toast.makeText(this, "Your GPS is Enabled", Toast.LENGTH_SHORT).show();
    }
}
Answer #14029546, count #3, created: 2012-12-25 09:52:14.0
try {
    dataMtd = ConnectivityManager.class.getDeclaredMethod("setMobileDataEnabled", boolean.class);
} catch (SecurityException e1) {
    e1.printStackTrace();
} catch (NoSuchMethodException e1) {
    e1.printStackTrace();
}
dataMtd.setAccessible(true);
try {
    dataMtd.invoke(conm, true);
    //gprDisable();
} catch (IllegalArgumentException e) {
    e.printStackTrace();
} catch (IllegalAccessException e) {
    e.printStackTrace();
} catch (InvocationTargetException e) {
    e.printStackTrace();
}

And if you pass false instead, you can disable mobile data. Hope this may help you.

Title: Is there any use case which keeps Early-Suspend for a long time? Id: 8738269, Count: 116 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-01-05 05:56:02.0 Body:

I know that during music playback an Android device can stay in the early-suspend state for a long time (LCD off but not going down to full suspend, since drivers are still working), but:

  • is there any other use case that keeps this state for a long time? (perhaps something like downloading data in the background?)
  • if so, could you suggest a specific APK that does this?

I am curious about this in terms of the power consumption of the device. If there are quite a few such cases, power consumption should be improved specifically for that state.

Popularity: 5.0 Answer #8878089, count #1, created: 2012-01-16 09:41:12.0

I am also curious to know this. Even when there are no active applications running in the background, there is always one wake lock or another being held, preventing SUSPEND... EARLY SUSPEND will stay forever, I guess.

Title: Decentralized Clustering library for Java Id: 8782922, Count: 117 Tags: Answers: 1 AcceptedAnswer: 11851184 Created: 2012-01-09 01:07:31.0 Body:

I am trying to develop a network-safe, decentralized, replication-based distributed system. I am looking for a Java library with the following requirements:

  1. The library should be able to initialize the n nodes in a decentralized fashion (no master or slave). It should be able to recover from network failure at start-up itself. For example: I try to initiate a network with 5 nodes, but only 3 get started.

  2. Once initialized, it should be able to detect node loss and notify the user, so the user can take some remedial steps on the application front and recover from it. I am not concerned about a new node, or a failed node, joining the cluster again, but if it supports that too, even better.

  3. It should allow P2P communication. If it can support both efficient P2P and multicast, that is very good.

  4. It should allow sending Runnable messages and serializable objects between nodes, as in Aleph. Aleph is a good one, but it does not support node failure/recovery.

Basically, I will be creating dynamic quorums of nodes based on the list of active nodes and replicating the objects across different quorums. My framework will allow users to talk to these quorums and access the objects. In case of node failure, I need to rebuild the quorum with the new list of active nodes. I want to concentrate on the quorum algorithm and not spend effort on the networking capability. Please suggest a suitable library for this purpose. If you know of any similar quorum solution, kindly refer to that too.

Popularity: 4.0 Answer #11851184, count #1, created: 2012-08-07 17:36:11.0

As no one answered, I am just adding the solution I used: JGroups. It is great for cluster-based multicasting.
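
A minimal sketch of a node built on JGroups (assuming a JGroups 3.x-style API; the cluster name is arbitrary), where the viewAccepted() callback is the natural hook for detecting node loss and rebuilding quorums:

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

public class QuorumNode {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();          // default UDP/multicast protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void viewAccepted(View view) {
                // Called on every membership change (node joins/leaves);
                // this is where quorum recalculation would hook in.
                System.out.println("Cluster view: " + view.getMembers());
            }

            @Override
            public void receive(Message msg) {
                System.out.println("From " + msg.getSrc() + ": " + msg.getObject());
            }
        });
        channel.connect("quorum-cluster");           // hypothetical cluster name
        channel.send(new Message(null, "hello"));    // null destination = send to all members
        Thread.sleep(5000);                          // let messages arrive in this toy example
        channel.close();
    }
}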

Title: Power consumption for various colors in android Id: 8937162, Count: 118 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-01-20 05:33:50.0 Body:

I wanted to measure the battery consumption of various colors in the range 0-255 on Android, and to do it through an application. Currently I am using the PowerManager to measure the initial battery level, then keeping the screen bright for, say, 10-20 minutes and checking the final battery level; the difference gives me the usage in %. But I am getting weird results, in that "white" uses the same power as "black" (both showing a drop of 4%). So I think my approach may be wrong. Can someone please suggest how to approach the problem in a correct way? Please help!

Popularity: 4.0 Answer #8937183, count #1, created: 2012-01-20 05:37:18.0

I don't know what device you are using, but usually what really consumes the battery is the backlight, while the color is negligible in terms of power consumption.

Answer #16111308, count #2, created: 2013-04-19 18:31:38.0

(disclaimer: I co-founded the company that built the below product)

Try Little Eye from Little Eye Labs, which lets you track an individual app's power consumption and breaks it down by hardware usage, including display, CPU and WiFi (and shortly GPS). Based on the pixel colors and the brightness levels used, the app's power consumption trend will vary, and Little Eye helps you visualize that quite easily. Note it's a desktop product (which connects to your phone) that you need to download and install from here.

Title: Is it possible to modify WiFi frames (layer 2 PDUs) to include new fields? Id: 8981462, Count: 119 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-01-24 02:51:01.0 Body:

I want to develop an app that makes use of the WiFi interface to establish cooperation amongst a subset of mobile devices, which will then allow me to exploit location information and achieve higher energy efficiency (cluster based communications).

For security reasons, I must append a digital signature (or a keyed hash) at the end of specific WiFi frames (e.g. when ARP protocol runs).

  • Is it possible to achieve this in Android OS?
  • Will I be able to update the WiFi protocol stack in Android?
  • Will it be feasible?
  • Any literature suggestions?

I'd be grateful for any directions.

Popularity: 5.0 Answer #10319533, count #1, created: 2012-04-25 16:10:44.0

Is it possible to achieve this in Android OS?

I think you would need some kind of raw sockets. For that, you can look at Raw Sockets on Android.

Will I be able to update the WiFi protocol stack in Android?

Android is open source, so you can try to modify it and load another Android firmware onto your phone. For example, there are custom firmware versions like the one you find at http://www.cyanogenmod.com/

Will it be feasible?

In my opinion it is possible, but very difficult. You can probably find a more feasible solution to your problem.

Any literature suggestions?

You can read this thread about how to download and edit the Android source code: http://groups.google.com/group/android-kernel/browse_thread/thread/6e428031c5e70417/8d99386a62f7d75e?pli=1

Good luck.

Title: Does Handler.sendMessageDelayed() work when phone goes to sleep? Id: 9029709, Count: 120 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-01-27 06:25:35.0 Body:

I am developing an Android application and I want to reduce its power consumption. The approach, I believe, is to put the phone into sleep mode whenever user activity stops for a certain threshold period. I have three questions regarding this.

  1. If I release the wakeLock and no other application is holding a wakeLock, after how much time will the phone go to sleep?

  2. I have multiple HandlerThreads running where I use the sendMessageDelayed() function. Would these messages get delivered even after the phone goes into sleep mode?

  3. Does putting the phone into airplane mode save more power than just putting the phone to sleep? If yes, why? The only difference between those two modes is the use of the cellular network.

Popularity: 8.0 Answer #9031794, count #1, created: 2012-01-27 10:20:01.0

If I release the wakeLock and no other application is holding a wakeLock, after how much time will the phone go to sleep?

There really is no definitive answer, but, from personal experience, I'd say it is likely that it will happen within 30 seconds to 1 minute.

I have multiple HandlerThreads running where I use the sendMessageDelayed() function. Would these messages get delivered even after the phone goes into sleep mode?

I really wouldn't count on it because I've never seen anything that says it will wake up the device to send said Message. You can always test it, but I wouldn't trust it to work because the documentation does not claim that it will.

Does putting the phone into aeroplane mode save more power than just putting the phone to sleep? If yes, then why, given that the only difference between those two modes is the use of the cellular network?

If you put it into sleep mode AND airplane mode, then you will save more battery than JUST sleep mode.

The reason for that is that even with the CPU pretty much asleep, the phone must keep a constant connection with the cellular network in order to know if you get a text or phone call. To do this, it must use the battery to constantly keep the antenna turned on. If you put it into airplane mode, it would basically turn the antenna off, and then the phone would not be using battery for that function.

Title: Monitoring per Application power usage in android Id: 9062404, Count: 121 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-01-30 10:25:12.0 Body:

Is there a way to get the average current/power consumption of a particular application in Android? I could only find the private APIs PowerProfile.java and PowerUsageSummary.java, which give some information, but I am hitting a dead end. Can someone please help?

Popularity: 11.0 Answer #9064081, count #1, created: 2012-01-30 12:45:25.0

Is there a way to get the average current/power consumption of a particular application in Android?

No, because applications do not consume current/power. Hardware does. If six applications are using WiFi, it is very difficult to "assign blame" for the WiFi power consumption, for example.

Now, even getting hardware power information is difficult in Android, as there are no public APIs for it, and most hardware is not instrumented particularly well to indicate what is consuming power. The Qualcomm MDP has great instrumentation for this, along with Trepn software to help you collect it, but it is rather expensive.

Title: Need help in debugging my android-java code Id: 9121591, Count: 122 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-02-02 23:27:52.0 Body:

Log cat shows "Runtime exception - cant create handler inside thread that has not called looper.prepare??I want to send location update of my phone to some other phone via sms after a fixed interval of time.Please help . Suggest ways to save power also

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
    et1 = (EditText) findViewById(R.id.editText1);
    et2 = (EditText) findViewById(R.id.editText2);
    b1 = (Button) findViewById(R.id.button1);
    t1 = new Timer();
    t2 = new Timer();
    lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
    listener = new LocationListener() {
        @Override
        public void onStatusChanged(String provider, int status, Bundle extras) {
        }

        @Override
        public void onProviderEnabled(String provider) {
        }

        @Override
        public void onProviderDisabled(String provider) {
        }

        @Override
        public void onLocationChanged(Location location) {
            SmsManager sm = SmsManager.getDefault();
            String message = String.format(
                    "New Location \n Longitude: %1$s \n Latitude: %2$s",
                    location.getLongitude(), location.getLatitude());
            String number = "5556";
            sm.sendTextMessage(number, null, message, null, null);
        }
    };

Schedule request update after every fixed interval

t1.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        lm.requestSingleUpdate(LocationManager.GPS_PROVIDER, listener, null);
    }
}, 0, 300000);
}
}
Popularity: 5.0 Answer #9121670, count #1, created: 2012-02-02 23:34:03.0

That error basically means "You are trying to do something in a background thread that you are not allowed to do in a background thread.

Your listener will handle the asynchronous nature of the location request. You do not need to thread it.

If you DO need things threaded, read up on AsyncTask.

Answer #9121738, count #2, created: 2012-02-02 23:42:06.0

It’s not clear from the TimerTask documentation, but I suspect each task runs on its own thread. Also, the documentation for the LocationManager.requestSingleUpdate call you’re using says “If looper is null then the callbacks will be called on the main thread”, but I suspect this is wrong, because it disagrees with the one with the alternative signature, which says “If looper is null then the callbacks will be called on the current thread”. If the latter is correct, that would explain your problem, because you are calling requestSingleUpdate within your TimerTask thread, which has no Looper.

Perhaps use the result from getMainLooper instead of null for the last arg to requestSingleUpdate.
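For illustration, a minimal sketch of that suggestion, reusing the names from the question (t1, lm, listener); this is an assumption about how the fix could look, not tested code:

t1.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        // Hand the callback to the main thread's Looper (android.os.Looper)
        // so this Timer thread does not need to call Looper.prepare() itself.
        lm.requestSingleUpdate(LocationManager.GPS_PROVIDER, listener,
                Looper.getMainLooper());
    }
}, 0, 300000);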

Title: Android Battery Application Id: 9187303, Count: 123 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-02-08 03:03:14.0 Body:

Hi, good day. I am currently working on my thesis, and one of my objectives is to show the power consumption of a running application. Here's the scenario: for example, while I'm playing Angry Birds, my thesis app should show the power consumption ON THE SAME SCREEN (while PLAYING the said application). Is that possible? Can anyone help me with how to develop such an application? If it's not possible, can you please give me a link that explains why? My professor will only believe my claim if I have an answer from a credible and reliable source.

I hope you guys can help me with this. I'm a newbie in android programming and I really need your help so bad.. Thank you so much XoXo :)

Popularity: 0.0 Answer #9187338, count #1, created: 2012-02-08 03:10:02.0

Here's a similar question already answered: Is it possible to run (and view) two apps simultaneously?

It doesn't appear so. This third-party software is the closest I've seen to something like that, but it will likely affect your power consumption data.

Title: Notification when modified file is closed Id: 9332516, Count: 124 Tags: <.net> Answers: null AcceptedAnswer: null Created: 2012-02-17 17:07:55.0 Body:

We are looking for a reliable way to get notified when a file has been modified by another application.

The .NET FileSystemWatcher class and the ReadDirectoryChangesW() function both suffer from the problem that they generate the event even if the other application has not finished writing yet.

Under Linux, inotify provides an IN_CLOSE_WRITE event - the file was closed after being opened for writing.

Is there any way to get that information on windows?

[Edit for clarification:] I'm primarily interested in when the writer finished its work, not in exclusive access / locking issues. And I want to avoid polling / repeated try-and-catch-exception style solutions, as those are generally bad programming style, and event-based programming is better for power saving.

Thanks, Markus

Popularity: 5.0 Title: Constantly monitor accelerometer sensor on Android Id: 9335276, Count: 125 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-02-17 20:56:39.0 Body:

I need to have a service which constantly monitors the accelerometer sensor. I know that in the past it was not possible to do so without preventing the display from turning off, which is of course awful for battery life.

My question is - is there a good way to implement power efficient accelerometer monitor which would constantly run in the background?

Popularity: 14.0 Answer #9335328, count #1, created: 2012-02-17 21:01:26.0

Have your Service implement SensorEventListener, then register your Service with the SensorManager.

I checked the docs about sensors for you and I believe the header for SensorManager answers your questions. Every issue you have brought up was addressed in the second paragraph.
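A minimal sketch of that idea (the class name below is illustrative, not from the documentation):

public class AccelerometerService extends Service implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    public void onCreate() {
        super.onCreate();
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // SENSOR_DELAY_NORMAL is the slowest (and cheapest) built-in rate.
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onDestroy() {
        // Unregister so the sensor can be powered down when nobody is listening.
        sensorManager.unregisterListener(this);
        super.onDestroy();
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Handle accelerometer samples here.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}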

Answer #15314386, count #2, created: 2013-03-09 18:32:20.0

You cannot reliably use the accelerometer when the screen is off across all phones. It works for some phones, but not others. See https://code.google.com/p/android/issues/detail?id=11028 for an open bug report.

Also, this question is a duplicate of Android Workaround for non-working sensors when screen is off

Title: GPS Tracking App (strategy) Id: 9348399, Count: 126 Tags: Answers: 1 AcceptedAnswer: 9348793 Created: 2012-02-19 10:24:59.0 Body:

I am currently working on a GPS tracking app (only the distance, not storing each value or displaying in a MapView) for a car driver's logbook. Because the phone is docked, I do not care about power consumption. My implementation so far is an activity that calls a GPS helper class which gets the long/lat. The activity itself calculates the driven distance, displays it for the user and starts a notification bar item that also displays the distance. To keep the activity active and not killed by the OS, I am using a PARTIAL WakeLock for the activity.

My problem is that this is not working correctly, because my app seems to be killed by the OS in spite of the WakeLock. I think it is killed because when I click on the notification bar item (after 15-30 min. for example) to see the driven distance in my running activity, the activity is shown as if it were starting a new GPS track instead of displaying the driven distance from the previously started track. The WAKELOCK permission is correctly set in the manifest.

My question now is whether this construct can work, or is there a better way to do this?

Popularity: 8.0 Answer #9348793, count #1, created: 2012-02-19 11:35:55.0

Your problem may be with the Intent you are launching when you click on the notification. This intent most likely launches a brand new Activity rather than returning the old activity to the foreground.

This link may help you to do what you want:

How to bring Android existing activity to front via notification
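For reference, a hedged sketch of what that usually boils down to (TrackingActivity is an illustrative name, and it assumes the Activity is declared with android:launchMode="singleTop" or "singleTask" in the manifest):

// Build the notification's PendingIntent so that tapping it brings the
// already-running Activity back to the foreground instead of creating a
// fresh instance (which would look like starting a brand new track).
Intent intent = new Intent(this, TrackingActivity.class);
intent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | Intent.FLAG_ACTIVITY_SINGLE_TOP);
PendingIntent pi = PendingIntent.getActivity(this, 0, intent,
        PendingIntent.FLAG_UPDATE_CURRENT);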

Title: wifi power management Id: 9503285, Count: 127 Tags: Answers: 1 AcceptedAnswer: 9961263 Created: 2012-02-29 16:56:13.0 Body:

I want to do some power management research regarding the WiFi interface of an Android phone, but I have found that WiFi on Android is really efficient. I used an AT&T Nexus S phone. According to my measurements with a power monitor, there is no tail power (timeout): as long as there is no data transfer, the WiFi goes idle. The transition power (from idle to transfer mode) is nearly zero and there is no latency from idle to transfer mode. This is quite different from the 3G interface. And even if I use an API like WifiManager.disconnect() to disconnect the WiFi from the access point, the power saving compared to idle mode is only about 20 mW, while the transition back (reconnect) latency is very high (about 10 seconds). So it looks like nowadays the WiFi interface is really power-efficient and there is no room to do much system-level power management. Am I right? :>

Popularity: 8.0 Answer #9961263, count #1, created: 2012-04-01 01:43:35.0

Yes. Anyway, I have found that the WiFi in the Nexus S is very efficient; it's nearly energy-proportional to the sending and receiving speed. This is very different from 3G: with 3G there is an obvious transferring state, and the power doesn't change much in that state even when the speed changes. Hope this information helps :>

Title: android relaunch app cause crash while screen blank to save power Id: 9633806, Count: 128 Tags: Answers: null AcceptedAnswer: null Created: 2012-03-09 12:06:58.0 Body:

I have an app that plays sound with MediaPlayer and uses MediaController for the UI. I want to show the MediaController's UI after MediaPlayer has prepared the media, but it always crashes when the screen goes blank in power saving mode. I found that Android restarts my app and shows the UI while the screen is blank, and it throws "Unable to add window -- token null is not valid; is your activity running?". Do you have any suggestions for me?

protected void onStart() {
    super.onStart();
    Log.d(logTag, "**** Into onStart() ****");
    // Load history.
    if (newMediaTarget >= 0 && newMediaPlayPosition >= 0) {
        Log.d(logTag, "The media target has changed, Prepare media index " + playingMediaIndex);
        playingMediaIndex = newMediaTarget;
        playerStartPosition = newMediaPlayPosition;
        preparePlayer(playingMediaIndex);
    }
}

private void preparePlayer(final int index) {
    if (!mp3Files[index].isExist()) {
        // from internet.
        FileSysManager.downloadFileFromRemote(index, FileSysManager.MEDIA_FILE);
        return;
    }
    Log.d(logTag, "Found it!");
    if (mediaPlayer == null) {
        mediaPlayer = MediaPlayer.create(getApplicationContext(), Uri.fromFile(file));
        mediaPlayer.setOnPreparedListener(mediaOnPreparedListener);
    }
}

class MediaPlayerOnPreparedListener implements OnPreparedListener {
    public void onPrepared(MediaPlayer mp) {
        Log.d(logTag, "Media player prepared, bind control panel");
        // The mediaController has create on onCreate()
        mediaController.setAnchorView(findViewById(R.id.rootLayout));
        mediaController.setEnabled(true);
        //if(!isFinishing())mediaController.show();
        Log.d(logTag, "Call show control panel");
        showMediaPlayerController();
    }
}

As you can see, if the mp3 file does not exist, the app will download it from a remote site and then wait for the download to finish, but the device always enters power saving mode before the download finishes, and then the download thread and the downloaded data are gone. That's the first problem.

Sometimes the media file download finishes before the screen goes blank; then the device enters power saving mode and the app runs again from onCreate() up to showMediaPlayerController(), where it tries to bind a control panel for the media player and throws the exception. I need help, thank you.

Popularity: 4.0 Title: Instruments doesn't show Energy Usage Level: it is empty Id: 9646238, Count: 129 Tags: Answers: 3 AcceptedAnswer: null Created: 2012-03-10 11:59:03.0 Body:

I have followed the Apple steps to get my app energy usage level from my device using Energy Diagnostics Instruments (https://developer.apple.com/library/ios/#documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/Built-InInstruments/Built-InInstruments.html#//apple_ref/doc/uid/TP40004652-CH6-SW63).

I have tried with my own app and with several apps from the AppStore and I always get all the info expected (including time flags) except the Energy Usage Info. Simply, the Energy Usage Bar doesn't show any graph (it does show the running time) and the Energy Consumption table (detail view) is always blank.

I am using an iPodTouch4 and XCode 4.2. What could be the problem? Do I need to do anything apart from Apple steps?

Thanks in advance.

Popularity: 14.0 Answer #9660333, count #1, created: 2012-03-12 00:25:55.0

I'm very new to the iOS Energy Diagnostics myself, so I may be wrong here...

The only time I've gotten an empty Energy Usage is when the device is connected to power. Unplug the device and have it log for a while, then connect it to Instruments and reimport your logs.

Answer #12749045, count #2, created: 2012-10-05 15:07:38.0

You have to use View > Snap Track to Fit to see the energy usage of the app. Then go to where the flag says "on battery" and you will see bars in the energy usage statistics screen.

I'm also new to this and therefore do not really know how to interpret the data. If anyone could elaborate on this, the answer would be better :)

Answer #13423014, count #3, created: 2012-11-16 19:20:01.0

I've found Energy Diagnostics to have lackluster accuracy, so my company made Powergremlin which is simple, but much more accurate in our tests. It's open source, so just download & build it in Xcode.

Title: Profiling Power consumption on ARM for C program Id: 9653480, Count: 130 Tags: Answers: 2 AcceptedAnswer: 9655839 Created: 2012-03-11 08:20:08.0 Body:

I have multiple C programs, each implementing the same piece of functionality. I want to evaluate/calculate which of these has a lower power consumption (on ARM). Is there some tool (simulator) with which I can simulate on a desktop, get a figure for the power consumed, and compare it across the programs?

Based on this I will decide which of these apps I will finally put on the ARM.

Popularity: 14.0 Answer #9653539, count #1, created: 2012-03-11 08:31:45.0

This tool looks quite promising. It is part of the ARM RVDS 4.0 Pro.

It does non-intrusive performance profiling. It is proprietary though, so it may be expensive. But there is a trial version too, which gives you about a month of free use.

If you are using gcc-arm, you can also try the GNU Profiler.

Answer #9655839, count #2, created: 2012-03-11 14:35:25.0

That is not something you can simply model and run; you would have to know the exact core, the gate switching activity, etc., and then apply that to the cell library, and so on. If you work with or for the company making the chip, ask the silicon team; they might have a tool for that. Otherwise you have to measure power differences on a PC board running the code on real chips. The ARM RTL and the cell library properties are not available to the general public, only to folks who have paid for those items.

Title: Prevent Android USB Host (OTG) from suspend Id: 9767052, Count: 131 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-03-19 08:35:22.0 Body:

I'm working on a USB gadget which is intended to be used with USB Host (OTG) enabled Android devices. My gadget consumes a fair amount of power (~30-40mA) and is completely powered by the host. I've been able to power and control it with my Motorola Xoom (Android version 3.2) but I got one big problem:

Every time the screen goes off, the USB host sends a suspend command, which requires reducing power consumption to < 2.5 mA! When the screen goes/is on, USB resumes to normal power.

Since I absolutely need to have 'full power' all the time, please help me and post some hints on how I could prevent the USB host from going into suspend mode. I tried a full wake lock, but it didn't help :(

Thank you in advance, Sebastian

Popularity: 8.0 Answer #9866991, count #1, created: 2012-03-26 05:23:26.0

Use a second 5v power source; just make sure to tie the negative to the USB.

Title: Bluetooth power consumption on Android Id: 9823026, Count: 132 Tags: Answers: null AcceptedAnswer: null Created: 2012-03-22 13:15:32.0 Body:

This is my first question and I think it will be hard to answer :)

My question is

Does anyone know how much Bluetooth on Android consumes, in mAh?

I know it depends on the device, but I need an approximate value without much detail.

Thank you very much to all

Popularity: 6.0 Title: Android accelerometer, sensor usage and power consumption Id: 9863131, Count: 133 Tags: Answers: 1 AcceptedAnswer: 9863347 Created: 2012-03-25 19:09:55.0 Body:

I have a quick question about the accelerometer in Android devices. Is it always on/active? Given that accelerometer is used to detect the orientation of the device, either landscape or portrait.

In the official documentation (SensorManager) it states that sensors should be turned off to save power. But I wonder if this only applies to others sensors like magnetic field sensors, gyroscope, light sensor and so on.

I need to make a case for power conservation and I don't want to make the mistake of saying that the accelerometer can at times be disabled, and instead use it for the purpose of disabling other sensors (in compass features of the application).

Or is the battery consumption by an accelerometer only related to an app being registered for receiving the data, while simply being "on" or enabled is not relevant since it always is?

Thanks for any clarification!

Popularity: 24.0 Answer #9863347, count #1, created: 2012-03-25 19:38:04.0

Or is the battery consumption by an accelerometer only related to an app being registered for receiving the data, while simply being "on" or enabled is not relevant since it always is?

That's correct.

The power consumption results from your app running and registered for sensor events. This keeps your app running all the time, keeps it consuming CPU, and potentially can keep the device from sleeping.

As far as I know, there's no way to shut down the sensors. Now, that is not to say that the device does not smartly shut down the sensors if there's nothing listening to them. I don't know that, but it seems likely. Regardless, again, the trigger is listening to them, so I don't think it makes a difference for your question.

Title: Is there any power mode selection feature for nvidia gpu? Id: 9870605, Count: 134 Tags: Answers: 1 AcceptedAnswer: 9871203 Created: 2012-03-26 10:38:08.0 Body:

Is there any power mode selection feature for NVIDIA GPUs? For example, normal, high performance, or power save mode? If so, is it possible to select a power mode when I compile my CUDA program with nvcc? And is it possible to check the current power mode status of my GPU?

Actually I could not find any clue about this, though I have searched the web for quite a while.

Popularity: 9.0 Answer #9871203, count #1, created: 2012-03-26 11:23:49.0

Depending on your GPU you may be able to use nvidia-smi or NVML (check out the documentation for more information) to read the current power state. The GPU will dynamically change its power state to conserve power when idle and to provide performance when under load.

As a user it is not possible to set the power state of the GPU - the Tesla product line does have power-capping for the server products but that's not under user control obviously.

Title: SetSystemPowerState problems Id: 9880126, Count: 135 Tags: Answers: null AcceptedAnswer: null Created: 2012-03-26 21:26:54.0 Body:

I am using SetSystemPowerState to hibernate or put the computer to sleep.

I successfully give myself the SeShutdownPrivilege privilege and everything is good and dandy.

If I do SetSystemPowerState(false, true) the computer successfully hibernates.

If I do SetSystemPowerState(true, Kill); the computer enters the "power save mode" or however it is called (s1 ???). The monitor goes black but the computer doesn't suspend (sleep) . What's more, after calling this once, the computer doesn't sleep (if selecting the option manually from the start menu).

Normally there is no problem putting the computer to sleep... How could this be solved?

New info:

I've also experimented with SetSuspendState. It causes the same result even with the simplest of programs (a window with a button).

Hibernation works well in both cases (from the prompt AND programmatically).

Solved. It seems this was a well-known problem with my graphics card's driver: it was not letting the computer fall asleep for various reasons. Updating it seems to have fixed the issue.

Popularity: 5.0 Title: What units of power/energy consumption have Energy Levels in the Energy Instrument for an iOS app? Id: 9917221, Count: 136 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-03-28 23:42:11.0 Body:

I'm actually measuring the energy consumption of my iOS application through the Energy instrument. I want to know the measure (e.g., in Joules) of the energy levels given by the Energy Instrument for an iOS app. Is there any relationship between the common energy consumption unit (Joules) and those energy levels? Thanks in advance for your response!

Popularity: 4.0 Answer #13423082, count #1, created: 2012-11-16 19:25:13.0

Energy Diagnostics reports power consumption as a unitless number between 0 and 20; we call them "electricities" at my office. Powergremlin gives you some insight into the actual numbers that make up said "electricity" units.

Title: AIR for iOS: Power saving Id: 9924255, Count: 137 Tags: Answers: 3 AcceptedAnswer: 9927446 Created: 2012-03-29 11:16:31.0 Body:

What coding tricks, compilation flags, and software-architecture considerations can be applied in order to keep power consumption low in an AIR for iOS application (or to reduce power consumption in an existing application that burns too much battery)?

Popularity: 11.0 Answer #9924728, count #1, created: 2012-03-29 11:47:58.0

Generally, high power consumption can be the result of:

  • intensive network usage
  • no sleep mode for display while app idle
  • un-needed location services usage
  • continuously high CPU usage

Regarding (flex/flash) AIR I would suggest that:

First, use the Flex profiler + task manager and monitor CPU and memory usage. Try to reduce them as much as possible. Once these are low on Windows/Mac, they should (theoretically) go lower on mobile devices too.

Next step would be to use a network monitor and reduce the amount and size of the network (webservice) calls. Try to identify unneeded network activity and eliminate it.

Try to detect any idle state of the app (possible in flex, not sure about flash) and maybe put the whole app in an idle mode (if you have fireworks animation running then just call stop())

Also, I am not sure about this, but using Stage3D (now available with AIR 3.2, also for mobile) for complex animations will surely reduce CPU and use more GPU. It may reduce execution time since hardware acceleration is there, so power consumption may be lower.

If I am wrong about something please comment/downvote (as you like) but this is my personal impression.

Update 1

As prompted in the comments, there is not a 100% link between CPU usage on a desktop and on a mobile device, but "theoretically" at the low level we should see at least the same CPU usage trend.

Answer #9927446, count #2, created: 2012-03-29 14:30:03.0

One of the biggest things you can do is adjust the framerate based off of the app state.

You can do this by adding handlers inside your App.mxml

<s:ViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" activate="activate()" deactivate="close()" /> 

Inside your activate and close methods

// activate
FlexGlobals.topLevelApplication.frameRate = 24;

// deactivate
FlexGlobals.topLevelApplication.frameRate = 2;

You can also play around with this number depending on what your app is currently doing. If you're just displaying text try lowering your fps. This should give you the most bang for your buck power savings.

Answer #11307132, count #3, created: 2012-07-03 08:27:00.0

My tips:

  • Profile your App in a first step with the profiler from the Flash Builder
  • If you have a Mac, profile your app with Instruments from XCode

And important:

behaviors of Simulator, IPA-Interpreter packages and IPA-Test builds are different.

Simulator - pro forma optimizations

IPA-Interpreter - Get a feeling of performance

IPA-Test - "real" performance behavior

And finally, test the AppStore build; it is the fastest (in terms of performance) package mode. Additionally, we saw that all these modes can vary.

Title: How to caculate power consumption of android app? Id: 10074254, Count: 138 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-04-09 13:54:05.0 Body:

Is there any formula, API, or method for this? I want to test the power consumption of different apps.

Popularity: 19.0 Answer #10074540, count #1, created: 2012-04-09 14:16:00.0

Using adb tools you can view power consumption of each running app

adb shell dumpsys cpuinfo 

sample output

Load: 1.12 / 1.07 / 1.01
CPU usage from 11344ms to 1613ms ago:
  10% 122/system_server: 5.8% user + 4.5% kernel / faults: 989 minor
  0% 233/com.htc.android.wallpaper: 0% user + 0% kernel / faults: 910 minor
  0.8% 271/com.htc.launcher: 0.8% user + 0% kernel / faults: 832 minor
  0% 40/panel_on/0: 0% user + 0% kernel
  0% 8/suspend: 0% user + 0% kernel
  0% 54/synaptics_wq: 0% user + 0% kernel
  0.2% 57/w1_bus_master1: 0% user + 0.2% kernel
  0% 253/com.android.phone: 0% user + 0% kernel / faults: 3 minor
  0% 13/kondemand/0: 0% user + 0% kernel
  0% 56/curcial_wq: 0% user + 0% kernel
  0% 2879/com.htc.bg: 0% user + 0% kernel / faults: 8 minor
  0% 2904/dhd_dpc: 0% user + 0% kernel
  0% 2906/com.google.android.apps.maps:NetworkLocationService: 0% user + 0% kern

using

adb shell dumpsys batteryinfo 

you can view each app's battery usage and power consumption.

To configure the adb tools, see http://developer.android.com/guide/developing/tools/adb.html

Title: Measure power consumption related to particular app Id: 10138087, Count: 139 Tags: Answers: 1 AcceptedAnswer: 10138125 Created: 2012-04-13 09:06:47.0 Body:

As a part of application tests, I need to somehow measure how much my app is draining the battery.

In Android there is a chart in the Battery Settings where I can see the percentage consumption, as well as CPU time or wake time, but problems arise because my app uses the accelerometer continuously while running.

The accelerometer also has the strongest element of standby usage, so I would like to estimate its particular usage somehow.

Does accelerometer battery usage belong to Android System, or Android OS, or neither of them?

How can accelerometer battery usage be determined accurately?

Note: I have been thinking about measuring actual consumption using a multimeter and electrodes attached between the battery and the device's power contacts, but I am scared of breaking my phone.

Popularity: 23.0 Answer #10138125, count #1, created: 2012-04-13 09:10:22.0

taken from How to caculate power consumption of android app? :

Using adb tools you can view power consumption of each running app

adb shell dumpsys cpuinfo

sample output

Load: 1.12 / 1.07 / 1.01
CPU usage from 11344ms to 1613ms ago:
  10% 122/system_server: 5.8% user + 4.5% kernel / faults: 989 minor
  0% 233/com.htc.android.wallpaper: 0% user + 0% kernel / faults: 910 minor
  0.8% 271/com.htc.launcher: 0.8% user + 0% kernel / faults: 832 minor
  0% 40/panel_on/0: 0% user + 0% kernel
  0% 8/suspend: 0% user + 0% kernel
  0% 54/synaptics_wq: 0% user + 0% kernel
  0.2% 57/w1_bus_master1: 0% user + 0.2% kernel
  0% 253/com.android.phone: 0% user + 0% kernel / faults: 3 minor
  0% 13/kondemand/0: 0% user + 0% kernel
  0% 56/curcial_wq: 0% user + 0% kernel
  0% 2879/com.htc.bg: 0% user + 0% kernel / faults: 8 minor
  0% 2904/dhd_dpc: 0% user + 0% kernel
  0% 2906/com.google.android.apps.maps:NetworkLocationService: 0% user + 0% kern

using

adb shell dumpsys batteryinfo

Title: Most power efficient location tracking implementation? Intent/BroadcastReciever/LocationListener/Service/IntentService/AlarmManager? Id: 10173029, Count: 140 Tags: Answers: 2 AcceptedAnswer: 10173170 Created: 2012-04-16 11:10:53.0 Body:

I have been through solutions and found that there are a lot of ways to implement a location tracking logic.

  1. Intent+BroadcastReceiver+LocationListener
  2. Intent+IntentService+AlarmManager
  3. LocationListener
  4. other techniques or different combinations of the above...

I am trying to find out what would be the best practice (the most power-efficient way) to achieve this...

I have a library class MyLocationClass.java which has two methods: 1. startTracking() - start sending the user's location after time T and only if the user has moved distance X; 2. stopTracking() - stop sending location updates.

A simple answer seems to be a LocationListener because of its built-in time-passed/distance-moved features, which provide a better user experience, but then...

I don't want the GPS to be on all the time; in fact it should only be switched on when T and X are crossed. Would using a Service/IntentService and/or an AlarmManager timer be a better solution? Could BroadcastReceivers with LocationListeners prove to be the better solution?

Please suggest. Thnx

Popularity: 12.0 Answer #10173170, count #1, created: 2012-04-16 11:23:26.0

I think you can use a LocationListener implemented in a service. Start listening to GPS when the service starts and remove the GPS listener when the service stops. Start this service whenever you want to listen to GPS.

Also please take a look at the requestLocationUpdates method. The minTime and minDistance parameters of this function are explained as follows:

minTime the minimum time interval for notifications, in milliseconds. This field is only used as a hint to conserve power, and actual time between location updates may be greater or lesser than this value.

minDistance the minimum distance interval for notifications, in meters
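A minimal sketch of that suggestion inside a Service (the time and distance values are illustrative, and locationListener is an assumed field):

LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
// minTime / minDistance are only hints, but they let the framework
// conserve power between updates.
lm.requestLocationUpdates(
        LocationManager.GPS_PROVIDER,
        60000L,   // minTime: roughly one update per minute
        100f,     // minDistance: only after moving ~100 m
        locationListener);

// ... and when the service stops:
// lm.removeUpdates(locationListener);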

Answer #10173969, count #2, created: 2012-04-16 12:22:09.0
 > Most power efficient location tracking implementation? 

The most power-efficient (and inaccurate) solution is to not use GPS at all, because the GPS receiver (at least on my Samsung smartphone) increases power consumption significantly.

If your Android app is running on a cell phone that is logged in to its provider, there are cell-id broadcasts that tell the phone that the cell position has changed. Your app can receive these too.

The cellphone receiver is already running and the phone has to wake up to stay connected anyway, so there is minimal energy overhead.

However, you must find a way to translate the cell id back to a location.

The Android app Llama uses this cell-id broadcast to trigger location-based actions. It has a learning mode where I can say: "Now I am at my job for 7 hours." In the background it collects all cell ids and marks them as "job".
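A hedged sketch of listening for those cell changes (this assumes the ACCESS_COARSE_LOCATION permission; translating the cell id to coordinates is left out because it needs an external database):

TelephonyManager tm = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
tm.listen(new PhoneStateListener() {
    @Override
    public void onCellLocationChanged(CellLocation location) {
        // The phone has moved to a different cell; translate the cell id
        // to a coarse position here (e.g. via a cell-id database).
    }
}, PhoneStateListener.LISTEN_CELL_LOCATION);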

Title: How quickly will Android go to sleep if I stop background service Id: 10241426, Count: 141 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-04-20 06:38:39.0 Body:

How quickly does Android go to sleep after wakelock is released?

I am working on an app which uses the accelerometer in the background, and because of its high power consumption I am investigating how best to allow the device to go to sleep, at least for every other minute or so.

I cannot let the device go to sleep for as long as an hour, or even 10 minutes -- that would be only possible in the case where I would develop some kind of intelligent scheduler based its predictions of user behavior. I do not have enough time for this.

Thus I am asking: when I let my app go to sleep and actually unregister the sensor update listener, would the device even go to sleep in such a small time interval? (I would think there would be some timeout to avoid repeatedly running all the work that is necessary when the device comes out of standby mode.)

I am using the slowest accelerometer mode: SENSOR_DELAY_NORMAL. Right now, my service holds a wakelock all the time. The battery only lasts about 12 hours of standby this way; getting twice as much would be sufficient.

Popularity: 4.0 Answer #10245330, count #1, created: 2012-04-20 11:27:58.0

Does anybody know how quickly Android goes to sleep after the wakelock is released?

It could be in under a millisecond.

Title: iOS Testing with Instruments - Best Practices Id: 10246188, Count: 142 Tags: Answers: null AcceptedAnswer: null Created: 2012-04-20 12:23:27.0 Body:

Since I'm developing iOS based applications, I'm also spending much time on getting started with testing applications and debugging performance issues.

After reading the Instruments user guide, I'm now able to detect memory leaks, record the current memory size and CPU usage and much more, which helps a lot!

Now, in order to improve my testing strategy, I'm looking for benchmark or standard values for the different instruments (such as CPU usage, energy consumption, ...). You know what I mean? For example: I have a CPU usage of 80% over a period of 10 seconds. Is that okay or should I think about performance optimization? Sure, in the case of CPU usage it depends on what the app does over that period of time (e.g. reloading some data or something like that), but is there any rule of thumb or best practice for that?

I already did some research on the internet and only found a video of the iOS tech talk in London from Michael Jurewitz. From that talk, I noted the following statements that are probably useful for me:

  • Activity Monitor: Can only be used to compare your app's resource usage to other apps
  • Allocations: A constantly growing allocations chart is obviously a bad sign for memory leaks. Allocations does not show the real memory size the app uses
  • VM Tracker: Show overall memory size; Rule of thumb: more than 100 MB dirty size of an app is far too much
  • ...

Now I need some "rules of thumb", especially for the CPU Monitor (where is the boundary between good case and bad case?) and Energy Consumption (Level..?).

Do you have any tips for me or do you know some articles I can read through?

Thanks a lot!

Philipp

Popularity: 4.0 Title: How to detect a vessel moving or stationary using accelerometer? Id: 10280460, Count: 143 Tags: Answers: 4 AcceptedAnswer: null Created: 2012-04-23 12:22:55.0 Body:

Anyone has any experience using accelerometer on a ship/vessel to detect whether the vessel is moving or stationary?

The difficult part is that I took some samples at 6Hz and found that all X/Y/Z axes show the accelerations going positive and negative. The pattern is exactly the same when the vessel is moving and stationary. I suppose these are caused by ocean waves.

Any ideas?


Thank you all for the suggestions.

The reason that I have to use an accelerometer to detect movements on a vessel is that my device (a custom embedded system) goes into sleep/hibernation most of the time to save power and it only wakes up to perform some tasks after it detected movements. GPS consumes too much power and therefore it can't be powered on all the time.

Actually I do not need to measure the velocity or position, all I need is to differentiate moving and stationary at sea. Accelerometer works fine on land, but not sea probably due to the random ocean waves.

I have also considered using a gyroscope, however, it is too expensive.

Any suggestions?

Popularity: 10.0 Answer #10280583, count #1, created: 2012-04-23 12:30:47.0

You can use GPS receiver to check whether the vessel is moving or stationary. Using GPS API, you can get the position of the vessel on earth, moving direction, and speed of the vessel.

A good tutorial on GPS tracking is here: http://www.vogella.com/articles/AndroidLocationAPI/article.html

Answer #10281019, count #2, created: 2012-04-23 12:58:25.0

Vishnu Haridas is right, GPS is a better way...

In case you need your app on a device without GPS (such as an iPod touch): the accelerometer is not the right hardware to use on a ship. As its name suggests, it registers variations of velocity, not velocity itself. Consider a ship constantly moving forward at 10 miles/hour: its variation of velocity is equal to 0, so the accelerometer could give you just 0 (apart from waves and engine vibrations...).

Answer #10281254, count #3, created: 2012-04-23 13:14:28.0

I wrote an application to log flight times using the GPS. Events are detected based on the speed reported by the GPS plus some heuristics. E.g. block-off (the plane leaves parking and begins taxiing) events are detected when the speed exceeds a threshold (it's configurable; 3 kts works just fine). Similarly, block-on (the plane parks after landing and taxiing to the parking position) is detected by the speed falling below a threshold for some time. Here you could also use the position, but the speed works better (it is less noisy). For wind correction / drift (irrelevant when taxiing but useful to detect landings and touch-and-goes) the user can enter an estimate of wind direction and speed. This value is used to correct the speed reported by GPS. This all works just fine without using the acceleration sensors. I recommend giving the GPS a try. The only risk seems to be that the lower speeds involved on a boat/vessel may result in a worse speed-to-noise ratio, and of course drift is an issue here. (But in my opinion that is also problematic when using the acceleration sensors because of swell.)
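As an illustration of that threshold heuristic, here is a small sketch using Android's Location API (the threshold and hold time are made-up values you would tune for a vessel):

private static final float SPEED_THRESHOLD_MPS = 1.5f; // ≈ 3 kts
private static final long HOLD_MS = 30000L;            // speed must stay above the threshold this long
private long aboveSince = -1;

public boolean isMoving(Location fix) {
    if (fix.hasSpeed() && fix.getSpeed() > SPEED_THRESHOLD_MPS) {
        if (aboveSince < 0) {
            aboveSince = fix.getTime();
        }
        return fix.getTime() - aboveSince >= HOLD_MS;
    }
    aboveSince = -1; // fell below the threshold, reset
    return false;
}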

Answer #10282480, count #4, created: 2012-04-23 14:25:38.0

It is impossible to tell from the accelerometer data alone whether the ship is moving with a constant speed or lain at anchor: the acceleration is zero in both cases.

Either use just the GPS or try sensor fusion (GPS + accelerometer) if you need better precision. Either way, you will need the GPS.

Title: Calculate Android sensor power consumption Id: 10293713, Count: 144 Tags: Answers: 1 AcceptedAnswer: 10294103 Created: 2012-04-24 07:39:57.0 Body:

getPower() returns the power in mA used by a sensor while in use:

Now, I need to calculate how much battery is used by the registration of the sensor.

Does the value returned by getPower() indicate mAh (milliamp-hours) or something else? If yes, is there a way to get the battery's mAh capacity in order to calculate the % of battery used by the sensor?

Popularity: 19.0 Answer #10294103, count #1, created: 2012-04-24 08:10:00.0

Something quite related has been discussed in Google groups not too long ago. You can find the full thread here for reference.

A small excerpt from the last reply in that thread, which should answer your question more or less:

(...) the battery capacity is always given in terms of mAH. (...) What matters is how long a battery can supply a given current at its rated voltage. 3800mAH means that it can supply 3800mA for 1 hour. Knowing this it makes sense now that the API is providing the current drain as a metric of power consumption. You can now calculate how much effect it will have on the battery life as a function of time.
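A small worked example of that calculation (the numbers are illustrative, not from the thread):

// Fraction of a 3800 mAh battery used by a sensor that getPower()
// reports as 0.23 mA, if it stays registered for 24 hours.
double sensorCurrentMa = 0.23;      // value reported by Sensor.getPower()
double batteryCapacityMah = 3800.0; // rated battery capacity
double hoursRegistered = 24.0;

double percentOfBattery =
        (sensorCurrentMa * hoursRegistered) / batteryCapacityMah * 100.0;
// ≈ 0.15% of the battery per day for this sensor alone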

Title: Pure C Code For Video Encoding Id: 10372334, Count: 145 Tags: Answers: 1 AcceptedAnswer: 10372356 Created: 2012-04-29 13:01:34.0 Body:

I'm doing some computer hardware architecture exploration and I am eager to test different tasks on my prototype, so I need some code to simulate the task of video encoding and/or decoding (H.264 would be perfect but other codecs are also OK).

Is there anything that I can use? It doesn't have to be exactly encoding/decoding, just some code that can roughly estimate the same workload with same kind of computations so I can get some performance/power consumption results.

Oh yeah and it's gotta be in "pure C", and without using any sophisticated libraries (math.h is fine) since I'm gonna have to put that onto a hardware module.

Thanks in advance

Popularity: 6.0 Answer #10372356, count #1, created: 2012-04-29 13:05:12.0

Have a look at libavcodec. It is pure C.

Title: Is it possible to know last user activity from a Windows service? Id: 10405465, Count: 146 Tags: Answers: 2 AcceptedAnswer: 10405778 Created: 2012-05-01 22:48:11.0 Body:

I have a Windows service program (written using C++) that is required to perform an energy saving power operation at a certain time of the day. I need to find out if a user might be at the terminal at the time when the power operation is performed and if he/she is, postpone it then. So my question is, how do you know the moment of the last user activity from a Windows service (running as a local system)?

PS. By user activity I mean keyboard and mouse activity.

Popularity: 4.0 Answer #10405778, count #1, created: 2012-05-01 23:31:14.0

Each user session will have to run a background app within its session that communicates with the service, then the app can report the last activity time so the service can make decisions based on that.

Answer #10407600, count #2, created: 2012-05-02 04:11:53.0

Depending on how deterministic it has to be, you could use the task scheduler. It can trigger the task when the computer is idle, wait for a while for it to be, etc. You can add the task manually to begin, then use the API.

Task Scheduler power settings

Title: Perfmon - Refresh rate of power meter Id: 10420586, Count: 147 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-05-02 19:45:11.0 Body:

I'm writing a tool to collect information about power consumption of notebooks. I need to measure the current power consumption, and I use Perfmon to do so. But I found a strange bug. Here is the typical graph of power consumption (this is "Power Meter" - "Power" - "_Total"):

Measurements are updated about once every 10-15 seconds.

But if I run Everest (or AIDA64), its Power Management tab updates more often and the results are more accurate:

Measurements are updated about once every 1-2 seconds.

I do not understand what happens when we run Everest. I really need to get accurate data. Do you have any ideas?

I would really appreciate any suggestions in this regard.

Popularity: 3.0 Answer #10420908, count #1, created: 2012-05-02 20:10:10.0

Maybe this link is useful for you.

http://www.brighthub.com/computing/linux/articles/13876.aspx

Regards, Juan

Title: Reducing power consumption Id: 10468305, Count: 148 Tags: Answers: 1 AcceptedAnswer: 10469013 Created: 2012-05-06 05:15:30.0 Body:

For my application, I want a Notification to be sent to the user at a specified time. To do this, I have set up a Timer and the corresponding TimerTask. To make sure the Notification is sent to the user even if the phone is asleep, I have acquired a PARTIAL_WAKE_LOCK. The problem is that this method draws a lot of power from my battery (my application is responsible for more than 50% of all the power consumption at the end of the day).

Is there another way (a more power efficient one of course) to do what I want to do?

Thanks in advance for the time you will spend trying to help me.

Popularity: 4.0 Answer #10469013, count #1, created: 2012-05-06 07:50:55.0
 > Is there another way (a more power efficient one of course) to > [have a Notification sent to the user at a specified time]? 

You can use the Android AlarmManager for this.

See Using AlarmManager to Schedule Activities on Android as a tutorial and example.
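A minimal sketch of that approach (NotificationReceiver is a hypothetical BroadcastReceiver that would build and post the Notification, and notificationTimeMillis is an assumed variable); an RTC_WAKEUP alarm wakes the device by itself, so no PARTIAL_WAKE_LOCK or running Timer is needed in between:

AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
Intent intent = new Intent(context, NotificationReceiver.class);
PendingIntent pi = PendingIntent.getBroadcast(context, 0, intent,
        PendingIntent.FLAG_UPDATE_CURRENT);
// Fire once at the specified wall-clock time, waking the device if needed.
am.set(AlarmManager.RTC_WAKEUP, notificationTimeMillis, pi);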

Title: Simulation in energy efficiency in Grid Id: 10528334, Count: 149 Tags: Answers: null AcceptedAnswer: null Created: 2012-05-10 06:04:48.0 Body:

I am planning to do my grad project on energy efficiency in Grid networks with data streaming. My plan is to simulate the movement of data and calculate energy consumption in the network. Before starting any coding I would like to select a simulation tool which supports data streams in large-scale networks and energy modeling. I am thinking about using ns2 or GridSim, but before choosing one, I would like to know of any other simulators that I can use. What is the best simulation tool that supports energy modeling and data flow for large-scale data-intensive networks? Thanks in advance.

Popularity: 2.0 Title: Low power sensor module Id: 10578828, Count: 150 Tags: Answers: 1 AcceptedAnswer: 10590709 Created: 2012-05-14 07:06:30.0 Body:

My next project requires me to have efficient wireless sensor modules. Basically, these modules should be able to read temperature, light, etc. sensor data and output it over its wireless transmitter/receiver. It can be any sensor and also it MUST be a transmitter and receiver.

How do I get it to very low power?

I would like this setup to run for a year, maybe six months, but the current prototype I have, with an Arduino chip and an XBee module sending data every minute, drains a 9 V battery in an hour. I have read a lot about this problem and wanted to know if XBees are out of the question. My worry is not the microcontroller; it is how to get efficient wireless communication while staying within the power consumption budget. Basically, what is the best low power wireless module out there?

Popularity: 13.0 Answer #10590709, count #1, created: 2012-05-14 20:41:11.0

Let's look at the science.

Six months on a small battery?

We'll need one with low self discharge characteristics and high capacity.

A 3.6 V LI-Ion might do the trick.

Checking out the Small Battery Companies website, we could use a Prismatic Li-Ion 14 mm x 34 mm x 47 mm that has 1800 mAh. That is about the size you mention.

Let's use a high-efficiency buck-boost DC/DC converter to suck every ounce of juice out of it. So let's assume an average of 90% efficiency; also, using a DC/DC converter we can probably discharge below the recommended voltage and get more out.

In six months there are 0.5*365.25*24 hours = 4383 hours.

(1.800 Ah/4383 hours)*0.9 = 369 μA average.

Picking an XBee module at random, let's assume your transceiver takes 45 mA at 250 kbit/s.

Let's assume you have 1k byte of data to send and receive every minute.

2 * 1024 * 8 bits = 16384 bits. = 66 ms * 60 = 3.96 seconds per hour (or 0.0011 hours)

So we need to wake up for 3.96 seconds every hour and take 45 mA, the rest of the time we sleep and take 1 μA (for the radio), let's ignore the CPU for now.

((1-0.0011) * 1 μA) + (0.0011 * 0.045 A) = 50 μAh (50 μA averaged over 1 hour)

This looks promising, we've still got more than 300 μA to play with.

I don't know what Arduino you are using, but looking at the datasheet for an ATmega168A we have 0.75 μA in power-down mode and 200 μA in active mode. CPU vendors love to quote impossible figures, so let's assume more: 1 μA in power down and 1 mA in active.

((1-0.0011) * 1 μA) + (0.0011 * 1 mA) = 2 μAh (2 μA averages over 1 hour)

So, assuming you don't spend all your power budget on the CPU, spend a lot of time getting the other components as efficient as possible, and don't use LEDs, it might just work.

Title: How to disable Windows wireless power saving on the fly? Id: 10611356, Count: 151 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-05-16 02:44:03.0 Body:

We have some issues with the wireless power saving. If we set the Power Saving Mode of Wireless Adapter Settings to Maximum Performance in the current power plan, all issues are gone. However, this does have an impact on the battery life. So we would like to find a way to turn off Wi-Fi power saving programmatically, only when our program is running.

We tried setting OID_DOT11_POWER_MGMT_REQUEST, but it failed with 0xC0010017, which means NDIS_STATUS_INVALID_OID. Querying is OK, though.

Another approach is to modify the current power scheme, but it may cause problems and confusions if the user switches the power scheme when our program is running.

Does a guru here know a better way? Thanks in advance.

Popularity: 4.0 Answer #12996081, count #1, created: 2012-10-21 08:09:39.0

OID_DOT11_POWER_MGMT_REQUEST shouldn't fail upon set; you should file a bug with your WiFi vendor. Note that the implementation of this OID is the vendor's responsibility, so the real power saving you get is totally dependent on the vendor's driver implementation and the device's characteristics.

Title: AddProximityAlert power consumption on Android Id: 10663479, Count: 152 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-05-19 08:06:31.0 Body:

I am developing a geofencing Android app that notifies users whenever they are close to a certain region (similar to Start Monitoring for Region in iOS).

I know I can use addProximityAlert in Android, but I am concerned about the power consumption as I will add around 50 geo points.

The question is: would the power consumption change as I add more points, or will it stay the same regardless of how many points I add?

Popularity: 5.0 Answer #10663681, count #1, created: 2012-05-19 08:35:27.0

You'd have to make tests to know for sure :-) But my understanding is that the LocationManager checks periodically the current position (one "expensive" GPS check every 4 minutes if screen is off, more often if it's on, couldn't find how often) and then compares the location with all the registered center-points. So the power consumption, GPS-wise, is limited, and after that it's just "normal" CPU consumption created by checking an array of points.

HTH
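For completeness, a hedged sketch of registering several proximity alerts (the coordinates, radius and broadcast action are illustrative):

LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
double[][] points = { { 48.8566, 2.3522 }, { 52.5200, 13.4050 } }; // ~50 in practice
for (int i = 0; i < points.length; i++) {
    Intent intent = new Intent("com.example.PROXIMITY_ALERT"); // hypothetical action
    PendingIntent pi = PendingIntent.getBroadcast(this, i, intent,
            PendingIntent.FLAG_UPDATE_CURRENT);
    // 200 m radius, no expiration (-1)
    lm.addProximityAlert(points[i][0], points[i][1], 200f, -1, pi);
}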

Title: Android: How to get the device Power Consumption amount? Id: 10690516, Count: 153 Tags: Answers: 0 AcceptedAnswer: null Created: 2012-05-21 18:22:48.0 Body:

Is there any way to get this without having to hook a voltmeter up to the device? Maybe from the Dalvik Debug Monitor?

Popularity: 14.0 Title: Measuring power usage of a C++ program Id: 10766587, Count: 154 Tags: Answers: null AcceptedAnswer: null Created: 2012-05-26 13:15:10.0 Body:

I have a C++ program for which I need to measure the power consumption it uses. I first tried using Simplescalar and WATTCH, but neither worked. They did work for simple C programs, but when I tried them for simple C++ programs, I would get errors like "iostream not defined" (or something like that).

So first of all, has anyone gotten SimpleScalar (or WATTCH) to work with C++ programs? If so, could you please guide me to the exact instructions you followed? Because I have practically installed it around 20 times from different sources, but I always got the same error when compiling C++ programs.

Second, any suggestions for other tools I can use to simulate the power usage of a C++ program? I will be running my program on a Unix system.

Popularity: 15.0 Title: Reducing power consumption of Bluetooth server (accept) Id: 10777741, Count: 155 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-05-27 22:29:58.0 Body:

I am interfacing with a Bluetooth embedded device, and I want it to reconnect automatically when the Android phone is in reach. So far I have it working, with a couple of issues. What I also noticed is that while the phone is blocked on the accept() call on the socket, it still consumes quite a bit of power. Nothing compared to a car speakerphone, for example, which does not seem to influence the battery much. I was wondering if people have any tricks to be more power friendly?

Also, currently the accept runs in AcceptThread.run (as in the Bluetooth Chat example), but not in a service. Should I move it to one? Any pointers on how to do the accept in a service and move it to a thread/activity will be appreciated.

Popularity: 2.0 Answer #10778998, count #1, created: 2012-05-28 03:22:09.0

The accept (AcceptThread) code will be exactly the same whether it is in a service or an activity/Application. Whether you should move it into a service depends on whether you need it to keep running after the app closes. If you don't need to move it to a service then don't bother because it makes your app's structure and life-cycle a little more complicated.

I'm rather surprised at your observation that the accept causes increased power consumption. When you create a socket and call accept on it then Android adds your UUID to the list of available Bluetooth services - it doesn't put the Bluetooth radio into a different mode, so why would it cause increased power consumption?

Title: When to register for location updates? Id: 10795692, Count: 156 Tags: Answers: 2 AcceptedAnswer: 10796095 Created: 2012-05-29 08:36:29.0 Body:

I am writing an Android application that requires the user's current location. I register for location updates both from the network, and from GPS using the following code (locationManager is already defined):

// Register the listener with the Location Manager to receive location updates.
locationManager.requestLocationUpdates(
        LocationManager.NETWORK_PROVIDER,
        getResources().getInteger(R.integer.gps_min_time),
        getResources().getInteger(R.integer.gps_min_dist),
        this);
locationManager.requestLocationUpdates(
        LocationManager.GPS_PROVIDER,
        getResources().getInteger(R.integer.gps_min_time),
        getResources().getInteger(R.integer.gps_min_dist),
        this);

I currently have this code in onCreate, but in order to save power, I remove both listeners in onPause and add them both again in onResume.

When the application starts, it adds both listeners twice, once in onCreate, and once in onResume. I have two questions about this:

  1. Does having each listener added twice mean that it actually gets added twice, or does the second call have no effect?
  2. Should I remove the requestLocationUpdates from onCreate and just have them in onResume or should I remove all listeners first in onResume before adding them again?
Popularity: 3.0 Answer #10795998, count #1, created: 2012-05-29 08:58:02.0

I would just add them in onResume(). Create the manager in onCreate() and add and remove listeners in onResume() and onPause().

I don't have an answer to your first question.

Answer #10796095, count #2, created: 2012-05-29 09:04:40.0
  1. Your listener will be added as a key to a HashMap. So if you register this (your Activity) twice and don't override equals(), the second call has no effect, because it is the SAME listener.

  2. Anyway, I would remove the registration inside onCreate and just leave it inside onResume; you will save at least the cost of hashing your listener and the method calls.
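Putting both points together, a minimal sketch of the resulting pattern (using the same fields as the question):

@Override
protected void onResume() {
    super.onResume();
    // Register only while the Activity is in the foreground.
    locationManager.requestLocationUpdates(LocationManager.NETWORK_PROVIDER,
            getResources().getInteger(R.integer.gps_min_time),
            getResources().getInteger(R.integer.gps_min_dist), this);
    locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER,
            getResources().getInteger(R.integer.gps_min_time),
            getResources().getInteger(R.integer.gps_min_dist), this);
}

@Override
protected void onPause() {
    // One call removes this listener from both providers.
    locationManager.removeUpdates(this);
    super.onPause();
}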

Title: TCP IP Listener Android Id: 10804514, Count: 157 Tags: Answers: null AcceptedAnswer: null Created: 2012-05-29 18:17:55.0 Body:

Here is the idea behind what I am trying to accomplish, I am currently communicating with Bluetooth sensors and sending the data at periodic intervals to a remote server over Wi-Fi, now in between transmissions I put the system in sleep mode to save power.

However, I want my processing thread to be interrupted from it's sleep upon reception of a packet from the server. I know I can do this simply by creating another thread in an infinite work loop and waking it up to check for incoming data from the socket upon time intervals, but I was wondering whether there was a nicer way of accomplishing this, e.g. registering a listener, starting a background service to check for reception, creating a custom broadcast receiver ... etc.

I have tried searching online with no success. I don't want the code to do this, just a few links to point me in the right direction so that I can do it myself. Thanks
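
One non-polling direction is to let a dedicated thread block in read() on the already-connected socket; a blocked thread consumes no CPU until the server actually sends something. A rough sketch, assuming a plain java.net.Socket named socket and a hypothetical handleServerPacket() callback that notifies the processing thread:

 Thread listener = new Thread(new Runnable() {
     public void run() {
         try {
             InputStream in = socket.getInputStream();
             byte[] buf = new byte[1024];
             int n;
             while ((n = in.read(buf)) != -1) {   // blocks until data arrives
                 handleServerPacket(buf, n);      // hypothetical: wake the worker thread
             }
         } catch (IOException e) {
             // connection dropped; reconnect or give up
         }
     }
 });
 listener.start();

Note that this alone will not survive the device's CPU sleep: if the system suspends, nothing is delivered until it wakes again, so you may still need a partial wake lock or a push mechanism if true sleep mode is required.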

Popularity: 5.0 Title: Energy consumption of smartphone components Id: 10825162, Count: 158 Tags: Answers: 7 AcceptedAnswer: null Created: 2012-05-30 22:28:44.0 Body:

I'm looking for a list of all the components and their power drainage on an up-to-date smart phone.

  • Accelerometer, gyroscope, magnetometer, etc.
  • Display
  • WiFi
  • Bluetooth
  • GPS
  • CPU
  • Camera
  • Microphone
  • etc.

Preferably in mA so it can be easily compared to the battery's capacity (usually specified in mAh).

A sensor's power draw is actually available via the SDK and can also easily be figured out for most devices on AndroidFragmentation. However, what I'm looking for is comparable data for the other hardware components, to judge their efficiency.

Bonus: Will a request for less frequent updates of a Sensor decrease energy consumption of the Sensor, as it returns only one value for getPower()?

Popularity: 31.0 Answer #10827419, count #1, created: 2012-05-31 04:23:04.0

The display is the most power-consuming part of a smartphone, accounting for up to 60% of total battery drain (it can draw up to 2 W). There is a book called Green IT and its Applications; you can read it online on Google Books.

Answer #10861437, count #2, created: 2012-06-02 10:08:13.0

On any modern Android, go to Settings > Battery (sometimes Settings > About > Battery). You should see a graph of power drain over time, as well as how much was used by what component.

Although consumption varies a lot based on usage patterns, in my experience the top consumers are display, radio, and CPU. I have not seen sensors rank high in energy use on any of my devices, in the absence of bugs. The link provided by Yusuf X places game sensors above CPU.

For more information about optimizing battery use on Android, see Reducing the battery impact of apps that downloads content over a smartphone radio and Optimizing Battery Life.

Answer #10861454, count #3, created: 2012-06-02 10:10:38.0

There was a GoogleIO session on this very subject a few years ago; you can see the video and slides pdf here.

Answer #10866181, count #4, created: 2012-06-02 21:43:30.0

There is an app called PowerTutor that does battery consumption measurements for every phone component and for every process. The code is open, so you can pick up some ideas from there. Note that this app was calibrated for Google's phones, especially the Nexus One.

Answer #10870458, count #5, created: 2012-06-03 12:49:28.0

I'm looking for a list of all the components and their power drainage on an up-to-date smart phone.

That is impossible to answer.

First, different devices will use different varieties of these components, with different power characteristics.

Second, many, if not most, of those components will have no published power statistics, or the specific components themselves may not be knowable without a complete teardown of a device.

Will a request for less frequent updates of a Sensor decrease energy consumption of the Sensor, as it returns only one value for getPower()?

That will depend on the sensor. Some sensors are effectively always "on" (e.g., ambient light sensor), courtesy of the OS, in which case the only incremental power drain for your use of that sensor will be in passing that sensor data to your process. Other sensors might not be regularly used by the OS, meaning that your request for events from that sensor might turn it "on", resulting in power drain from the sensor itself in addition to supplying you with that data.

It would be truly wonderful if all Android devices were instrumented in the way the Qualcomm MDP is, so that we could get fine-grained power detail for our apps and their usage of various components.
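
For the bonus question: the figure the SDK exposes via getPower() is a single static rating per sensor, so it will not change with the requested update rate; whether the real drain drops with slower updates depends on the sensor, as described above. A quick way to dump the rated values (a sketch, assuming it runs inside an Activity or Service):

 SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
 for (Sensor s : sm.getSensorList(Sensor.TYPE_ALL)) {
     // getPower() reports the sensor's rated current in mA while in use.
     Log.d("SensorPower", s.getName() + ": " + s.getPower() + " mA");
 }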

Answer #16215708, count #6, created: 2013-04-25 13:11:57.0

I know it's against the rules to plug your own startup, but what you're asking sounds exactly like what we're working on.

There's an Android performance monitoring tool called "Little Eye Labs". It shows real-time power consumption of an App as it runs on a phone. It currently only supports CPU, Display, GPS, Wifi and 3G, but you'll be able to get the instantaneous power consumed (in mW) by these components.

/end of plug

Note that there's no real way to get this information directly from a device, so the best you can do is model the phone, gather device resource consumption, and model power usage.

Answer #16555788, count #7, created: 2013-05-15 02:27:43.0

There are a couple of detailed studies that I am able to find on this.

  1. A study from the USENIX meeting in 2010 which studies various components of a smartphone (except the camera)
  2. Another study from the Hotmobile mobile computing workshop 2013 that has more information on cameras and displays.

Reference 1 especially seems a great starting point.

Title: Android remote service, how to do a scheduled task? Id: 10908666, Count: 159 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-06-06 05:35:10.0 Body:

I am writing an application which has a remote service running, and I need to do a GPS task every 15 minutes. Will a Handler's postDelayed functionality guarantee that it triggers every 15 minutes without keeping the service in the foreground? If not, is there some other way to do this? (I do not want to keep it in the foreground; I guess that might result in a lot of power consumption.)

Popularity: 4.0 Answer #10908740, count #1, created: 2012-06-06 05:44:57.0

You can use AlarmManager to trigger a service every 15 minutes. The service will do the GPS work for you; after finishing the task you can stop the service. This guarantees that your service is started every 15 minutes and, moreover, stops after fulfilling its purpose.
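
A minimal sketch of that approach, assuming a hypothetical GpsService and a Context named context; an inexact, elapsed-time alarm is a deliberate choice because it lets the system batch wake-ups and save power:

 AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
 PendingIntent pi = PendingIntent.getService(context, 0,
         new Intent(context, GpsService.class), 0);
 // Wake the device and start GpsService roughly every 15 minutes.
 am.setInexactRepeating(AlarmManager.ELAPSED_REALTIME_WAKEUP,
         SystemClock.elapsedRealtime() + AlarmManager.INTERVAL_FIFTEEN_MINUTES,
         AlarmManager.INTERVAL_FIFTEEN_MINUTES, pi);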

Answer #10908780, count #2, created: 2012-06-06 05:49:29.0
 Timer timer = new Timer();
 timer.scheduleAtFixedRate(new TimerTask() {
     public void run() {
         // write the code you want to execute every 15 minutes here
     }
 }, new Date(), 900000);
Title: Energy efficient GPS tracking Id: 10920904, Count: 160 Tags: Answers: 2 AcceptedAnswer: 10921127 Created: 2012-06-06 19:32:37.0 Body:

I am making an application that requires user to send their GPS location to the server. I need it to be done for say every 10 seconds, which is heavy on the energy budget.

Is there any open source implementation, where I can take GPS location once and then use accelerometer and compass to track the user location?

Or anything else which is energy efficient?

Popularity: 16.0 Answer #10920984, count #1, created: 2012-06-06 19:37:21.0

I would say build your own so you can get exactly what you want and avoid running extra code since you are concerned about the energy usage. I would do exactly what you suggested and use the GPS sparingly, maybe every 30 seconds or every minute to get a reference for your calculation and then use the compass and accelerometer in the interim.

Answer #10921127, count #2, created: 2012-06-06 19:48:17.0

Here is a great open-source location tracking library; it's even been recommended by Google.

Title: Interupts Vs Poling a Device Id: 10929875, Count: 161 Tags: Answers: 3 AcceptedAnswer: 10929983 Created: 2012-06-07 10:18:45.0 Body:

In my application a number of devices (camera, A/D, D/A, etc.) are communicating with a server. I have two options for saving power in a device, as not all devices have to work all the time:

1- Do polling, i.e. each device periodically keeps looking at the content of a file where it gets a value for wake or sleep. If it finds wake, it wakes up and does its job.

In this case the device itself will be sleeping, but the driver will be active and polling.

2- Using interrupts, I can wake a device when needed.

I am not able to decide which way to go and why. Can someone please enlighten me in this regard?

Platform: Windows 7, 32 bit, running on Intel Core2Duo

Popularity: 10.0 Answer #10929983, count #1, created: 2012-06-07 10:26:40.0

Polling is imprecise by its nature. The higher your target precision gets, the more wasteful the polling becomes. Ideally, you should consider polling only if you cannot do something with interrupts; otherwise, using an interrupt should be preferred.

One exception to this rule is if you would like to "throttle" something intentionally, for example, when you may get several events per second, but you would like to react to only one event per minute. In such cases you often use a combination of polling and interrupts, where an interrupt sets a flag, and polling does the real job, but only when the flag is set.

Answer #10931373, count #2, created: 2012-06-07 12:02:16.0

If your devices are to be woken up periodically, I would go for the polling with the appropriate frequency (which is always easier to setup because it's just looking at a bit). If the waking events are asynchronous, I would rather go for an interrupt-driven architecture, despite the code and electronic overhead.

Answer #11009267, count #3, created: 2012-06-13 06:22:28.0

Well, it depends on your hardware and software architecture and the complexity of the software. It is always better to choose an interrupt mechanism over polling.

With polling, your controller will be busy continuously polling the hardware to check whether the desired value is available.

Using an interrupt mechanism, by contrast, frees the controller to perform other tasks; when an interrupt arrives, your ISR can perform the specific task needed.

Title: Changing brightness of display (C#) Id: 10981518, Count: 162 Tags: Answers: 1 AcceptedAnswer: 10981753 Created: 2012-06-11 13:55:14.0 Body:

Possible Duplicate:
C# setting screen brightness Windows 7

I searched online for some topics about changing the brightness of the display through C#.
For the most part, I got links for changing the gamma in Windows (here & here), and this is working fine for me. But I was wondering if this is the correct way of reducing the brightness or dimming the display (does this save power the way reducing the brightness of the monitor does?)

Is this a good way to reduce brightness or is there a better way to do the same? I'm on Windows 7 (I forgot what the default gamma value of windows is?! Somebody?)

Popularity: 13.0 Answer #10981753, count #1, created: 2012-06-11 14:08:45.0

Contrast and brightness are properties of the physical monitor, not the software. Windows only knows about gamma. Most tools and guides you will find secretly edit gamma, which is obviously not the same as brightness/contrast.

But I did find this link: "How to Control the ‘Real’ Brightness and Contrast of Monitors by Software"

That page is not a technical explanation of how it's done; it only lists problems with common 'tools' that claim to be able to do it, and then demonstrates a couple of programs that actually communicate with the monitor. But the monitor, as well as the video card, needs to support the DDC protocol.

Maybe you can use the DDC protocol to roll your own approach in C#. There might even be libraries already, but if not, it will be a difficult implementation, I guess.

Title: which code is consuming less power? Id: 11011260, Count: 163 Tags: Answers: 5 AcceptedAnswer: 11011750 Created: 2012-06-13 08:47:46.0 Body:

My goal is to develop and implement a green algorithm for some special situation. I have developed two algorithms for the same.

One has a large number of memory accesses (loads and stores). The pattern is sometimes coalesced and sometimes non-coalesced. I am assuming a worst case where most accesses result in cache misses. See sample code snippet a).

The other has a large number of calculations, roughly equivalent to code snippet b) below.

How do I estimate power consumption in each case? Which one is more energy efficient, and why?

Platform: I will be running these codes on Intel I3 processor, with Windows 7, with 4 GB DRAM, 3 MB Cache.

Note: I do not want to use any external power meter. Also please ignore if you find the code not doing any constructive job. This is because it is only fraction of the complete algorithm.

UPDATE:

It is difficult but not impossible. One can certainly calculate the cost incurred in reading DRAM and in doing multiplications in the CPU's ALU. The only thing is that one must have the required knowledge of the electronics of DRAM and the CPU, which I am lacking at this point in time. At least for the worst case I think this can be established. Worst case means no coalesced accesses and no compiler optimization.

If you can estimate the cost of accessing DRAM and of doing a float multiplication, then why is it impossible to estimate the current, and hence get a rough idea of the power, during these operations? Also see my post: I am not asking how much power is consumed; rather, I am asking which code consumes less/more power, i.e. which one is more energy efficient.

a)

 for (i = 0; i < 1000000; i++) {
     a[i] = b[i];   // a, b are floats in RAM
 }

b)

 for (i = 1; i < 1000000; i++) {
     j = j * i;     // j is a float with some value, which is used later in the program (not shown here)
 }
Popularity: 6.0 Answer #11011583, count #1, created: 2012-06-13 09:08:22.0

Like commenters pointed out, try using a power meter. Estimating power usage even from raw assembly code is difficult if not impossible on modern superscalar architectures.

Answer #11011659, count #2, created: 2012-06-13 09:12:34.0

As long as only the CPU and memory are involved, you can assume power consumption is proportional to runtime.

This may not be 100% accurate, but as near as you can get without an actual measurement.

Answer #11011731, count #3, created: 2012-06-13 09:16:21.0

You can try using some CPU monitoring tools to see which algorithm heats your CPU more. It won't give you solid data but will show whether there is significant difference between these two algorithms in terms of power consumption.

Here I assume that the major power consumer is CPU and algorithms do not require heavy I/O.

Answer #11011750, count #4, created: 2012-06-13 09:17:05.0

To measure the actual power consumption you should add an electricity meter to your power supply (remove the batteries if using a notebook).

Note that you will measure the power consumption of the entire system, so make sure to avoid nuisance parameters (any other system activity, e.g. anti-virus updates, the graphical desktop environment, indexing services, (internal) hardware devices), and perform measurements repeatedly, with and without your algorithms running, to cancel out "background" consumption. If possible, use an embedded system.


Concerning your algorithms, the actual energy efficiency depends not only on the C code but also on the compiler and on the runtime behavior in interaction with the surrounding system. However, here are some resources on what you can do as a developer to help with this:

In particular, take a look at the Tools paragraph in the above "Checklist", as it lists some tools that may help you with rough estimates (based on application profiling). Among others, it lists:

  • Perfmon
  • PwrTest/Windows Driver Kit
  • Windows Event Viewer (Timer tick change events, Microsoft-Windows-Kernel-PowerDiagnostic log)
  • Intel PowerInformer
  • Windows ETW (performance monitoring framework)
  • Intel Application Energy Toolkit
Answer #11061485, count #5, created: 2012-06-16 06:57:20.0

Well, I am done with my preliminary research and discussions with electronics majors.

A rough idea can be obtained by considering two factors:

1- Currents involved: more current, more power dissipation.

2- Power Dissipation due to clock rate. Power dissipation varies with the square of the frequency.

In Snippet a) the DRAMs and memories hardly take much current, so the power dissipation will be very small during each

 a[i]= b[i]; 

operation. The above operation is nothing but data read and write.

Also, the clock for memories is usually much lower than for the CPU: while the CPU is clocked at 3 GHz, memory is clocked at about 133 MHz or so (not all components run at their rated clock). Thus power dissipation is lower because of the lower clock.

In snippet b), it can be seen that I am doing more calculations. This involves more power dissipation because of the much higher clock frequency.

Another factor is that the multiplication itself comprises several times more cycles than a data read/write (provided the memory accesses are coalesced).

Also, it would be great to have an option for measuring, or getting a rough idea of, the power dissipation ("code energy") of some code, as shown below (the colour represents how energy efficient your code is, red being very poor and green being highly energy efficient):

[Image: mock-up of code lines coloured from red (energy-hungry) to green (energy-efficient)]

In short, given today's technologies it is not very difficult for software to estimate power like this (possibly taking many other parameters into account, in addition to those I described above). This would be useful for faster development and evaluation of green algorithms.

Title: Can Android be woken up from sleep by a peripheral device? Id: 11016779, Count: 164 Tags: Answers: 1 AcceptedAnswer: 11016921 Created: 2012-06-13 14:14:26.0 Body:

Can Android be woken up from sleep by a peripheral device such as an arduino microcontroller?

Additional details: I am looking to save power by putting the Android device to sleep and having the low-power peripheral device wake up the Android device only when an "interesting event" occurs (e.g. abnormal sensor reading)

Popularity: 4.0 Answer #11016921, count #1, created: 2012-06-13 14:22:15.0

The device probably cannot be woken from true CPU sleep (which is a level beyond having the screen off) by ordinary software/signalling means.

However, enabling a 5v power source to the USB jack probably will wake up the majority of devices which can charge via USB.

If they are connected to an actual USB host (vs a simple power supply/charger) my suspicion is that they would not enter CPU sleep at all. Both the android accessory kit and IOIO schemes have the external microcontroller functioning as a USB host and providing power, so it's likely sleep would not be an issue. Or if you need to save power, you could probably make the externally sourced power switchable.

Two additional possibilities to consider are using a partial wakelock to prevent CPU sleep, or setting alarms to wake the device periodically so that background code can check for some event.
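
Rough sketches of those two fallbacks, assuming the WAKE_LOCK permission is declared and that checkSensorPendingIntent is a PendingIntent you have built for your checking code (both names are illustrative):

 // Option 1: partial wake lock - the CPU keeps running with the screen off.
 PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
 PowerManager.WakeLock wl = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "SensorBridge");
 wl.acquire();
 // ... talk to the microcontroller ...
 wl.release();   // release as soon as possible; holding the lock is what costs battery

 // Option 2: let the device sleep, but wake it once a minute to check for events.
 AlarmManager am = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
 am.setRepeating(AlarmManager.RTC_WAKEUP, System.currentTimeMillis(),
         60 * 1000, checkSensorPendingIntent);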

Title: Java Socket Connection is flooding network OR resulting in high ping Id: 11067070, Count: 165 Tags: Answers: 3 AcceptedAnswer: 11067346 Created: 2012-06-16 21:11:51.0 Body:

I have a little problem with my Java socket code. I'm writing an Android client application which sends data to a Java multithreaded socket server on my PC through a direct(!) wireless connection. It works fine, but I want to improve it for mobile use, as it is currently very power consuming. When I remove two particular lines in my code, the CPU usage of my mobile device (HTC One X) is totally okay, but then my connection seems to have high ping rates or something like that...

Here is a server code snippet where I receive the client's data:

 while (true) {
     try {
         ....
         Object obj = in.readObject();
         if (obj != null) {
             Class clazz = obj.getClass();
             String className = clazz.getName();
             if (className.equals("java.lang.String")) {
                 String cmd = (String) obj;
                 if (cmd.equals("dc")) {
                     System.out.println("Client " + id + " disconnected!");
                     Server.connectedClients[id - 1] = false;
                     break;
                 }
                 if (cmd.substring(0, 1).equals("!")) {
                     robot.keyRelease(PlayerEnum.getKey(cmd, id));
                 } else {
                     robot.keyPress(PlayerEnum.getKey(cmd, id));
                 }
             }
         }
     } catch ....

Here's the client part, where I send my data in a while loop:

 private void networking() {
     try {
         if (client != null) {
             ....
             out.writeObject(sendQueue.poll());
             ....
         }
     } catch ....

When I write it this way, I send data every time the while loop executes; when sendQueue is empty, a null "Object" is sent. This results in "high" network traffic and "high" CPU usage. BUT: all sent commands are received nearly immediately.

When I change the code to the following:

while(true) ... if(sendQueue.peek() != null) { out.writeObject(sendQueue.poll()); } ... 

the CPU usage is totally okay, but I'm getting some lag: the commands do not arrive fast enough. As I said, it works fine (besides CPU usage) if I'm sending data (with those null objects) on every loop iteration, but I'm sure this is very rough coding style because I'm essentially flooding the network. Any hints?

what am i doing wrong??


Thanks for your Help!

Sincerly yours, maaft

Popularity: 7.0 Answer #11067346, count #1, created: 2012-06-16 21:57:43.0

The CPU-intensive version of your code is flooding the output stream with null values. They count as data to be transmitted. Although your server explicitly ignores them, they help to eventually force any useful data through as well.

Your modified code with the peek is more reasonable. It is good form to call flush after writeObject. The written object could otherwise be stuck in output buffers and waiting for more items to come. Buffering is a performance optimization for cases where many objects are sent together. Flushing is not needed with stream classes that do not do buffering.

Even better:

 Object item = sendQueue.poll();
 if (item != null) {
     out.writeObject(item);
     out.flush(); // maybe not needed, depending on the class of your stream
 }

This is slightly faster; there is no point in evaluating peek if you intend to poll immediately anyway.

Furthermore, call socket.setTcpNoDelay(true) on the socket before you pass it to the SocketOutputStream (assuming that is how you create the output streams). That disables the Nagle algorithm which may not necessarily be the best decision for conservation of network bandwidth, but it is a quick way to check that apart from tuning TCP send/receive buffers your client and server work correctly. If you have a direct connection to your server I would not worry about disabling Nagle algorithm at all.

Answer #11075011, count #2, created: 2012-06-17 21:40:54.0

You should use a blocking queue and block on the take, rather than having poll() return null. There is no point in sending the nulls at all; it's just a waste of everybody's time, bandwidth, and money.
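
A sketch of what that looks like with a LinkedBlockingQueue, reusing the out stream and a running flag from the question's code (assumptions, not the original code):

 BlockingQueue<Object> sendQueue = new LinkedBlockingQueue<Object>();

 // Sender loop: take() parks the thread until an item is queued, so no CPU is
 // burned and nothing is written while there is nothing to send.
 try {
     while (running) {
         Object item = sendQueue.take();   // blocks instead of returning null
         out.writeObject(item);
         out.flush();
     }
 } catch (InterruptedException e) {
     Thread.currentThread().interrupt();   // shutting down
 } catch (IOException e) {
     // connection lost
 }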

Answer #11209486, count #3, created: 2012-06-26 14:22:25.0

Just as a further note, you might want to take a look at the ARO tool for Android that helps you to do optimization of your app including network usage. http://developer.att.com/developer/legalAgreementPage.jsp?passedItemId=9700312

Title: controlling CPU usage on android Id: 11224902, Count: 166 Tags: Answers: null AcceptedAnswer: null Created: 2012-06-27 11:11:51.0 Body:

I want to create an app which can control CPU usage. Actually, my main aim is to create power consumption vs. CPU usage data for my Android device.

In the power tutor app's paper it is written that they "create a CPU use controller, which controls the duty cycle of a computation-intensive task"

I don't know what the duty cycle is here, or which computation-intensive task they are talking about.

Please help.

power tutor paper

more information about power tutor could be found at powertutor.org

Popularity: 4.0 Title: How can we reduce the power consumption of our iOS game? Id: 11234335, Count: 167 Tags: Answers: 4 AcceptedAnswer: null Created: 2012-06-27 20:26:44.0 Body:

We just developed an iOS game and users have been complaining that it drains their battery power. It plays at 60 frames-per-second and uses a proprietary gaming engine (written in C#). May one of those be the issue or are there other common factors that should be investigated first?

Popularity: 8.0 Answer #11234387, count #1, created: 2012-06-27 20:31:26.0

Apple have some guidelines on reducing power consumption in their iOS programming guide

Good place to get started on some tips.

Answer #11234429, count #2, created: 2012-06-27 20:34:43.0

There may be one simple answer, try running your game at 30, or maybe even as low as 24 FPS. Is there any REAL reason you need to be running it SO fast???

I stated 24 by the way as it is "technically" the fastest your eyes (for the majority of human beings) can detect.

In video, we try to go higher because there are artifacts that can be seen from the recording process, but because games have generated scenes, generally you dont NEED to go higher than 24.

Answer #11234438, count #3, created: 2012-06-27 20:35:38.0

A good first step would be to reduce the frame-rate to 30 FPS. For any reasonable game, 60 FPS is overkill. At some point, your eyes just cannot tell the difference, unless the frame-rate is skipping. That rate occurs at about 24-30 FPS, and that's why It's most used for videos & gaming.

I would caution you though, if you do have a real-time game (especially one based on reflexes), that you do your game logic on another thread. If you do not, you could have flaw that other games have, for example:

Call Of Duty: Modern Warfare 3 has a major flaw in it's engine design in that the fire rate that your gun shoots is determined solely by the frame rate of the game, and not by a background thread.

This makes certain weapons more dangerous than others, because at a frame rate of about 60 FPS, they have an integer amount of time per shot.

So with that said, just try reducing your framerate. That alone is probably what's eating most of the battery.

Answer #11243543, count #4, created: 2012-06-28 11:24:10.0

Firstly, run the code through Instruments and see how it affects CPU usage (constant high CPU will drain the battery). Also, do you use any device features such as GPS or Wi-Fi? These will drain the battery further.

Secondly, do you run any background processes when your app should suspend that might be eating away at battery?

You can keep track of any performance you enhance by checking device logs for power consumption, making a change and saving another log.

follow these instructions to accomplish this

Title: iOS chat APNS, sockets or time interval Id: 11271758, Count: 168 Tags: Answers: 2 AcceptedAnswer: 11290726 Created: 2012-06-30 05:37:35.0 Body:

I'm making a chat app for iPhone, but I'm not sure how conversation messages should arrive instantly.

I have read tons of Google results on this topic, including the ones on:
- http://www.raywenderlich.com/3932/how-to-create-a-socket-based-iphone-app-and-server
- http://www.raywenderlich.com/3443/apple-push-notification-services-tutorial-part-12

APNS approach:
An invisible notification will be pushed to the iPhone indicating that a new message is ready to be read. So the app will make a request for unread messages. So instead of manually polling new messages, I will let APNS help with that. But I'm not sure?

Sockets approach:
Making a socket connection that stays open to share data. When a new message is found in the DB, it automatically sends the data to the app. But what about IP ranges, firewalls, power consumption, and other things? Again I'm not sure :(

Polling approach:
Poll at a fixed time interval; power consumption is my enemy here.

My question:
- Which approach is best?
- Other suggestions?
- I really need some cons and pros from people with experience on this topic.

Examples is always good.
Thanks

Popularity: 29.0 Answer #11290683, count #1, created: 2012-07-02 09:05:01.0

Use XMPP SERVER for chat purpose

http://mobile.tutsplus.com/tutorials/iphone/building-a-jabber-client-for-ios-xmpp-integration/

I hope this link will be useful to you.

Answer #11290726, count #2, created: 2012-07-02 09:08:14.0

I think your main concern is how to receive new messages while your application is in the background. Because it's not like I'm going to have a messenger app all day open when I can get notifications with apps like WhatsApp.

Rather than TCP/IP sockets you could use WebSockets. Since it is HTTP there are no firewall problems, BUT it requires a permanent connection with the application in the foreground, which drains the battery.

And because only music, location, or voice-over-IP apps are allowed to run in the background, you can't poll unless the application is open. Note that if you register for those background modes and do something else, Apple will notice and reject your app.

Therefore, use APNS. That's what WhatsApp does.

Title: Running cpu intensive task in android device Id: 11334109, Count: 169 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-07-04 18:44:01.0 Body:

I am developing an app on Android in which I need to monitor CPU usage vs. power consumption. In order to do that I need something weird:

I need to run some computationally expensive task, which can use as much CPU as possible.

I don't want that task to use any other component like the LCD, audio, Wi-Fi, etc. Just pure CPU.

If you guys have any answer to this weird sort of question please help... Thanks

Popularity: 4.0 Answer #11334148, count #1, created: 2012-07-04 18:49:01.0

There are several, like finding prime numbers by brute force, Ackermann Function, or Travelling Salesman for example.
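
A minimal sketch of such a pure-CPU load (brute-force prime counting; the bound and the log tag are arbitrary):

 // Counts primes by trial division; touches no display, audio, or network.
 int count = 0;
 for (int n = 2; n < 200000; n++) {
     boolean prime = true;
     for (int d = 2; d * d <= n; d++) {
         if (n % d == 0) { prime = false; break; }
     }
     if (prime) count++;
 }
 Log.d("CpuLoad", "primes found: " + count);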

Title: C++ - SDL: Limiting framerate issue Id: 11344721, Count: 170 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-07-05 12:50:54.0 Body:

Although the following code does some power saving, the FPS is not capped properly. When it is supposed to lock the framerate at 60 FPS, I get 82. Same for 30, I get 49 FPS.

Calculating FPS:

 previousTime = currentTime;
 currentTime = SDL_GetTicks();
 fps_++;
 if (currentTime - lastOutput >= 1000) {
     lastOutput = currentTime;
     fps = fps_; // the variable 'fps' is displayed
     fps_ = 0;
 }

Limiting FPS:

if (currentTime - previousTime < 1000 / maxFPS) { SDL_Delay(1000 / maxFPS - currentTime + previousTime); } 

What did I mess up?

Popularity: 10.0 Answer #11379959, count #1, created: 2012-07-08 01:15:18.0

I'm defo not an expert, but you can try this!

SDL_Delay(1000 / maxFPS - SDL_GetTicks() + previousTime); 

Using a newly calculated current time might help

Title: How do I get the GPS status (on or off) in android 2.2 Id: 11356418, Count: 171 Tags: Answers: 0 AcceptedAnswer: null Created: 2012-07-06 05:37:29.0 Body:

I wrote the code below, but it doesn't work; can anybody help me? I just want to passively receive the GPS status, rather than proactively query it. Saving power is most important.

there is no message output.

 package com.sharelbs.lbs.service;

 import android.content.BroadcastReceiver;
 import android.content.Context;
 import android.content.Intent;
 import android.util.Log;

 public class GPStStatusReceiver extends BroadcastReceiver {
     public static final String GPS_ENABLED_CHANGE_ACTION = "android.location.GPS_ENABLED_CHANGE";

     @Override
     public void onReceive(Context context, Intent intent) {
         Log.d("----------------FFFFFFFFFFFF----------------", "GPS Status onReceive");
         if (intent.getAction().equals(GPS_ENABLED_CHANGE_ACTION)) {
             Log.d("----------------FFFFFFFFFFFF----------------", "GPS Status Changed");
         }
     }
 }

there is my Manifest.xml:

 <uses-permission android:name="android.permission.ACCESS_COARSE_UPDATES" />
 <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
 <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

 <receiver android:name=".service.GPStStatusReceiver">
     <intent-filter>
         <action android:name="android.location.GPS_ENABLED_CHANGE" />
     </intent-filter>
 </receiver>
Popularity: 10.0 Title: Implementing an event and periodically driven "script language" in C++? Id: 11366329, Count: 172 Tags: Answers: 1 AcceptedAnswer: 11368576 Created: 2012-07-06 16:42:50.0 Body:

Background:
I want to create an automation framework in C++ where on the one hand "sensors" and "actors" and on the other "logic engines" can be connected to a "core".
The "sensors" and "actors" might be connected to the machine running the "core", but some might also be accessible via a field bus or via normal computer network. Some might work continuous or periodically (e.g. every 100 milliseconds a new value), others might work event driven (e.g. only when a switch is [de]activated a message will come with the new state).
The "logic engine" would be sort of pluggable into the core and e.g. consist out of embedded well known script languages (Perl, Python, Lua, ...). There will run different little scripts from the users that can subscribe to "sensors" and write to "actors".
The "core" would route the sensor/actor informations to the subscribed scripts and call them. Some just after the event occurred, others periodically as defined in a scheduler.

Additional requirements:

  • The systems ("server") running this automation application might also be quite small (500MHz x86 and 256 MB RAM) or if possible even tiny (OpenWRT based router) as power consumption is an issue
    => efficiency is important
    => multicore support not for the moment, but I'm sure it'll become important soon - so the design has to support it
  • Some sort of fail save mode has to be possible, e.g. two systems monitoring each other
  • application / framework will be GPL => all used libraries have to be compatible
  • the server would run Linux, but cross platform would be nice

The big question:
What is the best architecture for such a kind of application / framework?

My reasoning:
To avoid reinventing the wheel, I was wondering about using MPI to do all the event handling.
This would allow me to focus on the relevant stuff and not on the message handling, especially when two or more "servers" would work together (acting as watchdogs for each other, as well as each having a few sensors and actors connected). Each sensor and actor handler, as well as the logic engines themselves, would only be required to implement a predefined MPI-based interface and thus be crash safe. The core could restart each one when it's no longer responsive.

The additional questions:

  • Would that be even possible with MPI? (It'd be used a bit out of context...)
  • Would the overhead of MPI be too big? Should I just write it myself using sockets and threads?
  • Are there other libraries possible that are better suited in this case?
Popularity: 4.0 Answer #11368576, count #1, created: 2012-07-06 19:30:44.0

You should be able to construct your system using MPI, but I think MPI is too much focused on high performance computing. Moreover, since it was designed for C, it does not really fit the object oriented way of programming very much. IMO there are other approaches better suited for your needs:

  • Boost ASIO might be a good fit for designing your system. It includes both network functionality and helps at event-driven programming (which could be a good way to design your system). You can have a look at Think-Async webpage for some examples on using ASIO for event-driven programming.

  • You could also use plain threads and borrow the network capabilities from ASIO (without using the event-driven programming parts). If you can use C++11, then you can directly use std::thread and all the other functionality available (mutex, conditional variables, futures, etc.). If you cannot use C++11, you can always use Boost Thread.

  • Finally, if you really want to go for MPI, you can have a look at Boost MPI. At least you will have a much more C++ friendly way of using MPI.

Title: Constant FPS Android OpenGLES Id: 11381543, Count: 173 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-07-08 07:47:15.0 Body:

Hello android developers,

I am developing a simple game for Android in Eclipse using OpenGLES 1.0. I am using Samsung Galaxy S2 Android(2.3) as a device for development.

And I have a question about dual core and making frame rate constant.

So I have managed to create a GLSurfaceView and override the onDrawFrame() function, where I call a LogicUpdate(deltatime) function and a Render() function.

Yes, all in single thread for now.

The problem I am getting is with dual core. If I disable dual core by going to Settings -> Power saving and checking System power saving, rendering is automatically locked at 30 FPS. But if I enable dual core by unchecking System power saving, rendering is locked at 60 FPS, but the phone gets hot and drains the battery really fast.

So the idea is to keep my game running at 30 FPS to save some battery.

To do this I use the code below.

Before I do the logic update I call this piece of code; remember, all this is done in onDrawFrame().

 if (CONST_FPS > 0 && StartTime > 0) {
     /////////////////////////////////////////////////////////////////
     // Get frame time
     ////////////////////////////////////////////////////////////////
     long endTime = System.currentTimeMillis();
     long time = endTime - StartTime;

     long wantedtime = 1000 / CONST_FPS;

     long wait = 0;
     if (time < wantedtime) {
         wait = wantedtime - time;

         Thread.sleep(wait);
     } else {
         // Time too big, game will slow down
     }
 }

Where CONST_FPS = 30

And then

 StartTime = System.currentTimeMillis();

 UpdateLogic(1.0 / CONST_FPS);
 Render();

Gameplay at 30 FPS is very smooth, mainly because it does not need to lock the FPS. BUT, when trying to lock 60 FPS down to 30 FPS I get stuttering. I did some research and found out that Thread.sleep() is not precise. Is this true? What else can I do to make gameplay smoother when locking 60 FPS down to 30 FPS?

Thanks for the answer ...

Popularity: 24.0 Answer #11425491, count #1, created: 2012-07-11 04:20:26.0

You should use elapsed time to scale all your movement, so that the movement stays smooth at varying FPS rates. You can get elapsed time like this:

 long currentTime = System.currentTimeMillis();
 float elapsed = (System.currentTimeMillis() - lastFrameTime) * .001f; // convert ms to seconds
 lastFrameTime = currentTime;

Then express your velocities in units per second, and update position like this:

sprite.x += (sprite.xspeed * elapsed); 
Answer #11789860, count #2, created: 2012-08-03 05:38:11.0
 public void onSurfaceCreated(GL10 gl, EGLConfig config) {
     GLES20.glClearColor(0.1f, 0.1f, 0.1f, 1);
     GLES20.glEnable(GLES20.GL_DEPTH_TEST);
     GLES20.glDepthFunc(GLES20.GL_LEQUAL);
     GLES20Renderer.programLight = GLES20.glCreateProgram();
     int vertexShaderLight = GLES20Renderer.loadShader(GLES20.GL_VERTEX_SHADER, GLES20Renderer.vertexShaderCodeLight);
     int fragmentShaderLight = GLES20Renderer.loadShader(GLES20.GL_FRAGMENT_SHADER, GLES20Renderer.fragmentShaderCodeLight);
     GLES20.glAttachShader(GLES20Renderer.programLight, vertexShaderLight);
     GLES20.glAttachShader(GLES20Renderer.programLight, fragmentShaderLight);
     GLES20.glLinkProgram(GLES20Renderer.programLight);
     System.gc();
 }

 public void onSurfaceChanged(GL10 gl, int width, int height) {
     gl.glViewport(0, 0, width, height);
     float ratio = (float) width / height;
     Matrix.setLookAtM(GLES20Renderer.ViewMatrix, 0, 0, 0, 5f, 0, 0, 0, 0, 1, 0);
     Matrix.frustumM(GLES20Renderer.ProjectionMatrix, 0, -ratio, ratio, -1, 1, 2, 8);
     Matrix.setIdentityM(LightModelMatrix, 0);
     Matrix.translateM(LightModelMatrix, 0, -1.0f, 0.0f, 3f);
     Matrix.multiplyMV(LightPosInWorldSpace, 0, LightModelMatrix, 0, LightPosInModelSpace, 0);
     Matrix.multiplyMV(LightPosInEyeSpace, 0, ViewMatrix, 0, LightPosInWorldSpace, 0);
     System.gc();
 }

 public void onDrawFrame(GL10 gl) {
     long deltaTime, startTime, endTime;
     startTime = SystemClock.uptimeMillis() % 1000;
     gl.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
     synchronized (this) {
         updateModel(GLES20Renderer._xAngle, GLES20Renderer._yAngle, GLES20Renderer._zAngle);
     }
     renderModel(gl);
     endTime = SystemClock.uptimeMillis() % 1000;
     deltaTime = endTime - startTime;
     if (deltaTime < 30) {
         try {
             Thread.sleep(30 - deltaTime);
         } catch (InterruptedException e) {
             //e.printStackTrace();
         }
     }
     System.gc();
 }

ATTENTION:

  1. Never use sleep in the UI thread; it is very very very bad
  2. Use statics: although this is bad OOP practice, it is useful here
  3. Use vertexbuffer objects
  4. Initialize in onsurfacechanged and fetch locations (glget__location) here too
  5. It is not always your fault: the Froyo garbage collector is bad, and on Gingerbread the touch drivers fire too many events (Google this), so gradually switch your app/game to input methods that do not use touch, like the Android sensors
  6. Upgrade to Gingerbread
Title: Is it possible to configure an Android install to have a quick boot time? (<5 seconds) Id: 11401458, Count: 174 Tags: Answers: null AcceptedAnswer: null Created: 2012-07-09 19:07:55.0 Body:

I am looking at Android for a project (in car entertainment*) in which power consumption, especially when not in use, is a great concern, but the environment is tightly controlled and predictable.

The problem, however, is that Android has no hibernate mode and is pretty liberal about allowing apps processor cycles in standby, making it hard to gauge power consumption when the device is not in use. So I would like to shut it down entirely when not needed, which means it needs to boot fast.

I know many Linux variants have achieved very quick boot times, and less than 10 seconds on some could be considered standard. I have also read about Android's long boot times, and it seems a lot of the delays in loading, like on any OS, could be considered optional?

For example, the presentation states that

"Android can boot without preloading any classes"

and that this

"can result in bad application load times and memory usage later"

But this is not a concern as long as it is deterministic: if you can find which classes an MP3 player requires, for example, turn off all the others, and gain 10 seconds, it doesn't matter that other apps would take 20 seconds to load, because it will never load them.

The same thing goes for the network stack which wouldn't be needed, and for many of the packages, certificate checking, etc.

I know 50 seconds to 5 seconds is a very tall order, but is there any reason it is not doable?
Has anyone attempted something like it before? Is Android customizable enough to allow this?

If Android were to be "streamlined" enough, could it boot in 5 seconds?

EDIT: The hardware this would be targetting would be 'embedded PC level': think http://store.tinygreenpc.com/tiny-green-pcs/trim-slice/h-diskless.html

EDIT: I am also aware of Ubiquitous QuickBoot which though highly impressive is most definitely out of my price range!

(*I like Android over 'standard' Linux distros for this because the entire UI design and ecosystem has been geared around simplicity and portability which makes it perfect for this.)

Popularity: 7.0 Title: Linux universal WiFi driver for managing power Id: 11736725, Count: 175 Tags: Answers: null AcceptedAnswer: null Created: 2012-07-31 09:11:42.0 Body:

I know it's somewhat long, but I can't shorten it.

There is one general problem on many Linux laptops: managing the power of WiFi devices. There are some projects devoted to solving it, like Wireless Extensions, cfg80211, and mac80211, that try to achieve it. And there has been a special subsystem in Linux since the 2.6 kernel for controlling wireless devices: /dev/rfkill

But none of these approaches can manage the power consumption of WiFi cards everywhere. For example, as the owner of a Samsung laptop, I have found a ready solution that works only for Samsung laptops (and from the developer's page you can see that he had problems even with some Samsung models) (that driver).

I looked into the code and found that it's actually a module which is loaded into the kernel and controls WiFi devices using DMA.

I wonder about the problems that impede the creation of one universal driver for controlling power management. I see the following problems:

1. Different WiFi adapters (actually this is not a problem of different laptop manufacturers, but rather of different adapters; one manufacturer tends to use the same WiFi adapter).

2. Different WiFi adapters can have different I/O ports in memory, and it's very hard to locate every driver in memory.

3. Big projects aim to give full support for WiFi adapters; perhaps that's why they neglect the power-control feature (even though controlling power is much simpler to implement than the transmission or WPA encryption features).

I am sure I don't see the zillion other problems hiding here.

What kinds of problems would I meet if I tried to implement it?

Popularity: 3.0 Title: Cost sensitive folds Id: 11766365, Count: 176 Tags: Answers: 4 AcceptedAnswer: 11767047 Created: 2012-08-01 19:40:53.0 Body:

Let me explain what I mean by a cost-sensitive fold with an example: calculating pi with arbitrary precision. We can use the Leibniz formula (not very efficient, but nice and simple) and lazy lists like this:

pi = foldr1 (+) [(fromIntegral $ 4*(-1)^i)/(fromIntegral $ 2*i+1) | i<-[0..]] 

Now, obviously this computation will never complete because we must compute every value in the infinite list. But in practice, I don't need the exact value of pi, I just need it to some specified number of decimal places. I could define pi' like this:

pi' n = foldr1 (+) [(fromIntegral $ 4*(-1)^i)/(fromIntegral $ 2*i+1) | i<-[0..n]] 

but it's not at all obvious what value for n I need to pass in to get the precision I want. What I need is some sort of cost-sensitive fold, that will stop folding whenever I achieve the required accuracy. Does such a fold exist?

(Note that in this case it is easy to see if we've achieved the required accuracy. Because the Leibniz formula uses a sequence that alternates sign with each term, the error will always be less than the absolute value of the next term in the sequence.)

Edit: It would be really cool to have cost-sensitive folds that could also consider computation time/power consumption. For example, I want the most accurate value of pi given that I have 1 hour of computation time and 10kW-hrs to spend. But I realize this would no longer be strictly functional.

Popularity: 13.0 Answer #11766986, count #1, created: 2012-08-01 20:21:43.0

The Haskell way to do this is to produce an infinite list of ever-more-accurate answers, then reach in and grab the one with the right accuracy.

 import Data.List (findIndex)

 pis = scanl (+) 0 [4*(-1)**i/(2*i+1) | i <- [0..]]
 accuracies = zipWith (\x y -> abs (x-y)) pis (tail pis)

 piToWithin epsilon = case findIndex (<epsilon) accuracies of
   Just n -> pis !! n
   Nothing -> error "Wow, a non-terminating loop terminated!"
Answer #11767047, count #2, created: 2012-08-01 20:26:27.0

My recommendation is to use a scan instead of a fold. Then traverse the resulting list, until you find the precision you want. A useful special case of the left scan (scanl) is the iterate function:

 piList :: [Double]
 piList = map (4*) . scanl (+) 0 . map recip . iterate (\x -> -(x + 2 * signum x)) $ 1

You can now traverse this list. For example you might check when the change to a certain precision becomes invisible:

 findPrec :: (Num a, Ord a) => a -> [a] -> Maybe a
 findPrec p (x0:x1:xs)
   | abs (x1 - x0) <= p = Just x0
   | otherwise          = findPrec p (x1:xs)
 findPrec _ _ = Nothing
Answer #11767131, count #3, created: 2012-08-01 20:32:36.0

In the general case, the fold you ask for does not exist. You have to provide the accuracy estimate yourself. That may be a problem in general, but all practically useful sequences do have a reasonable upper bound on the numerical accuracy of their partial sums, usually obtained by someone else. I would encourage you to read the relevant textbooks, such as numerical analysis textbooks, which usually have a section on estimating the sum of an infinite numerical sequence and bounding the error.

There is, however, a general rule: if a numerical process has a limit, then the successive increments tend toward zero roughly as a geometric progression, so if two successive increments are 1.5 and 1.0, the following one will be somewhere around 0.6, and so on (it is better to base such an estimate on the last several terms, not just two). Using this rule and the formula for the sum of a geometric progression, you can usually find a reasonable estimate of the numerical accuracy. Note: this is an empirical rule (it has a name, but I forgot it), not a strict theorem.

Additionally, the IEEE Double/Float representation has limited accuracy, and at some point adding small numbers from the tail of the sequence will not change the computed partial sum. You are encouraged to read about floating-point representation on x86; for this case, you may find your fold.

Summary: there is no solution in general, but in practice there are usually reasonable estimates for most useful sequences, obtained from the literature for each sequence type or from the numerical limitations of the hardware.

Answer #11782862, count #4, created: 2012-08-02 17:37:28.0

Some good examples of what Daniel Wagner suggests above may be found in the paper Why Functional Programming Matters

Specific examples from the paper are: iterative root-finding, numeric differentiation and numeric integration.

Title: How to measure ARM power consumption? Id: 11790833, Count: 177 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-08-03 07:01:24.0 Body:

I'm trying to measure the instantaneous power consumption of the processor on an ARM Cortex-A9 / Ubuntu 12.04 platform.

Does anyone know how to do this?

Popularity: 5.0 Answer #11790880, count #1, created: 2012-08-03 07:04:35.0

I think it mainly depends on your processor or SoC manufacturer. ARM defines the processor core; the manufacturer defines everything around it (like peripherals, etc.).

Also, since Ubuntu is ported to your platform, there may already be some power-measuring application which supports that platform.

Answer #11819531, count #2, created: 2012-08-05 19:45:40.0

There are 4 obvious approaches to this:

  • Estimate it from other measurable parameters (e.g. CPU load)
  • Measuring current sense resistors in the on-board power supplies
  • Measuring entire-system power draw using an external supply with some kind of data-logging [a low value resistor and a voltmeter can also be used]
  • [If measuring power draw by a certain application] run the code on some other device that does have this functionality. [Apple's dev-tools and iOS provide incredible levels of support for this. Also fantastic for profiling too].

Since you're using the OMAP4460 (Pandaboard per chance?) it'll probably be paired with the TWL6030 power supply IC. A quick look at the datasheet suggests that it's capable of measuring current draw when running from battery (this is how the battery level indicator is implemented). There will be driver support for this. The OMAP4430 (and probably by extension 4460) doesn't have power supply monitoring of its own.

Might also be worth looking on TI's website for white-papers. This is a common enough thing to do.

Title: How much power consume by GPS application? Id: 11805041, Count: 178 Tags: Answers: 1 AcceptedAnswer: 11805498 Created: 2012-08-04 00:55:11.0 Body:

Hey, I want to know the per-minute power consumption of a GPS location application for Android on a Samsung Galaxy S II, both when the application is running in the foreground and when it is running in the background. Is there any method by which I can test this if my application frequently gets location updates and sends them to the server? Also, in the above scenario, what is a good interval to set for MINIMUM_TIME_BETWEEN_UPDATES in LocationManager's requestLocationUpdates?

Popularity: 5.0 Answer #11805498, count #1, created: 2012-08-04 02:42:15.0

Perhaps this app would help. It seems to specifically have a measurement for how much power the GPS is using. If you turn off other apps using GPS, you might be able to get a good handle on the battery consumption.

http://gigaom.com/mobile/android-power-consumption-app/

Title: iCloud incremental data Id: 11894108, Count: 179 Tags: Answers: 1 AcceptedAnswer: 11896321 Created: 2012-08-10 01:01:23.0 Body:

Reading the iCloud design docs, it mentions:

Because the system tracks changes to the document, it is able to upload only the parts that changed, as shown in step 2. This optimization reduces iCloud network traffic and also reduces the amount of power consumed by the device—important for battery-based devices.

In my scenario, I have a plist file that a UIDocument tracks. What if I replace the plist file with a copy of same plist, same filename, same path? I know that the metadata gets updated, but does the entire file get transferred over to iCloud again?

Popularity: 7.0 Answer #11896321, count #1, created: 2012-08-10 06:06:21.0

iCloud does not track based on UID, it tracks on filename. Your file will be diffed and only the changes will be sent to iCloud.

If you want UID tracking, Apple recommends that you add a UID and document schema version to your file formats (as they do).

Title: How to get the type of connected monitor(s) on Windows XP? Id: 11943099, Count: 180 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-08-13 22:14:10.0 Body:

I need to know the type of the monitor(s) -- i.e. flat screen, CRT, etc -- that is used by the computer from a C++ program. The main requirement is for this code to work on Windows XP SP3 (because otherwise it's almost a given that the system runs on an LCD screen.)

I need it to implement screen dimming to save energy, which will work only on CRTs and have an opposite effect on flat screens.

Any idea how to do this?

Popularity: 13.0 Answer #11988494, count #1, created: 2012-08-16 13:42:33.0

You can get most of the monitor information by using the GetMonitorInfo function in the Win32 API:

BOOL GetMonitorInfo( __in HMONITOR hMonitor, __out LPMONITORINFO lpmi ); 

This fills the MONITORINFO or MONITORINFOEX structure, from which you can extract information about the currently attached monitor.

Title: Getting power consumption for each process Id: 11949778, Count: 181 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-08-14 09:51:02.0 Body:

Possible Duplicate:
Get Battery Usage By Process

Is there any way to find out how much power is consumed by each process and display it in a list using C#?

Popularity: 6.0 Answer #11949919, count #1, created: 2012-08-14 10:00:35.0

This is a rather difficult task to do, there is generally no standard hardware support for it in a normal computer.

If you are just after finding out how much power your PC is consuming I suggest using a wall plug power monitor: http://www.amazon.com/P3-International-P4400-Electricity-Monitor/dp/B00009MDBU

Another tip could be that if you are using a UPS, you can sometimes get the power consumption using a vendor specific API.

Here is also a thread on msdn discussing how to get the general power consumption through C#, even this is difficult and there are no clear answers given. http://social.msdn.microsoft.com/Forums/eu/csharplanguage/thread/1eb2fc16-ceca-4984-acf8-cf81185d528f

Answer #11949922, count #2, created: 2012-08-14 10:00:58.0

I've encountered various models but I'm not sure what you mean by power.

Which of the following do you want to consider?

  • CPU
  • Memory
  • Disk IO

For charging processes on mainframes for example, they use pure CPU power (MIPS)

For charging on large databases, a function of CPU & disk IO is used (derived from the execution plan)

Title: android monitoring apps Id: 11977497, Count: 182 Tags: Answers: 3 AcceptedAnswer: 12163379 Created: 2012-08-15 21:19:14.0 Body:

I would like to create an Android application with real-time monitoring functions. One monitoring function is to audit the audio flow. The other function is to interact with a peripheral sensor. These monitoring functions can be triggered by others. Besides, in order to save power consumption, the audio function will be running in a polling mode, i.e. sleep for a certain amount of time and wake for a certain amount of time.

I am considering how to design the Android application.

  • Whether to design the audio function as a Service or an Activity? The problem is if it is designed as an Activity, the audio function will be off if screen turns off after a period of time.

  • How to design the polling function? Use an AlarmManager or a inner-thread with Timer?

My goal is to save the power consumption as much as possible. Thanks.

Popularity: 19.0 Answer #12163379, count #1, created: 2012-08-28 16:13:20.0

I would recommend the following:

a) Use a Service. An Activity is a short-lived entity (it works only while it's on the screen).

b) Make the service foreground (read this: http://developer.android.com/reference/android/app/Service.html#startForeground(int, android.app.Notification)). This will decrease the chance that the system kills your service.

c) In the service, start a thread and do everything you need in the thread.

d) If you want to execute periodically, just do Thread.sleep() in the thread (when a thread sleeps it doesn't consume CPU cycles).

I believe c) and d) are preferable to AlarmManager. Here is a piece from the documentation (http://developer.android.com/reference/android/app/AlarmManager.html): "Note: The Alarm Manager is intended for cases where you want to have your application code run at a specific time, even if your application is not currently running. For normal timing operations (ticks, timeouts, etc) it is easier and much more efficient to use Handler."

Since your application is running anyway, it is better to have a permanently running thread and execute the work on it. Generally speaking, Handler, HandlerThread and MessageQueue are just convenience classes for more complex message handling and scheduling. Your case looks quite simple, so a plain Thread should be enough.
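
A minimal sketch of points a) to d), assuming a hypothetical AudioMonitorService, MainActivity and doOneSamplingPass() that are illustrative names rather than anything from the question:

public class AudioMonitorService extends Service {
    private volatile boolean running;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // b) become a foreground service so the system is less likely to kill us
        Notification note = new Notification(R.drawable.ic_launcher, "Monitoring",
                System.currentTimeMillis());
        note.setLatestEventInfo(this, "Monitor", "Audio monitoring active",
                PendingIntent.getActivity(this, 0, new Intent(this, MainActivity.class), 0));
        startForeground(1, note);

        // c) do the work on a background thread, not on the main thread
        running = true;
        new Thread(new Runnable() {
            public void run() {
                while (running) {
                    doOneSamplingPass();          // wake, sample the audio, store the result
                    try {
                        Thread.sleep(60 * 1000);  // d) a sleeping thread uses no CPU cycles
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }).start();
        return START_STICKY;
    }

    @Override
    public void onDestroy() { running = false; }

    @Override
    public IBinder onBind(Intent intent) { return null; }

    private void doOneSamplingPass() { /* record a short audio window here */ }
}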

Answer #12379918, count #2, created: 2012-09-12 00:59:07.0

Concurring with Victor, you definitely want to use a Service, and pin it into memory by calling startForeground()

However, I suggest you look into using the built-in system Handler: place your functionality in a Runnable and call mHandler.postDelayed(myRunnable, <some point in future>); this will allow the Android framework to make the most of its power management.
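
A minimal sketch of that pattern, assuming an illustrative pollAudio() helper and interval that are not part of the answer (these members would live in your Service):

private static final long INTERVAL_MS = 60 * 1000;   // illustrative polling interval
private final Handler mHandler = new Handler();

private final Runnable mPollTask = new Runnable() {
    @Override
    public void run() {
        pollAudio();                                   // one short sampling pass
        mHandler.postDelayed(mPollTask, INTERVAL_MS);  // reschedule the next pass
    }
};

void startPolling() { mHandler.post(mPollTask); }
void stopPolling()  { mHandler.removeCallbacks(mPollTask); }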

Answer #12406543, count #3, created: 2012-09-13 12:41:45.0

That's a service.

And you may want some extra robustness: the service can be killed and NOT restarted later, even being a foreground service. That will stop your monitoring.

Start your service from the UI. If you want the service to survive device reboot, also start it from a BroadcastReceiver for android.intent.action.BOOT_COMPLETED.
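
A minimal sketch of such a boot receiver, assuming an illustrative MonitoringService class that is not named in the question:

// Hypothetical receiver that restarts the monitoring service after a reboot.
// Manifest: a <receiver> with the android.intent.action.BOOT_COMPLETED action
// plus the RECEIVE_BOOT_COMPLETED permission.
public class BootReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (Intent.ACTION_BOOT_COMPLETED.equals(intent.getAction())) {
            context.startService(new Intent(context, MonitoringService.class));
        }
    }
}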

Create a thread in the service as described in other answers here.

Additionally, use Alarm Manager to periodically start your service again. Multiple startService() calls are OK. If already running, the service will keep running. But if it's been forgotten by the system, say, after a series of low resource conditions, it will be restarted now.

Schedule those alarms responsibly: to be a good citizen, set the absolutely minimal frequency. After all, Android had some good reasons to kill the service.

With some services, even more steps may be needed, but in this case this approach seems to be sufficient.

Title: What's the power consumption for Activity and Service respectively? Id: 11979975, Count: 183 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-08-16 02:35:58.0 Body:

What is the power consumption of an Activity versus a Service if all other conditions are the same, i.e. the Activity and the Service are performing exactly the same tasks? How can I optimize the power consumption of an Android application?

Popularity: 4.0 Answer #11980253, count #1, created: 2012-08-16 03:16:32.0

The differences between an Activity and a Service have nothing to do with power consumption. You should consider them equal from a power standpoint. The important points to grasp about an Activity vs a Service have everything to do with your application design and what the life cycles of your logic are. The actual code you are running will determine power consumption.

Note that some people assume that a Service equates to "long-running" or "always-on." This does not apply to the Android platform. I would recommend keeping your life cycles as short as possible, especially if you're concerned about power.

Title: Storing local Cell-id to lat/long database, to accommodate the same range for the new offline google maps? Id: 11987643, Count: 184 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-08-16 12:57:16.0 Body:

I have read many Q&As about this subject, but none quite ask what I am asking. Is there a way to store the cell-id to latitude/longitude database for the local area only, like the new offline feature of Google Maps? Could you get the localized information and store it so that it covers the same ground as the map? I am looking to make an Android app which works offline and is power efficient, and this is the best option I have thought of. I have checked OpenCellID and the KML file it provides; would that be enough to find the location without an internet connection or GPS?

Popularity: 4.0 Answer #18169067, count #1, created: 2013-08-11 05:30:34.0

You could explore the offline database option provided by LocationAPI.org to use for your app. Storing 36 million+ cells offline would be a task though!

Title: Does usage of black backgrounds on webpages really helps in saving energy? Id: 12037649, Count: 185 Tags: Answers: 1 AcceptedAnswer: 12037680 Created: 2012-08-20 12:28:52.0 Body:

Does the use of a black background, as Blackle proclaims, really save energy for Google searches?

Popularity: 0.0 Answer #12037680, count #1, created: 2012-08-20 12:30:24.0

Only on CRTs. LCDs have a backlight which is always on, even if black is being shown.

Blackle themselves show this table:

Variation in Monitor On Power for selected Monitors (the conclusion is that for LCDs it is not important, while for CRTs it makes a difference of up to 188% more power usage for white rather than black, with a minimum difference of 118% more).

Title: What is the power efficient way to keep offline data in sync with the server without GCM? Id: 12120629, Count: 186 Tags: Answers: 3 AcceptedAnswer: 12120688 Created: 2012-08-25 09:02:46.0 Body:

I'm building an Android client for an Internet discussion board: the app downloads the discussions from the server and displays them using the native Android UI. It was quite easy to build the basics such as getting and displaying the content, and posting the replies back to the server.

Now I want to bring it to the next level: the app should store all the data locally on the device and sync it with the server periodically, getting the recent changes and updating the local DB. I don't want it to check for the changes on demand; the periodic updates are better because this allows some nice features like subscribing to the updates.

Unfortunately the server is not GCM-compliant (and it will never be), it is a good old simple web server so I have to implement the sync myself.

I've found a comment on another question saying that a timer-based check is a bad idea because the device will have to wake up and connect to the Internet. It would be much better to catch the moment when the device begins its own data sync, but is there a way to handle this without a periodic check?

I've looked over many discussions on this issue; most of them discuss the ContentProviders, protocols, services like GCM/C2DM and so on. I've found nothing about the power efficiency.

So how to do the sync properly so my app wouldn't drain the battery?

Popularity: 14.0 Answer #12120688, count #1, created: 2012-08-25 09:13:08.0

It would be better if you used GCM: the server can push updates when they are available, which is more power efficient than polling because the network is only used when updates actually exist. That is far better than timed polling, which wakes the phone just to check for updates.

Important: C2DM has been officially deprecated as of June 26, 2012. This means that C2DM has stopped accepting new users and quota requests. No new features will be added to C2DM. However, apps using C2DM will continue to work. Existing C2DM developers are encouraged to migrate to the new version of C2DM, called Google Cloud Messaging for Android (GCM). See the C2DM-to-GCM Migration document for more information. Developers must use GCM for new development.

But as you can't use GCM, you will have to go with polling. You can still do it in a power-efficient way by using AlarmManager with inexact repeating.

I think that is the most power-efficient way to poll periodically.

Here is some sample code:

public class MyScheduleReceiver extends BroadcastReceiver {

    // Restart service every 30 sec
    private static final long REPEAT_TIME = 1000 * 30;

    @Override
    public void onReceive(Context context, Intent intent) {
        AlarmManager service = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent i = new Intent(context, MyStartServiceReceiver.class);
        PendingIntent pending = PendingIntent.getBroadcast(context, 0, i,
                PendingIntent.FLAG_CANCEL_CURRENT);
        Calendar cal = Calendar.getInstance();
        // Start 30 seconds after boot completed
        cal.add(Calendar.SECOND, 30);
        // Fetch every 30 seconds
        // InexactRepeating allows Android to optimize the energy consumption
        service.setInexactRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(),
                REPEAT_TIME, pending);
        // service.setRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(),
        //         REPEAT_TIME, pending);
    }
}

(There is a more detailed explanation that includes the necessary manifest items.)

Answer #12120942, count #2, created: 2012-08-25 09:53:28.0

You can build a small server-side app that does the periodic polling and uses GCM to notify the Android client of any updates.

Of course you'll have other problems, such as the need to poll for each one of your users.

Answer #12121057, count #3, created: 2012-08-25 10:08:17.0

A good idea would be with GCM, but as you said, that's not possible for you. In such a case, this is what I recommend:

  1. Update over WiFi if possible. This way you're likely to get the data transfer done more quickly, and hence reduce the amount of time required for the device's radios to be active.
  2. Bunch transfers together. Don't transfer, say, one file and then wait for a bit before doing another transfer. Instead, transfer one right after the other. This reduces the amount of time the radios need to be active, and hence conserves battery life.
  3. Update when charging, and update more. If the device is charging, you can keep the network connection alive for longer without killing the battery. So you could sync, say, 3 days of data instead of 24 hours while charging, and save battery later when the user wants the extra data that would not otherwise have been synced. (A small sketch of checking these conditions follows this list.)
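
A minimal sketch of checking those conditions before a sync pass; the helper names and the 72/24-hour window are illustrative assumptions, not something from the answer:

// Hypothetical helpers: sync only on WiFi, and widen the sync window while charging.
static boolean onWifi(Context ctx) {
    ConnectivityManager cm =
            (ConnectivityManager) ctx.getSystemService(Context.CONNECTIVITY_SERVICE);
    NetworkInfo ni = cm.getActiveNetworkInfo();
    return ni != null && ni.isConnected() && ni.getType() == ConnectivityManager.TYPE_WIFI;
}

static boolean isCharging(Context ctx) {
    // ACTION_BATTERY_CHANGED is sticky, so a null receiver just returns the last value.
    Intent batt = ctx.registerReceiver(null, new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
    int status = batt.getIntExtra(BatteryManager.EXTRA_STATUS, -1);
    return status == BatteryManager.BATTERY_STATUS_CHARGING
            || status == BatteryManager.BATTERY_STATUS_FULL;
}

static int syncWindowHours(Context ctx) {
    return isCharging(ctx) ? 72 : 24;   // sync more history while on the charger
}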

You could also watch this session from IO 2012 on efficiency when using networks.

Title: Google Cloud Messaging and less battery drain Id: 12124752, Count: 187 Tags: Answers: 1 AcceptedAnswer: 12124927 Created: 2012-08-25 18:50:50.0 Body:

I've heard that GCM results in lower energy consumption. How exactly does GCM provide better battery life? What is different about GCM?

It is said that the server sends a message to the app when there is something to get, so the app doesn't have to check the server every time.

But doesn't the app still have to check for that message?

Popularity: 11.0 Answer #12124927, count #1, created: 2012-08-25 19:19:12.0

The way it works is that without GCM, your app would either have to keep a socket open to your server and ping it every 5 minutes or so in order to keep the socket alive, or make an HTTP call to your server every 5 minutes or so to see if there's anything new to fetch.

With GCM, there's one unified process that's already running to get messages. Now your app just subscribes with a broadcast listener; if that GCM process ever has a message for your app, it is broadcast to your app, and at that stage you can do whatever you want.

This conserves battery because you are piggybacking on the GCM service that already runs on the OS instead of maintaining your own connection, thus using less battery.
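
A rough sketch of the piggyback idea; the receiver class name and the payload key are illustrative, and GCM registration plus the matching manifest entries are omitted here:

// Hypothetical receiver: the platform's shared GCM connection delivers the message,
// so this code only runs when something actually arrives - no polling loop of your own.
public class GcmMessageReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String payload = intent.getStringExtra("message"); // key depends on what your server sends
        // hand the payload to the rest of the app here
    }
}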

Title: minimum power required algorithm Id: 12128056, Count: 188 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-08-26 06:21:40.0 Body:

You are given a logic circuit that can be modeled as a rooted tree-the leaves are the primary inputs, the internal nodes are the gates, and the root is the single output of the circuit. Each gate can be powered by a high or low supply voltage. A gate powered by a lower supply voltage consumes less power but has a weaker output signal. You want to minimize power while ensuring that the circuit is reliable. To ensure reliability, you should not have a gate powered by a low supply voltage drive another gate powered by a low supply voltage. All gates consume 1 nanowatt when connected to the low supply voltage and 2 nanowatts when connected to the high supply voltage.

Design an efficient algorithm that takes as input a logic circuit and selects supply voltages for each gate to minimize power consumption while ensuring reliable operation.

What I think is that it can be solved using a greedy approach or dynamic programming, but I am confused about where to start thinking about this problem. Please help.

Popularity: 7.0 Answer #12128575, count #1, created: 2012-08-26 08:08:24.0

From the requirement "you should not have a gate powered by a low supply voltage drive another gate powered by a low supply voltage", we get that our task is to find a maximum independent set in the tree and power those gates with the low supply voltage (minus the leaves maybe; I don't know if they are considered to be powered or not).

While the problem is NP-hard for general graphs, it can be solved quickly and efficiently for trees. You can read this simple 3-page article for the details.

Answer #12130825, count #2, created: 2012-08-26 14:15:39.0

You need to find a maximum independent set of the tree, where the nodes that belong to the independent set get the low supply voltage. Besides dynamic programming, there is a very simple linear-time greedy algorithm (see the sketch after this list):

  1. Choose all the leaves (the gates which are not driven by other gates) as low voltage.
  2. Delete all the leaves and their direct parents.
  3. Now some internal nodes become the new leaves. Repeat from step 1 until all the nodes are processed.
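
A minimal sketch of that greedy pass written as a post-order traversal (a node goes low exactly when none of its children are low), assuming the circuit is given as a parent array; the class and method names are illustrative:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: assign low/high supply voltages on a rooted tree so that
// no low-voltage gate drives another low-voltage gate, maximizing the low set.
class VoltageAssigner {

    // parent[v] is the parent of gate v, or -1 for the root (the circuit output).
    static boolean[] assignLow(int[] parent) {
        int n = parent.length;
        List<List<Integer>> children = new ArrayList<List<Integer>>();
        for (int i = 0; i < n; i++) children.add(new ArrayList<Integer>());
        int root = 0;
        for (int v = 0; v < n; v++) {
            if (parent[v] < 0) root = v; else children.get(parent[v]).add(v);
        }
        boolean[] low = new boolean[n];
        mark(root, children, low);
        return low;   // low[v] == true means gate v gets the 1 nW supply
    }

    // Post-order: leaves become low; the parent of any low child must stay high.
    private static void mark(int v, List<List<Integer>> children, boolean[] low) {
        boolean hasLowChild = false;
        for (int c : children.get(v)) {
            mark(c, children, low);
            hasLowChild |= low[c];
        }
        low[v] = !hasLowChild;
    }
}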
Title: Power efficient video streaming from an Android device Id: 12165519, Count: 189 Tags: Answers: 1 AcceptedAnswer: 12166434 Created: 2012-08-28 18:45:22.0 Body:

I'm doing some experiments with video streaming from the front camera of an Android device to a local server. Currently I plan to use WiFi. I may move to Bluetooth 4.0 in the future.

I'm looking for insights, experience and DOs and DON'Ts and other ideas that I should consider in relation to protocol options (TCP, UDP, ...? ) and video codec. The image quality should be good enough to run computer vision algorithms such as face and object detection, recognition and tracking on the server side. The biggest concern is power. I want to make sure that the streaming is as power efficient as possible. I understand more power efficiency means a lower frame rate.

Also, I need a way to just send the video frames without displaying them directly on the screen.

Thanks.

Popularity: 6.0 Answer #12166434, count #1, created: 2012-08-28 19:44:30.0

You didn't mention whether you will be doing encoding or decoding on the device.

Some tips: UDP will be less power hungry in general, especially under deteriorating network conditions. See http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.134.5517&rep=rep1&type=pdf and check Google for more papers on this.

In terms of codecs, in general you can say the order is H.264 > MPEG4 > H.263 in power needed for both encoding and decoding.

The higher the bitrate, the more power is needed for decoding, but the codec difference is bigger than the bitrate difference. I say this because to get the same quality as an H.264 stream with H.263 you need a higher bitrate, yet H.263 at that bitrate should still consume less power than H.264 at the lower bitrate. So do not apply this rule across codecs; within the codec you have chosen, just use the lowest bitrate/framerate you can.

In encoding, though, very low bitrates can make the encoder work harder and so increase power consumption. Encoding bitrates should be low, but not so low that the encoder is stretched. This means choosing a reasonable bitrate which does not produce a continuously blocky stream but gives a decent output.

Within each codec, if you can control the encoding then you can also control the decoding power. The following applies to both: deblocking and B pictures add to power requirements. Keeping to lower profiles [Baseline for H.264, Simple Profile for MPEG4 and Baseline for H.263] results in lower power requirements in both encoding and decoding. In MPEG4, switch off 4MV support if you can; that makes streams even simpler to decode. Remember that each of these also has a quality impact, so you have to find what quality is acceptable.

Also unless you can really measure the power consumption I am not sure you need very fine tweaking of the toolsets. Just sticking to the lower profiles should suffice.

The worse (noisier) the video is at capture, the more power is needed during encoding. So brightly lit videos need less effort to encode, while low-light videos need more power.

There is no need to send the video to a screen. You receive video over a socket and do whatever you want with that data; that is up to you. You do not have to decode and display it.

EDIT: Adding a few more things I could think of.

In general the choice of codec and its profile will be the biggest thing affecting a video encoding/decoding system's power consumption.

The biggest difference may come from the device configuration. If you have hardware accelerators for a particular codec in the device it may be cheaper to use those than software codec for another one. So though H.264 may require more power than MPEG4 when both are in software, if the device has H.264 in hardware then it may be cheaper than MPEG4 in software. So check you device hardware capability.

Also video resolution matters. Smaller videos are cheaper to encode. You can clock your device at lower speeds when running smaller resolutions.
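
If the capture side uses the platform's MediaRecorder, a rough sketch of settings biased towards low power might look like the following; all the concrete numbers and the output path are illustrative assumptions, not recommendations from this answer:

// Hypothetical low-power capture setup: small resolution, modest bitrate and frame rate.
// Needs the CAMERA permission and, in practice, a preview surface.
MediaRecorder startLowPowerCapture() throws IOException {
    MediaRecorder recorder = new MediaRecorder();
    recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264); // prefer what the hardware accelerates
    recorder.setVideoSize(320, 240);            // smaller frames are cheaper to encode
    recorder.setVideoFrameRate(10);             // a low frame rate saves power
    recorder.setVideoEncodingBitRate(300000);   // reasonable, not starved
    recorder.setOutputFile("/sdcard/capture.mp4");
    recorder.prepare();
    recorder.start();
    return recorder;
}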

Title: What is the difference between androidpn connection to server and normal socket connection? Id: 12175067, Count: 190 Tags: Answers: 1 AcceptedAnswer: 12175436 Created: 2012-08-29 09:45:33.0 Body:

As far as I know, a long-lived connection consumes power on a mobile device, so does androidpn have the same power consumption as a normal connection? In other words, does androidpn push only save data traffic rather than power?

Popularity: 3.0 Answer #12175436, count #1, created: 2012-08-29 10:03:18.0

Not all implementations that provide a service based on XMPP for push notifications have the same battery drain on mobile.

This answer gives an estimate of how much battery a persistent TCP connection can consume: Does Android support near real time push notification

For specific details you would have to review the androidpn client code.

regards

Title: How to reduce the Wi-Fi power consumption for iOS devices Id: 12296082, Count: 191 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-09-06 08:34:57.0 Body:

The WiFi router is in the same room, so there is no need for the iOS device to transmit at 100mW; that really "kills" the battery. 4mW-10mW is really enough, and it would help the device "be online" much more of the time. Is there any "tweak" or software that can set the WiFi transmit power?

Popularity: 3.0 Answer #12296134, count #1, created: 2012-09-06 08:38:47.0

Most likely, yes.

But, it will definitely be counted as "homebrew" and not supported by Apple.

Title: OpenCL for GPU vs. FPGA Id: 12332609, Count: 192 Tags: Answers: 2 AcceptedAnswer: 12353804 Created: 2012-09-08 16:45:14.0 Body:

I recently read about OpenCL/CUDA for FPGAs vs. GPUs. As I understood it, the FPGA wins on the power criterion. The explanation I found in some article is:

Reconfigurable devices can have much lower power consumption from peak values since only configured portions of the chip are active

Based on the above I have a question: does it mean that if some CU [Compute Unit] doesn't execute any work-item, it still consumes power? (And if yes, what does it consume power for?)

Popularity: 15.0 Answer #12334190, count #1, created: 2012-09-08 20:27:25.0

As always, it depends on the workload. For workloads that are well-supported by native GPU hardware (e.g. floating point, texture filtering), I doubt an FPGA can compete. Anecdotally, I've heard about image processing workloads where FPGAs are competitive or better. That makes sense, since GPUs are not optimized to operate on small integers. (For that reason, GPUs often are uncompetitive with CPUs running SSE2-optimized image processing code.)

As for power consumption, for GPUs, suitable workloads generally keep all the execution units busy, so it's a bit of an all-or-nothing proposition.

Answer #12353804, count #2, created: 2012-09-10 14:29:37.0

Yes, idle circuitry still consumes power. It doesn't consume as much, but it still consumes some. The reason for this is down to how transistors work, and how CMOS logic gates consume power.

Classically, CMOS logic (the type on all modern chips) only consumes power when it switches state. This made it very low power compared to the technologies that came before it, which consumed power all the time. Even so, every time a clock edge occurs, some logic changes state even if there's no work to do. The higher the clock rate, the more power used. GPUs tend to have high clock rates so they can do lots of work; FPGAs tend to have low clock rates. That's the first effect, but it can be mitigated by not clocking circuits that have no work to do (called 'clock gating').

As transistors became smaller and smaller, the amount of power used when switching also became smaller, but other effects (known as leakage) became more significant. We are now at a point where leakage power is very significant, and it is multiplied by the number of gates in a design. Complex designs have high leakage power; simple designs have low leakage power (in very basic terms). This is the second effect.

Hence, for a simple task it may be more power efficient to have a small dedicated low speed FPGA rather than a large complex, but high speed / general purpose CPU/GPU.

Title: Saving battery in ios for required location updates in background Id: 12338615, Count: 193 Tags: Answers: null AcceptedAnswer: null Created: 2012-09-09 11:16:43.0 Body:

I have an app that needs to constantly report the location (also in the background) to a server. This works fine; the app continues normally in the background.

Now, I do not need a very precise update in every situation. However, the significant location change service (as recommended by Apple) is not precise enough at all. Due to my requirements the battery consumption is quite high. Currently I use

locationManager.desiredAccuracy = kCLLocationAccuracyBest; 

Does it make any difference in power consumption if I switch between, say, kCLLocationAccuracyBest and kCLLocationAccuracyHundredMeters? kCLLocationAccuracyBest is only required for certain tasks the app performs; for the rest, an accuracy of kCLLocationAccuracyHundredMeters would do the job.

Popularity: 0.0 Title: How to implement in code to save battery life but keep iphone application live? Id: 12352325, Count: 194 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-09-10 13:02:14.0 Body:

I have an iPhone application which does audio recording. What I want to achieve is to keep the application alive to do continuous recording, but at the same time save battery life, i.e. reduce power consumption.

Just a note: I have used the following code to keep the application alive.

[[UIApplication sharedApplication] setIdleTimerDisabled:YES]; 
Popularity: 3.0 Answer #12352462, count #1, created: 2012-09-10 13:11:21.0

I don't think there is any way to instantly 'save battery life'. The only thing you can probably do is minimize read/write operations, internet access, etc. You have to do this manually in your code.

Title: about CONFIG_NO_HZ in kernel Id: 12373004, Count: 195 Tags: Answers: 1 AcceptedAnswer: 12374753 Created: 2012-09-11 15:21:01.0 Body:

So if CONFIG_NO_HZ is set, I believe it will make a tickless kernel. But I believe this just means that when the system is idle, it might become tickless in order to save energy. When it's doing work, it is still a ticking kernel, right? Thanks.

Popularity: 14.0 Answer #12374753, count #1, created: 2012-09-11 17:17:33.0

Essentially, yes.

There are ongoing projects to make the periodic tick go away also when not idle, but that's a lot of work with many changes, and it's unclear whether it will ever be completed.

Title: efficient gps service in android Id: 12510902, Count: 196 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-09-20 10:38:27.0 Body:

I searched here for an idea for building an efficient location search that runs in a background service. I found this link Energy efficient GPS tracking but it's not what I'm looking for. I'm looking for a simple solution that checks the distance in meters, knows whether the user is close to or far from the target location, and saves battery life accordingly. I don't want the time dimension to have any effect in this algorithm, only the distance.

Note: I have all the right permissions. here's my code (runs in the service):

public class LocationService extends Service{ private static final int SLEEP = 250; private static final int VIBRATE = 500; private double targetLat; private double targetLng; private LocationManager manager; private boolean alarm; private String ring; private Uri soundUri; private long [] pattern; private float minDistance; @Override public IBinder onBind(Intent intent) { return null; } @SuppressWarnings("static-access") @Override public int onStartCommand(Intent intent, int flags, int startId) { SharedPreferences settings = PreferenceManager.getDefaultSharedPreferences(this); targetLat = Double.parseDouble(settings.getString("lat", "0")); targetLng = Double.parseDouble(settings.getString("lng", "0")); targetLat = targetLat / 1E6; targetLng = targetLng / 1E6; alarm = settings.getBoolean("alarm", true); ring = settings.getString("ringDet", ""); if(ring != ""){ soundUri = Uri.parse(ring); } manager = (LocationManager)getSystemService(Context.LOCATION_SERVICE); manager.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 0, minDistance, location); manager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, minDistance, location); pattern = new long[200]; pattern[0] = 0; for(int i = 1; i < pattern.length; i++){ if(i % 2 != 0){ pattern[i] = SLEEP; } else{ pattern[i] = VIBRATE; } } //return super.onStartCommand(intent, flags, startId); return super.START_STICKY; } LocationListener location = new LocationListener() { public void onStatusChanged(String provider, int status, Bundle extras) {} public void onProviderEnabled(String provider) {} public void onProviderDisabled(String provider) {} @SuppressWarnings("deprecation") public void onLocationChanged(Location location) { float [] results = new float [3]; Location.distanceBetween(targetLat, targetLng, location.getLatitude(), location.getLongitude(), results); float distance = results[0]; if(distance > 20000){ minDistance = 15000; toaster(">20000"); manager.removeUpdates(this); } else if(distance > 10000){ minDistance = 5000; toaster(">10000"); manager.removeUpdates(this); } else if(distance > 5000){ minDistance = 2500; toaster(">5000"); manager.removeUpdates(this); } else if(distance > 2500){ minDistance = 1000; toaster(">2500"); manager.removeUpdates(this); } else if(distance > 1000){ minDistance = 0; toaster(">1000"); } if(distance < 800 && alarm){ Notification notification = new Notification(R.drawable.ic_launcher, "WakeApp", System.currentTimeMillis()); Intent notificationIntent = new Intent(getApplicationContext(),MainActivity.class); PendingIntent contentIntent = PendingIntent.getActivity(getApplicationContext(), 0, notificationIntent, 0); notification.setLatestEventInfo(getApplicationContext(), getApplicationContext().getResources().getString(R.string.notification_message), "", contentIntent); NotificationManager notificationManager = (NotificationManager)getSystemService(Context.NOTIFICATION_SERVICE); //notification.defaults |= Notification.DEFAULT_VIBRATE; notification.defaults |= Notification.DEFAULT_LIGHTS; notification.icon = R.drawable.ic_launcher; if(soundUri != null){ notification.sound = soundUri; } else if(soundUri == null){ soundUri = RingtoneManager.getActualDefaultRingtoneUri(getApplicationContext(), RingtoneManager.TYPE_ALARM); notification.sound = soundUri; } notification.vibrate = pattern; alarm = false; notificationManager.notify(1, notification); manager.removeUpdates(this); stopSelf(); } } }; private void toaster(String text){ Toast.makeText(this, text, Toast.LENGTH_SHORT).show(); } 
Popularity: 5.0 Answer #12511170, count #1, created: 2012-09-20 10:55:04.0

You can use the following method to calculate the distance between two GPS positions, in meters.

public static float distFrom(float lat1, float lng1, float lat2, float lng2) {
    double earthRadius = 3958.75;   // miles
    double dLat = Math.toRadians(lat2 - lat1);
    double dLng = Math.toRadians(lng2 - lng1);
    double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
             + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
             * Math.sin(dLng / 2) * Math.sin(dLng / 2);
    double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    double dist = earthRadius * c;  // haversine distance in miles
    int meterConversion = 1609;     // miles -> meters
    return new Float(dist * meterConversion).floatValue();
}

Original source is my answer.

For the GPS coordinates I have created a small library class like the one below:

public class LocListener implements LocationListener {
    private static double lat = 0.0;
    private static double lon = 0.0;
    private static double alt = 0.0;
    private static double speed = 0.0;
    private static long dateTime;

    public static double getLat() { return lat; }
    public static double getLon() { return lon; }
    public static double getAlt() { return alt; }
    public static double getSpeed() { return speed; }

    // Added By Kalpen Vaghela on 17 09 2012
    public static long getDateTime() { return dateTime; }

    @Override
    public void onLocationChanged(Location location) {
        lat = location.getLatitude();
        lon = location.getLongitude();
        alt = location.getAltitude();
        speed = location.getSpeed();
        dateTime = location.getTime();
    }

    @Override
    public void onProviderDisabled(String provider) {}

    @Override
    public void onProviderEnabled(String provider) {}

    @Override
    public void onStatusChanged(String provider, int status, Bundle extras) {}
}
Title: How to measure the energy consumed by my Android app in a certain moment Id: 12514078, Count: 197 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-09-20 13:48:39.0 Body:

I have been trying to figure this out for a couple of months because I'm working on my thesis, but unfortunately I have not been able to. What I'm doing is a performance analysis of the energy consumed by an application running an algorithm locally on the phone versus the same app running the algorithm in the cloud and getting the response back to the phone. What I want is an accurate way to obtain the energy consumed by this algorithm from the moment it starts executing until it gets the solution (locally and remotely).

Is there any way to do this in Java using the Android API? I would like to write my own code to take the measurements. Anything you think might be helpful, please let me know... I appreciate your time and patience,

Alberto.

Popularity: 11.0 Answer #12514601, count #1, created: 2012-09-20 14:17:16.0

I am not sure if such an API is available. Here is a link with a similar discussion. It is hard to measure this precisely, because energy/battery consumption depends on many other factors, including the efficiency of the compiler.

Title: Tools for profiling power consumption of Apps in Android? Id: 12557273, Count: 198 Tags: Answers: null AcceptedAnswer: null Created: 2012-09-23 23:49:21.0 Body:

I have an app and I want to know how much power the different components within the app consume. I want it to provide a resolution down to the thread level (if possible). I have read about EProf and contacted the authors; they are under a patenting process right now, so it's not an option. I was wondering if there exist any other free tools that provide fine-grained power profiling.

I was also looking at another tool called PowerTutor, but it only provides the power consumed by major system components.

Any and all help will be greatly appreciated.

Thanks!

Popularity: 2.0 Title: Android App power consumption Id: 12581333, Count: 199 Tags: Answers: 4 AcceptedAnswer: null Created: 2012-09-25 10:44:53.0 Body:

How can I check the power consumption of each application in Android?
At least the power consumption should be relative when compared across different applications, and these applications might be using any of the services like WiFi, GPS, LCD, wakelocks, etc.

Are there any APIs in Android for measuring the power consumption of applications that use the above resources?

Popularity: 26.0 Answer #12581516, count #1, created: 2012-09-25 10:55:49.0

One easy way to see which app is consuming how much battery, from ICS onwards, is to check Settings -> Battery. It shows the % consumed by each app. Another way is to physically monitor the battery drop while using the app intensively; e.g. the battery may be at 80% before you start using the app, then you use it for 30 minutes and check the battery % again.

Answer #12582179, count #2, created: 2012-09-25 11:39:57.0

There is a research paper called “Accurate Online Power Estimation and Automatic Battery Behavior Based Power Model Generation for Smartphones”. For this paper the researchers developed a tool called PowerTutor, the sources of which you can find here. It should be mentioned that your device has to be rooted to use this application.

Answer #13223096, count #3, created: 2012-11-04 21:26:40.0

Check out PowerTutor. In my experience it is the best application out there for measuring per-application power consumption. It is not 100% accurate, but it is good enough for estimates.

Answer #16111203, count #4, created: 2013-04-19 18:24:22.0

(disclaimer: I co-founded the company which built the below mentioned product)

Try Little Eye from Little Eye Labs, which does exactly this. You can track an individual app's power and get the breakdown by CPU/display & WiFi (the upcoming version will support GPS and 3G). It also goes beyond power and tracks data, memory and CPU consumption of an individual app. Note it's a desktop tool that you need to download and install from here.

Hope this helps.

Title: Read power input from usb Id: 12584365, Count: 200 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-09-25 13:49:56.0 Body:

Is there any way to read the power input from a USB connection in Android?

For example, plug a micro USB lead into the phone from either a computer USB port, a mains charger, or something similar, and read the voltage and current being supplied?

I have no idea where to start looking for this information; when searching I can only find battery widgets or power consumption apps. That is not what I'm looking for: I want to be able to read how much power is being delivered over the connected USB interface.

Many thanks.

Popularity: 8.0 Answer #12584522, count #1, created: 2012-09-25 13:57:52.0

Probably. You should start with BatteryManager; refer to this question for some examples of usage. I believe the data provided by BatteryManager should be enough for an approximate calculation of what you want (how much power is being sent via the USB). Having the device id and a table of the devices' default battery capacities, or/and user input about capacity, plus the % difference over a time interval and the voltage, would probably give enough information for an approximate calculation of the consumption.

Another (dirty) way might be the following: dig through the Android sources to see whether they contain any info about total capacity / consumption you could obtain, e.g. with Java reflection. I think BatteryManager and BatteryStatsImpl could be appropriate places to start the analysis from.
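
A minimal sketch of reading what the sticky battery broadcast exposes (plug type, level, voltage); the helper name is an illustrative assumption, and note this only supports an approximation - instantaneous USB input current is not available here:

// Hypothetical helper: read the sticky battery broadcast. EXTRA_VOLTAGE is in millivolts.
static void dumpBatteryInfo(Context ctx) {
    Intent batt = ctx.registerReceiver(null, new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
    int plugged = batt.getIntExtra(BatteryManager.EXTRA_PLUGGED, -1); // BATTERY_PLUGGED_USB or _AC
    int level   = batt.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
    int scale   = batt.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
    int voltage = batt.getIntExtra(BatteryManager.EXTRA_VOLTAGE, -1);
    Log.d("Battery", "plugged=" + plugged + " level=" + level + "/" + scale
            + " voltage=" + voltage + "mV");
}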

Title: Best architecture for a long running service on Android Id: 12692636, Count: 201 Tags: Answers: 1 AcceptedAnswer: 12693132 Created: 2012-10-02 14:48:20.0 Body:

I would appreciate some guidance on how to deal with the OS killing a long-running service.

Business scenario:

The application records a BTT track which may last for several hours. It can also show the track on a map together with relevant statistics.

The application user interface enables the user to start/stop track recording and view the real time track on a map.

After starting track recording, the user can exit the application and turn the screen off (to save power); only a service remains running to keep recording updates to the database (a notification is shown), until the user opens the activity again and asks to stop recording, which results in service termination.

Issue:

After a variable time, which ranges from 40 minutes to an hour and a half, the recording service gets killed without any warning. As BTT outings may take several hours, this results in an incomplete track recording.

Some additional information:

Service is started with START_STICKY and acquires a PARTIAL_WAKE_LOCK, and runs in the same process as the main activity.

New locations are acquired (and recorded) at a user-defined rate of 1 second to several minutes. I know from the Android documentation that this is the expected OS behavior for long-running services.

Question:

What is the best architecture design approach to have a well behaved application that could satisfy the business scenario requirements?

I can think of a couple of options (and I don't like any of them), but I would like guidance from someone who has already faced and solved a similar issue:

  • Use a broadcast receiver (ideally connected to the Location Manager, if that's possible) so that the service only runs when a new location is acquired?
  • Do not let the user leave the main activity (resulting in a poor user experience)?
  • Have an alarm broadcast receiver restart the service if needed?

Thanks to all who could share some wisdom on this subject.

Popularity: 17.0 Answer #12693132, count #1, created: 2012-10-02 15:13:19.0

I have an app that does a very similar thing. I make sure the service keeps running by making it a foreground task. When I am ready to start running, I call this function, which also sets up a notification:

void fg() {
    Notification notification = new Notification(R.drawable.logstatus, "Logging On",
            System.currentTimeMillis());
    Intent notificationIntent = new Intent(this, LoggerActivity.class);
    PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, notificationIntent, 0);
    notification.setLatestEventInfo(this, "Logger", "Logger Running", pendingIntent);
    startForeground(1, notification);
}

and then to leave foreground mode when logging is finished:

stopForeground(true); 
Title: Android - Incorrect Information about Battery Usage Id: 12720351, Count: 202 Tags: Answers: null AcceptedAnswer: null Created: 2012-10-04 04:39:21.0 Body:

I'm using an LG Optimus 4X with original Android v4.0.3 (Build IML74K). I have a problem with how Android displays information about battery use. Settings\Power saver\Battery use reads:

Android System: > 90%
Maps: a few percent
Screen: 0% (not displayed, which is odd)
Wifi: 0% (not displayed, although it's been connected to an AP for 4 hours)

(After five hours of heavy use)

I believe something is wrong with the way my phone calculates the battery usage of running applications. I use some apps a lot, such as Google Reader, Google+, Chrome, Gmail, Flipboard... I spend hours every day in these apps.

The problems are:

1) I can hardly find my most frequently used apps on the battery use list.

2) And when I check battery use of Android System, it reads:

CPU total: less than 5 minutes
CPU foreground: tens of seconds
Keep awake: 10 to 20 minutes
Data sent: a few megabytes
Data received: several hundred kilobytes

(After five hours of heavy use)

After 5 hours of heavy use, more than 90% of the battery was spent on Android System, yet it was only active for less than 20 minutes?

Popularity: 5.0 Title: Android WebView consumes lots of power when the application is running in the background Id: 12764175, Count: 203 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-10-06 21:50:13.0 Body:

I have a WebView inside my android app, and this WebView is running a website with a fair bit of Javascript on it. Users have reported high power consumption when my app is running in the background, and I expect it is due to this javascript. However, I don't want to completely unload or remove the WebView, as this would hurt the time-to-resume.

Is there any way to selectively turn off Javascript and/or disable the WebView completely when the app is in the background (onPause())?

Popularity: 4.0 Answer #17789169, count #1, created: 2013-07-22 13:43:43.0

According to http://www.mcseven.me/2011/12/how-to-kill-an-android-webview/ the only working way is to redirect to an empty page with no JavaScript (and navigate back on resume).

In the case of PhoneGap, the page should contain code to navigate back by itself:

<!DOCTYPE html>
<html>
<head>
    <script type="text/javascript" src="js/phonegap/cordova-2.9.0.js"></script>
    <script type="text/javascript" src="js/jquery/jquery-1.9.1.min.js"></script>
</head>
<body>
    <script type="application/javascript">
        $(document).on('deviceready', function() {
            $(document).on('resume', function() {
                return history.back ? history.back() : history.go(-1);
            });
        });
    </script>
</body>
</html>
Title: Do reading of RSSI cause any air request to BLE device? Id: 12766763, Count: 204 Tags: Answers: 1 AcceptedAnswer: 12771945 Created: 2012-10-07 06:54:26.0 Body:

I'm implementing iOS library that reads heart rate sensor data using Bluetooth 4.0 (AKA Bluetooth Smart AKA BLE).

I noticed that the RSSI value is only updated on demand (readRSSI); otherwise it's always the same.

My question is: does reading the RSSI cause an additional request to the sensor? My concern is the power consumption of the device.

I suppose my question applies to BLE in general; I don't think it's iOS-specific...

Any thoughts are appreciated.

Popularity: 5.0 Answer #12771945, count #1, created: 2012-10-07 19:11:27.0

readRSSI reports the RSSI being averaged over an active connection. So if you have a connection to your sensor, reading RSSI doesn't cause any additional overhead. Even if you aren't exchanging user data, your BT devices are periodically communicating to keep synchronized with an active connection, and RSSI can be measured from this ongoing communication.

If you are tearing down your connection, then yes, you will have to reconnect to actually measure RSSI.

Title: How to check Android app performance in terms of Processor utilization and power consumption Id: 12801161, Count: 205 Tags: Answers: null AcceptedAnswer: null Created: 2012-10-09 13:28:27.0 Body:

I want to know if there exists any process, tool or API using which we can test Android applications in terms of processor utilization, RAM utilization and power consumption.

Popularity: 2.0 Title: Booting an embedded device from a saved snapshot every time? Id: 12860734, Count: 206 Tags: Answers: 0 AcceptedAnswer: null Created: 2012-10-12 14:01:11.0 Body:

I'd like to know if there is any information out there on booting linux from a saved snapshot upon every boot.

I have an embedded system running linux with some basic applications. When there is nothing to do it will enter hibernate mode, suspending to flash storage and resuming from there later. However, my application does not require me to keep the exact state the device was in, and to save on time and power consumption as well as not wearing out the flash I thus want to resume from a fixed image instead of saving the current state.

I suppose one can do a fresh boot each time, but I'm having issues with long booting times and high energy consumption during boot.

I was unable to find anything on the subject by searching so I'd appreciate it if someone could enlighten me or point me in the right direction.

Is it easiest to write a Linux kernel module implementing it myself, or is there some finished code available out there?

Popularity: 6.0 Title: Access to Power consumption Details on Mac OS X Id: 12997971, Count: 207 Tags: Answers: null AcceptedAnswer: null Created: 2012-10-21 12:52:20.0 Body:

Currently I'm searching for a way to access data about the power consumption of devices that run Mac OS X. I'm mostly interested in the power usage in watts. IOPowerSources.h in IOKit provides lots of information, but only for devices powered by batteries or UPS devices. Is there a way to access the power usage of a Mac that's not powered by batteries?

Thank you!

Popularity: 7.0 Title: Setting On windows' High Performance power plan using C++ winAPI Id: 13007925, Count: 208 Tags: Answers: 1 AcceptedAnswer: 13008393 Created: 2012-10-22 08:58:28.0 Body:

I've written code that tries to activate Windows' High Performance power plan through the WinAPI in C++. It seems to work well for all the power plans (on my machine they are called Balanced, Power saver and Dell) except for the one I'm interested in, the High Performance plan! I would like the code to go through all the power plans and, when it finds the High Performance one, just set it active and then quit. I'll put my code underneath in case anybody can help me. Thanks in advance!

#include <windows.h> #include <powrprof.h> #include <iostream> #include "stdio.h" #include <ntstatus.h> #include <string> #pragma comment(lib, "powrprof.lib") using namespace std; int main(int argc, char **argv) { ////////////////// SET ACTIVE HIGH PERFORMANCE PLAN /////////////////// //Variables UCHAR displayBuffer[64] = " "; DWORD displayBufferSize = sizeof(displayBuffer); GUID buffer; DWORD bufferSize = sizeof(buffer); //Go throught the machine's power plans and activate the high performance one for(int index = 0; ; index++) { if (ERROR_SUCCESS == PowerEnumerate(NULL,NULL,&GUID_VIDEO_SUBGROUP,ACCESS_SCHEME,index,(UCHAR*)&buffer,&bufferSize) ) { if (ERROR_SUCCESS == PowerReadFriendlyName(NULL,&buffer,&NO_SUBGROUP_GUID,NULL,displayBuffer,&displayBufferSize) ) { wprintf(L"%s\n", (wchar_t*)displayBuffer); if( 0 == wcscmp ( (wchar_t*)displayBuffer, L"High Performance" ) ) { cout << "High Performance Plan Found!\n"; if (ERROR_SUCCESS == PowerSetActiveScheme(NULL,&buffer) ) { cout << "* Setting Active High Performance Power Plan *"; //std::cin.get(); //pause break; } } } } else break; } return 0; 

}

Popularity: 6.0 Answer #13008393, count #1, created: 2012-10-22 09:26:55.0

This can be done a bit more easily:

PowerSetActiveScheme(0, &GUID_MIN_POWER_SAVINGS); 
Title: How can I write a test application to fully load the CPU? Id: 13034505, Count: 209 Tags: Answers: 3 AcceptedAnswer: 13034612 Created: 2012-10-23 15:54:51.0 Body:

The CPU is designed to drop into low power modes whenever it can to save power and keep cool; I'd like to make a program that prevents that from happening.

I'm working on a few different embedded platforms (Freescale ColdFire 8052, TI Sitara AM3359, probably a few others in the pipeline), so I wanted to make an application that just keeps the CPU fully loaded for benchmarking. I want to write my own since it would be easier to cross-compile than to look for a solution per target.

My initial thought was just:

while(1); 

Question 1:
But am I oversimplifying this? top shows that program taking about 99.4% CPU usage, so I guess it's working, but it doesn't seem like it should be that simple. :) Does anyone know if there should be more to it than that?

Question 2:
If I wanted to expand this to produce different loads (say, 50%, 75%, or whatever), how could I do that? I managed to get 18~20% CPU usage via:

while(1){usleep(1);} 

Is there a more scientific way than just guessing and checking sleep values? I would think these would be different per target anyway.

Popularity: 16.0 Answer #13034612, count #1, created: 2012-10-23 16:00:15.0

while(1); will eat up all your CPU cycles but won't exercise most parts of your CPU (let alone the GPU). Most modern CPUs have the ability to selectively switch off individual execution units if they're not used: the only way to prevent it is:

  1. tell the CPU/SoC driver to disable power saving
  2. exercise all units of your CPU/GPU/chipset and whatnot (this will be a hell of a task to realize, so you're probably better off with (1))
Answer #13034683, count #2, created: 2012-10-23 16:03:50.0

So I'll try to post this as an answer then. If you look at what the specs for usleep are, you'll notice the following line:

The usleep() function will cause the calling thread to be suspended from execution...

This means that the 18~20% CPU usage was actually time spent in context switching. The while(1) in your code will use CPU cycles because it gets scheduled, but it won't use the CPU to its full capability. There are a lot of options out there for C programs that try to use 100% of the CPU. Most of them use multiple threads mixed with math-heavy workloads.

See this thread for a number of examples.

Answer #13034685, count #3, created: 2012-10-23 16:03:55.0

A while(1); loop will most likely be running all the time the operating system isn't doing other things like handling interrupts or running daemons. The problem you have is that top doesn't actually show how much time your program has been actually running, but a rather crude estimate that is used for internal scheduling calculations. On some systems you can end up with over 100% cpu usage just because the math is a little bit off.

When it comes to loading your CPU properly, it depends on what you want to do. Touch every part of the CPU? Maximum power usage? It's not an easy question, especially when you probably don't know what the question actually is.
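
For the "different loads" part of the question, one common approach is a duty-cycle loop: spin for a fraction of each fixed period and sleep for the rest. A rough sketch of the idea, shown in Java purely for illustration (the question targets embedded C, and the period and target values here are arbitrary assumptions):

// Hypothetical duty-cycle load generator: in each PERIOD_MS window, spin for
// target * PERIOD_MS and sleep for the remainder, approximating a chosen load on one core.
public class LoadGenerator {
    static final long PERIOD_MS = 100;
    static volatile double sink;   // keeps the busy work from being optimized away

    public static void main(String[] args) throws InterruptedException {
        double target = 0.75;      // aim for roughly 75% load
        while (true) {
            long busyUntil = System.currentTimeMillis() + (long) (target * PERIOD_MS);
            while (System.currentTimeMillis() < busyUntil) {
                sink += Math.sqrt(sink + 1.0);   // throwaway work
            }
            Thread.sleep((long) ((1.0 - target) * PERIOD_MS));
        }
    }
}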

Title: Service or Asynctask for sampling sensors in Android? Id: 13110862, Count: 210 Tags: Answers: 2 AcceptedAnswer: 13110889 Created: 2012-10-28 16:47:48.0 Body:

I have to acquire data from various sensors like the accelerometer, gyroscope, microphone and GPS. The sensing should not be continuous; rather, single short sampling intervals should be scheduled periodically according to various policies (for example power saving). Each sensor sampling action lasts a few seconds, say 5 seconds. I would implement a "Client" for each sensor, responsible for listening to sensor data when necessary, and a "Controller" that controls the execution of the Clients, but I'm not sure about the way to realize this.

Is it correct to implement a Service for each Client, or would a simple AsyncTask or Handler be better?

Is it better if each Client is responsible for a single sensing action, executed in a single onStartService(), or should onStartService() enable a periodic sampling action? Help would be appreciated.

Popularity: 7.0 Answer #13110889, count #1, created: 2012-10-28 16:50:55.0

This sounds like a task for a Service that is triggered by alarms at (regular) scheduled intervals.

An AsyncTask is usually something that is started after the user has done some interaction and the system is supposed to do a "long running" operation (like network i/o), which could otherwise block the UI.

Note that it is entirely possible to trigger a service in an AsyncTask-like way as well; have a look at IntentService (a small sketch follows below).
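
A minimal sketch of the IntentService route, assuming an illustrative SamplingService name and a fixed 5-second sampling window that are not part of the answer:

// Hypothetical sketch: each alarm/start delivers an Intent, the work runs on a background
// thread, and the service stops itself automatically when onHandleIntent() returns.
public class SamplingService extends IntentService {
    public SamplingService() { super("SamplingService"); }

    @Override
    protected void onHandleIntent(Intent intent) {
        // register your sensor listeners here, sample for ~5 seconds, then unregister
        try {
            Thread.sleep(5000);   // stands in for the sampling window
        } catch (InterruptedException ignored) { }
    }
}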

Answer #13111344, count #2, created: 2012-10-28 17:43:09.0

I suggest that you have a look at my answer regarding a similar question here: Service v/s AsyncTask.

Personally, I would use a simple Handler to post a task to run with a specific time interval.

Example:

private Handler mHandler = new Handler();

private void startTimer(Runnable Task, long delay) {
    mHandler.removeCallbacks(Task);
    mHandler.postDelayed(Task, delay);
}

private void stopTimer(Runnable Task) {
    mHandler.removeCallbacks(Task);
}

private Runnable registerListeners = new Runnable() {
    public void run() {
        startTimer(registerListeners, 10 * 60 * 1000); // register to run again in 10 minutes
        startTimer(unregisterListeners, 5 * 1000);     // to unregister in 5 seconds
        // here register your listeners
    }
};

private Runnable unregisterListeners = new Runnable() {
    public void run() {
        // here unregister your listeners
    }
};

When you want to start the listening process:

// To start your listeners
startTimer(registerListeners, 0);

When you want to stop everything:

// To stop registering/unregistering listeners
stopTimer(registerListeners);

Note: If you are doing long running code in your listeners, then you should have a look to the answer in the link I gave above.

Regards.

Title: UIViewController persistence - save state into memory and write it to the disk when the app is terminated Id: 13140989, Count: 211 Tags: Answers: null AcceptedAnswer: null Created: 2012-10-30 14:20:55.0 Body:

I am using NSCoding to save and restore my view controllers. However I'm saving to disk the navigation stack + view controllers every time a view controller is pushed or popped.

This is not energy efficient, and there's a better way.

Think of NSUserDefaults. It saves the changes somewhere in memory (and if called repeatedly, it just overwrites them), and when the synchronize method is called it writes them to disk. This is cleverly done, and is very energy efficient.

So can I implement something like this: on every call, save the changes somewhere in memory, and when a synchronize/writeToDisk method is called, purge the memory cache and write them to disk? Any ideas will be greatly appreciated!

My idea is to use NSCache, and in application:willResignActive or application:willTerminate to get the object from the cache and write it to disk.

Thanks so much!

Popularity: 5.0 Title: PubNub long polling vs sockets - mobile battery life Id: 13205453, Count: 212 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-11-03 02:38:25.0 Body:

I recently began using PubNub in my iOS app, and am very happy with it. However, I have been looking at the other options available, such as Pusher and Realtime.co, which use Websockets. PubNub, on the other hand, uses long polling. I have done my own little speed comparisons and for my purposes, I find that they are all fast enough.

PubNub offers some nice features like message history and a list of everyone in the channel, so barring everything else I am leaning toward them. My question is: should I be concerned about battery life and heavy usage with a long-polling solution like PubNub? Would a WebSockets solution be significantly more power efficient?

Popularity: 21.0 Answer #13206181, count #1, created: 2012-11-03 05:14:28.0

PubNub on Mobile with Battery Saving

As a preface regarding battery performance and efficiency, PubNub is a service optimized for mobile devices on the go when compared to alternative or self-hosted WebSocket solutions. PubNub offers a catch-up feature on mobile phones that will automatically redeliver messages that were missed, especially for devices that are moving between cell-network towers and changing from 3G/4G to WiFi. WebSockets tend not to be recommended for mobile due to reliability issues in common scenarios, and that is why PubNub will select the best transport for your device automatically, so you don't have to decide what makes the most sense for a phone in transit.

Battery Savings Pattern with PubNub

PubNub has a keep-alive connection that is uncommonly long, set to one hour. A ping is sent every 300 seconds (300,000 ms). This is long enough to provide the best mix of mobile performance and battery saving.

Battery Saving Tips on Mobile

  1. Keep messages as small as possible.
  2. Send fewer messages, less frequently.
  3. Connect to only one channel rather than two or more.

Automatic Transport Detection

PubNub will automatically select the best transport for you when needed, especially on mobile devices. An interesting conversation about WebSockets that I recommend took place at KRTConf in Portland, Oregon, in October 2012: https://speakerdeck.com/3rdeden/realtimeconf-dot-oct-dot-2012

Let me know if this was helpful.

Answer #13384358, count #2, created: 2012-11-14 17:50:55.0

I don't think this is correct. See http://eon.businesswire.com/news/eon/20120927005429/en/Kaazing/HTML/HTML5

I am the one who actually did the testing for Kaazing on comparing WebSocket and regular http-based message transfers. I saw a drastic decrease in battery consumption with WebSocket. Now Kaazing has additional technology above and beyond WebSocket to reduce battery consumption, but even if you don't use Kaazing, you will still see some battery consumption efficiencies with WebSocket. I tried this out myself by running actual tests even for basic WS versus http without any special battery optimization algorithms.

Title: Any tool/software equivalent of Microsoft "JouleMeter" in linux/Ubuntu Id: 13286662, Count: 213 Tags: Answers: null AcceptedAnswer: null Created: 2012-11-08 09:59:36.0 Body:

"JouleMeter" is a tool from Microsoft for measuring power consumption by different processes on a windows machine.

Please tell me if there is any similar tool on Linux for getting information about the energy consumption of different processes and applications on a Linux machine. I am also looking for an open-source solution.

Popularity: 2.0 Title: Android OpenGL ES 1.1 poor performance under ICS - eglSwapBuffers taking 60+ms Id: 13319360, Count: 214 Tags: Answers: null AcceptedAnswer: null Created: 2012-11-10 04:49:43.0 Body:

I am currently working on an OpenGL ES 1.1 game (min SDK 8, target SDK 17) and I am seeing an unusually large delay in eglSwapBuffers on the 4.0.3 tablets I have access to, which does not appear on either newer or older versions of Android. I'm also lucky enough to have access to two Asus Transformer T101s, one of which is running Honeycomb, so I can rule out that this is a limitation of the hardware.

The following table shows a number of devices and their performance.

Model            OS     FPS   Max/avg eglSwapBuffer time (ms)
Acer A500        4.0.3  12.5  70/67
Asus T101        3.1    21.8  30/21
Asus T101        4.0.3  12.2  79/65
HTC One X        4.0.4  35.9  14/5
Nexus 7          4.1.2  35.6  16/5
Samsung GTP1000  2.2    24.5  36/28

You can see that even the ancient single-core GTP1000 gets double the framerate of the dual-core 4.0.3 devices.

I am running 4 threads: a 2D canvas, GL, UI, and a heartbeat that updates the AI every 200 ms. During eglSwapBuffers calls on the GL thread the CPU is doing pretty much nothing. It feels like I'm triggering some sort of power saving or GPU throttling, but I haven't been able to find any information about this.

The time taken in swap buffers alone limits the frame rate to <15FPS if all other operations cost nothing. I am drawing 150 textured triangle strip quads per frame.

If you can point me in the right direction I'd be thrilled, I've been banging my head against a brick wall for the better part of a week.

Popularity: 15.0 Title: Multi-Criteria Optimization with Reinforcement Learning Id: 13343336, Count: 215 Tags: Answers: 1 AcceptedAnswer: 13463887 Created: 2012-11-12 12:00:19.0 Body:

I am working on the power management of a system. The objectives that I am looking to minimize are power consumption and average latency. I have a single objective function having the linearly weighted sum of both the objectives:

C = w·P_avg + (1 − w)·L_avg, where w ∈ (0, 1)

I am using Q-learning to find a Pareto-optimal trade-off curve by varying the weight w and setting different preferences for power consumption and average latency. I do obtain a Pareto-optimal curve. My objective now is to impose a constraint (e.g., a bound on average latency L_avg) and to tune/find the value of w that meets the given criterion. Mine is an online algorithm, so the tuning of w should take place in an online fashion.

Could I be provided any hint or suggestions in this regard?

Popularity: 6.0 Answer #13463887, count #1, created: 2012-11-19 22:58:47.0

There is a multiple-objective Reinforcement Learning branch in the community.

The idea is to [1]:

assign a family of agents to each objective. The solutions obtained by the agents in one family are compared with the solutions obtained by the agents from the rest of the families. A negotiation mechanism is used to find compromise solutions satisfying all the objectives.

There is also a paper that might be of interest to you:

Multi-objective optimization by reinforcement learning for power system dispatch and voltage stability.

I did not find a public url for it though.
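
As a very rough sketch of the online tuning of w that the question asks about (a hypothetical feedback rule for illustration only, not taken from the paper above): nudge w whenever the measured average latency violates the constraint, and let the learner keep adapting to the new reward weighting.

// Hypothetical feedback rule: weight latency more heavily when the measured
// average latency exceeds the target, and favor power savings otherwise.
class WeightTuner {
    private double w = 0.5;              // trade-off weight, kept inside (0, 1)
    private final double step = 0.01;    // tuning step size (assumed value)
    private final double latencyTarget;  // the constraint L_avg must satisfy

    WeightTuner(double latencyTarget) {
        this.latencyTarget = latencyTarget;
    }

    double update(double measuredAvgLatency) {
        if (measuredAvgLatency > latencyTarget) {
            w -= step;   // cost C = w*P_avg + (1-w)*L_avg, so smaller w stresses latency
        } else {
            w += step;   // latency has slack, so put more weight on power
        }
        w = Math.min(0.99, Math.max(0.01, w));
        return w;
    }
}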

Title: GPS doesn't search on my Android Id: 13363396, Count: 216 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-11-13 15:18:48.0 Body:

I'm new to Android programming and I have a problem with my application. My GPS just doesn't search for a location, or anything else. And yes, my GPS is turned on. The manifest contains the permissions: ACCESS_COARSE_LOCATION and ACCESS_FINE_LOCATION.

Could somebody help me?

public class LocationTest extends Activity implements LocationListener { private static final String[] A = { "invalid", "n/a", "fine", "coarse" }; private static final String[] P = { "invalid", "n/a", "low", "medium", "high" }; private static final String[] S = { "out of service", "temporarily unavailable", "available" }; private LocationManager mgr; private TextView output; private String best; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); mgr = (LocationManager) getSystemService(LOCATION_SERVICE); output = (TextView) findViewById(R.id.output); log("Location providers:"); dumpProviders(); Criteria criteria = new Criteria(); best = mgr.getBestProvider(criteria, true); log("\nBest provider is: " + best); log("\nLocations (starting with last known):"); if (best != null) { Location location = mgr.getLastKnownLocation(best); dumpLocation(location); } } @Override protected void onResume() { super.onResume(); // Start updates (doc recommends delay >= 60000 ms) if (best != null) { mgr.requestLocationUpdates(best, 15000, 1, this); } } @Override protected void onPause() { super.onPause(); // Stop updates to save power while app paused mgr.removeUpdates(this); } public void onLocationChanged(Location location) { dumpLocation(location); } public void onProviderDisabled(String provider) { log("\nProvider disabled: " + provider); } public void onProviderEnabled(String provider) { log("\nProvider enabled: " + provider); } public void onStatusChanged(String provider, int status, Bundle extras) { log("\nProvider status changed: " + provider + ", status=" + S[status] + ", extras=" + extras); } /** Write a string to the output window */ private void log(String string) { output.append(string + "\n"); } /** Write information from all location providers */ private void dumpProviders() { List<String> providers = mgr.getAllProviders(); for (String provider : providers) { dumpProvider(provider); } } /** Write information from a single location provider */ private void dumpProvider(String provider) { LocationProvider info = mgr.getProvider(provider); StringBuilder builder = new StringBuilder(); builder.append("LocationProvider[") .append("name=") .append(info.getName()) .append(",enabled=") .append(mgr.isProviderEnabled(provider)) .append(",getAccuracy=") .append(A[info.getAccuracy() + 1]) .append(",getPowerRequirement=") .append(P[info.getPowerRequirement() + 1]) .append(",hasMonetaryCost=") .append(info.hasMonetaryCost()) .append(",requiresCell=") .append(info.requiresCell()) .append(",requiresNetwork=") .append(info.requiresNetwork()) .append(",requiresSatellite=") .append(info.requiresSatellite()) .append(",supportsAltitude=") .append(info.supportsAltitude()) .append(",supportsBearing=") .append(info.supportsBearing()) .append(",supportsSpeed=") .append(info.supportsSpeed()) .append("]"); log(builder.toString()); } /** Describe the given location, which might be null */ private void dumpLocation(Location location) { if (location == null) log("\nLocation[unknown]"); else log("\n" + location.toString()); } } 
Popularity: 7.0 Answer #13363736, count #1, created: 2012-11-13 15:38:12.0

I normally don't do this, but I'm almost out of time. This is the code I use, and it works; just put it in a new project. I didn't clean it up, because I ripped it from another project of mine, but it does work when you create a new project and copy/paste this:

import java.util.Timer; import java.util.TimerTask; import android.location.Location; import android.location.LocationListener; import android.location.LocationManager; import android.os.Bundle; import android.app.Activity; import android.content.Context; import android.widget.Toast; public class MainActivity extends Activity { Timer timer1; LocationManager lm; boolean gps_loc = false; boolean gps_enabled=false; boolean network_enabled=false; double lat; double lng; String gps_location; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); getLocation(this, locationResult); } public LocationResult locationResult = new LocationResult() { public void gotLocation(final Location location) { try { lat = location.getLatitude(); lng = location.getLongitude(); if (lat != 0.0 && lng != 0.0) { String sLat; String sLng; sLat = Double.toString(lat); sLng = Double.toString(lng); gps_location = sLat + " " + sLng; Toast.makeText(getBaseContext(), "We got gps location!", Toast.LENGTH_LONG).show(); System.out.println("We got gps"); System.out.println("lat = "+lat); System.out.println("lng = "+lng); } } catch (Exception e) { } } }; public boolean getLocation(Context context, LocationResult result) { //I use LocationResult callback class to pass location value from MyLocation to user code. locationResult=result; if(lm==null) lm = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE); //exceptions will be thrown if provider is not permitted. try{gps_enabled=lm.isProviderEnabled(LocationManager.GPS_PROVIDER);}catch(Exception ex){} try{network_enabled=lm.isProviderEnabled(LocationManager.NETWORK_PROVIDER);}catch(Exception ex){} //don't start listeners if no provider is enabled if(!gps_enabled && !network_enabled){ return false; } if(gps_enabled){ lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, locationListenerGps); } if(network_enabled) lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 0, 0, locationListenerNetwork); timer1=new Timer(); timer1.schedule(new GetLastLocation(), 35000); return true; } LocationListener locationListenerGps = new LocationListener() { public void onLocationChanged(Location location) { timer1.cancel(); locationResult.gotLocation(location); lm.removeUpdates(this); lm.removeUpdates(locationListenerNetwork); } public void onProviderDisabled(String provider) {} public void onProviderEnabled(String provider) {} public void onStatusChanged(String provider, int status, Bundle extras) {} }; LocationListener locationListenerNetwork = new LocationListener() { public void onLocationChanged(Location location) { timer1.cancel(); locationResult.gotLocation(location); lm.removeUpdates(this); lm.removeUpdates(locationListenerGps); } public void onProviderDisabled(String provider) {} public void onProviderEnabled(String provider) {} public void onStatusChanged(String provider, int status, Bundle extras) {} }; class GetLastLocation extends TimerTask { @Override public void run() { lm.removeUpdates(locationListenerGps); lm.removeUpdates(locationListenerNetwork); Location net_loc=null, gps_loc=null; if(gps_enabled) gps_loc=lm.getLastKnownLocation(LocationManager.GPS_PROVIDER); if(network_enabled) net_loc=lm.getLastKnownLocation(LocationManager.NETWORK_PROVIDER); //if there are both values use the latest one if(gps_loc!=null && net_loc!=null){ if(gps_loc.getTime()>net_loc.getTime()) locationResult.gotLocation(gps_loc); else locationResult.gotLocation(net_loc); return; } if(gps_loc!=null){ 
locationResult.gotLocation(gps_loc); return; } if(net_loc!=null){ locationResult.gotLocation(net_loc); return; } locationResult.gotLocation(null); } } public static abstract class LocationResult{ public abstract void gotLocation(Location location); } } 

Also add this in manifest:

 <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" /> <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> <uses-permission android:name="android.permission.INTERNET" /> 

No time to explain now, maybe tomorrow if you still need it.

It prints the latitude and longitude in your logcat.

Title: Best practice for sending live location data to server from iPhone Id: 13450388, Count: 217 Tags: Answers: 1 AcceptedAnswer: 13485685 Created: 2012-11-19 08:55:29.0 Body:

I want to develop an application for the iPhone that tracks the user's current position in "realtime" and sends this data via a web service to a SQL database on a web server, so I will have a consistent record of where the registered users are currently positioned.

This is raising some questions on how to do this in an efficient way.

1) Shall I really update the GPS data on the server in realtime? Isn't this too "heavy" in terms of energy consumption on the iPhone? Maybe once a minute would do as well? What are the best practices here if I want to be as accurate as possible?

2) What if there are maybe 1000 users at once... is it still efficient to update a database with the current GPS data simultaneously?

Thank you in advance Sebastian

Popularity: 2.0 Answer #13485685, count #1, created: 2012-11-21 03:01:30.0

Sending GPS data at regular time intervals will put extra load on the system unnecessarily and it is not optimal.

A better way to track the user is to send data only when the user has moved outside a circle of radius R from the last reported location.

That way a user moving in a car at 60 mph on a highway and one walking at 0.1 mph in a park will both be tracked accurately.

Users are known to stop moving when they sit down to eat, go to the bathroom or sleep.
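
A minimal sketch of that idea (in platform-neutral Java rather than Objective-C; the radius value is only an example): a fix is uploaded only when it lies more than R meters from the last fix that was actually reported.

// Report a location only when it is more than R meters from the last reported one.
class MovementReporter {
    private final double radiusMeters;
    private Double lastLat = null, lastLon = null;

    MovementReporter(double radiusMeters) {
        this.radiusMeters = radiusMeters;
    }

    boolean shouldReport(double lat, double lon) {
        if (lastLat == null || distanceMeters(lastLat, lastLon, lat, lon) > radiusMeters) {
            lastLat = lat;
            lastLon = lon;
            return true;      // caller uploads this fix to the server
        }
        return false;         // still inside the circle; send nothing
    }

    // Haversine distance between two coordinates, in meters.
    private static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371000.0; // mean Earth radius in meters
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }
}

Calling new MovementReporter(100).shouldReport(lat, lon) on each fix fires rarely at walking speed and often at driving speed, which is exactly the behaviour described above.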

Title: Why should I calibrate the oscillator in AVR programming Id: 13538080, Count: 218 Tags: Answers: 3 AcceptedAnswer: 13542704 Created: 2012-11-24 03:52:29.0 Body:

I'm new to AVR programming. I found some sample code on the web for simple USART communication with a PC. I have a small doubt about it.

The main function starts like this:

void main(){
    OSCCAL_calibration();
    USARTinit();   // start communicating with PC
}

I can't understand the reason for calibrating the oscillator using the OSCCAL_calibration() function.


FUNCTIONS

OSCCAL_calibration() function

void OSCCAL_calibration(void){ unsigned char calibrate = 0; int temp; unsigned char tempL; CLKPR = (1<<CLKPCE); CLKPR = (1<<CLKPS1) | (1<<CLKPS0); TIMSK2 = 0; ASSR = (1<<AS2); OCR2A = 200; TIMSK0 = 0; TCCR1B = (1<<CS10); TCCR2A = (1<<CS20); while((ASSR & 0x01) | (ASSR & 0x04)); for(int i = 0; i < 10; i++) _delay_loop_2(30000); while(!calibrate){ cli(); TIFR1 = 0xFF; TIFR2 = 0xFF; TCNT1H = 0; TCNT1L = 0; TCNT2 = 0; while ( ! (TIFR2 && (1<<OCF2A)) ); TCCR1B = 0; // stop timer1 sei(); if ( (TIFR1 && (1<<TOV1)) ){ temp = 0xFFFF; }else{ tempL = TCNT1L; temp = TCNT1H; temp = (temp << 8); temp += tempL; } if (temp > 6250){ OSCCAL--; } else if (temp < 6120){ OSCCAL++; }else calibrate = 1; TCCR1B = (1<<CS10); } } 

USARTinit() function

void USARTinit(){ CLKPR = (1<<CLKPCE); CLKPR = (1<<CLKPS1); UBRR0H = 0; UBRR0L = 12; UCSR0A = (1<<U2X0); UCSR0B = (1<<RXEN0)|(1<<TXEN0)|(0<<RXCIE0)|(0<<UDRIE0); UCSR0C = (0<<UMSEL00)|(0<<UPM00)|(0<<USBS0)|(3<<UCSZ00)|(0<<UCPOL0); } 

I'm using Atmel Studio 6 and tested this with an ATmega2560 (actually, with my Arduino Mega). After a few changes, I could make it work. But it still works without the calibration function.

I'll itemize my questions as below.

  1. What does calibrating the oscillator actually do?
  2. Is it a must?
  3. Is there a similar function in PIC microcontrollers? (I have some experience with PIC programming but never came across anything like this.)

I also have a small doubt:

Why do you set a clock prescaler in the USARTinit() function before setting the baud rate? Can't you set the baud rate without a prescaler (i.e., prescaler = 1)?

Is it to save power or something? I tried with prescaler = 1 and it doesn't seem to work (still trying). Yes, I've calculated the baud rate properly (using the equation given in the datasheet).

Popularity: 9.0 Answer #13538158, count #1, created: 2012-11-24 04:06:29.0

I know nothing about this particular hardware, but a quick Google search turned up this datasheet

To quote:

The majority of the present AVR microcontrollers offer the possibility to run from an internal RC oscillator. The internal RC oscillator frequency can in most AVRs be calibrated to within +/-1% of the frequency specified in the datasheet for the device. This feature offers great flexibility and significant cost savings compared to using an external oscillator. The calibration performed in the Atmel factory is made at a fixed operating voltage and temperature (25°C, typically 5V). As the frequency of the internal RC oscillator is affected by both operating voltage and temperature, it may be desired to perform a secondary calibration, which matches the specific application environment. This secondary calibration can be performed to gain higher accuracy than the standard calibration offers, to match a specific operating voltage or temperature, or even to tune the oscillator to a different frequency.

Answer #13542704, count #2, created: 2012-11-24 15:39:23.0

If you are doing any timing-related communication outside the microcontroller (serial, pushing SPI to its limits, etc.) or keeping time or whatever, then you need a more accurate clock.

It is not really about power; marginally, perhaps: if the clock is a little slow then you use a bit more power, and if it is a little fast then you save a little power.

Many, but not all, microcontrollers offer an internal R/C oscillator so that you don't need an external oscillator (extra components, extra cost). This is not one family vs. another (AVR, MSP430, PIC, etc.); some chips within a family have internal oscillators and some don't. The PICs I used back in the day required an external oscillator; I don't know that family in such detail today. How the calibration happens also varies from family to family.

Answer #13829263, count #3, created: 2012-12-11 22:04:11.0

The need for calibration depends on your actual hardware:

  1. If your microcontroller uses a crystal to generate its clock you don't have to calibrate anything; just choose the right divider, depending on your crystal's oscillation frequency.

  2. If you only use an RC oscillator, then since it can have a much wider frequency tolerance you might want to calibrate it, especially when using higher baud rates (like 19200 or more).

There is a prescaler that you must set depending on your oscillator frequency; see the datasheet for more details.
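
To make the prescaler/baud-rate relationship in the posted USARTinit() concrete, here is the arithmetic, assuming the internal RC oscillator ends up near its nominal 8 MHz after calibration (the double-speed formula BAUD = f_CPU / (8 × (UBRR + 1)) is the standard AVR datasheet equation, since U2X0 is set):

f_CPU = 8 MHz / 4 = 2 MHz            // CLKPR = (1<<CLKPS1) selects a divide-by-4 prescaler
BAUD  = f_CPU / (8 * (UBRR + 1))
      = 2,000,000 / (8 * (12 + 1))
      = 19,230  ≈ 19,200 baud        // about 0.16% error

So UBRR0L = 12 only gives 19,200 baud at that particular prescaled clock; if you change the prescaler to 1, you also have to recompute UBRR for the new f_CPU, otherwise the UART ends up at a different baud rate than the PC expects.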

Title: image/video processing options Id: 13558510, Count: 219 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-11-26 04:06:08.0 Body:

I have a small 12 volt board camera that is placed inside a bee hive. It is lit with infrared LEDs (bees can't see infrared). It sends a simple NTSC signal along a wire to a little TV monitor I have. This allows me to see the inside of the hive, without disturbing the bees.

The queen has a dot on her back such that it is very obvious when she's in the frame.

I would like to have something processing the signal such that it registers when the queen is in the frame. This doesn't have to be a very accurate count. Instead of processing the video, it would be just as fine to take an image every 10 seconds and see if there is a certain amount of brightness (indicating that the queen is in frame).

This is useful since it helps bee keepers know if the queen is alive (if she didn't appear for a number of days it could mean something is wrong).

I would love to hear suggestions for inexpensive ways of processing this video, especially with low power consumption. Raspberry pi? Arduino?

Camera example: here

Sample video (no queen in frame): here

Popularity: 17.0 Answer #13558649, count #1, created: 2012-11-26 04:26:59.0

First off, great project. I wish I was working on something this fun.

The obvious solution here is OpenCV, which will run on both Raspberry Pi (Linux) and the Android platform but not on an Arduino as far as I know. (Of the two, I'd go with Raspberry Pi to start with, since it will be less particular in how you do the programming.)

As you describe it, you may be able to get away with less robust image processing tools, but these problems are rarely as easy as they seem at first. For example, it seems to me that the brightest spot in the video is (what I guess to be) the illuminating diode reflecting off the glass. But if it's not this it will be something else, so don't start the project with your hands tied behind your back. And if this can't be done with OpenCV, it probably can't be done at all.

Raspberry Pi computers are about $50, OpenCV is free, so I doubt you'll get much cheaper than this.

In case you haven't done something like this before, I'd recommend not programming OpenCV directly in C++ for something that's exploratory like this, and not very demanding either. Instead, use, for example, the Python bindings so you can explore the images interactively.

You also asked about Arduino, and I don't think this is such a good choice for this type of project. First, you'd need extra hardware, like a video shield (e.g., http://nootropicdesign.com/ve/), adding to the expense. Second, there aren't good image processing libraries for the Arduino, so you'd be doing everything from scratch. Third, generally speaking, debugging a microcontroller program is more difficult.
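
Whichever platform you pick, the simple version of the check described in the question is just an average-brightness (or saturated-pixel-count) threshold on each captured frame. Below is a minimal sketch in plain Java using only the standard library (javax.imageio), independent of OpenCV; the threshold of 90 is a made-up value that would need tuning against real frames.

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class QueenCheck {
    // Returns true if the frame's average brightness exceeds the threshold,
    // suggesting the bright queen mark (or a strong reflection) is in view.
    static boolean frameLooksBright(File frame, double threshold) throws IOException {
        BufferedImage img = ImageIO.read(frame);
        long sum = 0;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                sum += (r + g + b) / 3;   // crude per-pixel brightness
            }
        }
        double avg = (double) sum / ((long) img.getWidth() * img.getHeight());
        return avg > threshold;
    }

    public static void main(String[] args) throws IOException {
        // e.g. grab a frame every 10 seconds and run it through this check
        System.out.println(frameLooksBright(new File("frame.jpg"), 90.0));
    }
}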

Answer #13565157, count #2, created: 2012-11-26 12:47:13.0

I don't have a good answer about image processing, but I know how to make it much easier. When you mark the queen, throw some retro-reflecting beads on the paint to get a much higher light return.

I think you can simply mix the beads in with your paint -- use 1 part beads to 3 parts paint by volume. That said, I think you'll get better results if you pour beads onto the surface of the wet paint when marking the queen. I'd pour a lot of beads on to ensure some stick (you can do it over a bowl or bag to catch all the extra beads).

I suggest doing some tests before marking the queen -- I've never applied beads before, but I've worked with retroreflective tape and paint, and it will give you a significantly higher light return. How much higher strongly depends (i.e., I don't have a number), but I'm guessing at least 2-5 times more light -- enough that your camera will saturate when it sees the queen with the current exposure settings. If you set a trigger on saturation of some threshold number of pixels (making sure few pixels saturate normally), this should give you a very good signal-to-noise ratio that will vastly simplify the image processing.

[EDIT] I did a little more digging, and there are some important parameters to consider. First, at an index of 1.5 (the beads I'd linked before) the beads won't focus light on the back surface and retro-reflect, they'll just act like lenses. They'll probably sparkle and reflect a bit, but you might be better off just adding glitter to the paint.

You can get VERY highly reflective tape that has the right kind of beads AND has a reflective coating on the back of the beads to reflect vastly more light! You'll have to figure out how to glue a bit of tape to a queen to use it, but it might be the best reflection you can get. http://www.amazon.com/3M-198-Scotch-Reflective-Silver/dp/B00004Z49Q

You can also try the beads I recommended earlier with an index of refraction of 1.5. I'd be sure to test it on paper against glitter to make sure you're not wasting your time. http://www.colesafety.com/Reflective-Powder-Glass-Beads-GSB10Powder.htm

I'm having trouble finding a source for 1lb or less glass beads with 1.9+ refractive index. I'll do more searching and I'll let you know if I find a decent source of small quantities.

Title: Android Power modeling like power tutor Id: 13574168, Count: 220 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-11-26 22:22:10.0 Body:

I've been trying to follow what the people who made PowerTutor did. From my understanding, the approach they used is hardware specific, not a software one. They modeled power consumption based on CPU utilization, WiFi, GPS, audio and 3G, according to this paper. Is someone familiar with this approach? I wanted to know whether I can model this for any Android device or whether it is a hardware-dependent approach, and about the power modeling formula that they used in the paper. What software parameters are they trying to extract for CPU utilization, WiFi, GPS, audio and 3G to make the formula or the approach work? It's not very clear from reading the paper. Any help in this respect would be highly appreciated.

Popularity: 2.0 Answer #16386097, count #1, created: 2013-05-05 15:32:34.0

The PowerTutor approach is largely hardware independent but needs to be specifically tailored for each new device. It relies on the availability of hardware system parameters largely through the /proc and /sys directories. For example, CPU utilization is read through /proc/stat and /proc/cpuinfo, GPS data from /data/misc/gps.status, LCD data from /sys/devices/virtual/leds/lcd-backlight/brightness, etc. These system parameters are plugged into the model equation to get an estimate of power consumption. The actual equation is found on page four of their paper and system parameter locations can be seen in the source code (specifically under tree/master/src/edu/umich/PowerTutor/components).

Assuming the same system parameters are available, to make PowerTutor work for a new device you would have to determine the coefficients for the specific device and update the application with the new device and its coefficients. The problem is that the PowerTutor model uses coefficients determined by the associated PowerBooter tool, which is not publicly available. There's some description of how PowerBooter obtains the coefficients, but you would have to re-implement the tests themselves.
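
To make the "system parameters plugged into a model equation" idea concrete, here is a minimal sketch of the kind of linear model PowerTutor uses for the CPU component, with made-up coefficients (the real ones are device specific and come from PowerBooter, as noted above). Reading /proc/stat is a real Linux/Android interface; everything else is illustrative.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class PowerModelSketch {
    static final double CPU_COEFF = 600.0;  // mW at full CPU utilization (made up)
    static final double IDLE_POWER = 80.0;  // mW baseline when idle (made up)

    // Read the aggregate "cpu" line from /proc/stat: cpu user nice system idle ...
    static long[] readCpuTimes() throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/stat"))) {
            String[] f = r.readLine().trim().split("\\s+");
            long idle = Long.parseLong(f[4]);
            long total = 0;
            for (int i = 1; i < f.length; i++) total += Long.parseLong(f[i]);
            return new long[]{idle, total};
        }
    }

    public static void main(String[] args) throws Exception {
        long[] a = readCpuTimes();
        Thread.sleep(1000);
        long[] b = readCpuTimes();
        double util = 1.0 - (double) (b[0] - a[0]) / (b[1] - a[1]);
        // Linear model: estimated power = baseline + coefficient * utilization.
        double estimatedMilliwatts = IDLE_POWER + CPU_COEFF * util;
        System.out.printf("CPU util %.2f -> ~%.0f mW (illustrative only)%n",
                util, estimatedMilliwatts);
    }
}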

Title: Reduce power conssumption while wireless should be used Id: 13583357, Count: 221 Tags: Answers: 2 AcceptedAnswer: null Created: 2012-11-27 11:35:31.0 Body:

I am working on my MSc dissertation right now. The dissertation has one related app, which is to be developed for Android. One of the objectives of the research is to use wireless to sync data among phones. At the same time, I must consider power consumption, which means reducing power consumption while using wireless. I am thinking about 3 possible wireless technologies for this app:

  • wifi direct (p2p wifi) - fast, reliable, but not supported on many phones (even ones with Android >=4.0) (you would also probably need to two devices to test/develop this)
  • regular wifi - fast, reliable, but requires at least an access point (i.e. some infrastructure, or another mobile device acting in part as an access point but does NOT Need internet access), code partially compatible with wifi direct
  • Bluetooth - slow, unreliable, supported on most devices, requires no infrastructure (also probably need two devices to develop), code least

Regarding these wireless technologies, I have a few questions, listed below (please consider that I need acceptable evidence or documentation to support my ideas):

  1. Which of these technologies consumes less power?
  2. How can I reduce power consumption on an Android phone?
  3. Are there any practical strategies available in this case, aside from coding? For example, something in the app's settings, or something else?
Popularity: 2.0 Answer #13583677, count #1, created: 2012-11-27 11:52:12.0

I'd recommend investigating Bluetooth Low Energy, or Bluetooth Smart as it is also called. There is also ANT+. Both of these are very conservative regarding power usage.

Answer #13583791, count #2, created: 2012-11-27 11:59:19.0

I am working on my MSc dissertation right now.

My advice would be not to get bogged down in the power optimization issue. Your primary goal should be to get your thesis done ... and do any necessary coding that is directly required to achieve that. From reading your question, I get the impression that power usage minimization is at best a peripheral issue (no pun intended). Assuming my impression is correct, you should not be wasting your limited time on this.

Now if your primary goal was to produce product quality software, my advice would be different ...

Title: Bluetooth Low energy best energy consumation strategy Id: 13584367, Count: 222 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-11-27 12:35:11.0 Body:

I am going to develop a small device with a TI CC2540 inside. It will communicate with an iPhone 4S. The device is designed to receive commands from the iPhone and perform specific actions. Most of the time the device is idle (99% of the time), but at any time (in case of an iPhone request) it must be able to receive commands from the iPhone: 1 command per 20 seconds, with a 40-byte maximum size.

The device should be standalone and work as long as possible.

I see two options here:

  1. Device - Central, iPhone - Peripheral. The device implements the Central GAP role and always scans for advertising packets from the iPhone. The device then initiates the connection, and the iPhone begins to send commands.

  2. Device - Peripheral, iPhone - Central. The device always sends advertising packets.

What is the best low energy consumption strategy? How long will it work? What is the best idle strategy for BLE? Can I implement option 1 with the new iOS 6 BLE features?

Popularity: 6.0 Answer #13591496, count #1, created: 2012-11-27 19:21:23.0

Make your device the peripheral. The power consumption will heavily depend on your latency requirements. How much delay between user action on phone and device reaction is acceptable? That requirement drives the advertising duty cycle you'll need.
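
To see why the advertising interval dominates, here is a back-of-the-envelope duty-cycle estimate (all current and timing figures are purely illustrative, not CC2540 datasheet values):

I_avg ≈ (t_adv × I_adv + (T − t_adv) × I_sleep) / T

With assumed values t_adv = 3 ms per advertising event at I_adv = 15 mA,
I_sleep = 1 µA, and advertising interval T = 1 s:

I_avg ≈ (0.003 s × 15 mA + 0.997 s × 0.001 mA) / 1 s ≈ 0.046 mA

Halving the interval roughly doubles the average current, so the longest advertising interval your latency requirement tolerates is the one to pick; a 99%-idle device spends almost all of its energy on advertising rather than on the occasional 40-byte command.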

Title: How to monitor the C-states of an Intel (Core 2 Duo) processor? Id: 13650503, Count: 223 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-11-30 17:53:12.0 Body:

I am studying the effects of user usage on power consumption. How do I measure C-state occupancy on an Intel Core 2 Duo processor (Windows 7)? Is there software which can do this on Windows?

Popularity: 5.0 Answer #13657547, count #1, created: 2012-12-01 07:53:24.0

Intel does provide a number of tools and guidelines for power, and power-checker in particular lists "core based processors" as supported, and C-state occupancy as one of the features.

Title: Can a low-end ARM (Coretex M0+) run enough of a stack to use a USB wifi dongle? Id: 13710725, Count: 224 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-12-04 19:47:33.0 Body:

I currently use full-blown WiFi modules (like the Roving Networks RN-174 or the LS Research TiWi module (http://www.lsr.com/wireless-products/tiwi-sl)) to interface with lower-powered microcontrollers.

However, the low-end ARMs (like the Cortex M0+) are getting very power efficient, and a benefit would be if I could use more commercial wifi dongles (like http://www.trendnet.com/products/proddetail.asp?prod=195_TEW-648UBM) and possibly benefit from additional power savings (the wifi modules I use typically have an ARM processor to run the stack and other parts of the protocol).

Typically, these require a processor running Linux with a full driver implementation; I was wondering if any drivers/stacks exist for the lower-end ARMs to drive a USB WiFi dongle?

Thanks!

Popularity: 10.0 Answer #13748986, count #1, created: 2012-12-06 17:18:31.0

I'm not aware of any M0 or M0+ chips with USB host, but it is available on some M3s, for example NXP's LPC17xx series. An LPC1768 is used in the mbed module, and there are a few USB host implementations available for it, including a library for a 3G (not WiFi) Vodafone dongle. There is also a generic USB library for NXP chips - nxpUSBlib.

Depending on the dongle, sometimes it might offer not only USB interface, but also plain UART. In a few cases it's possible to access UART serial interface after minor modification of the dongle. If you have such an interface, you don't need USB at all, and UART is available on pretty much any ARM, no matter how low-end.

Please note that getting the USB or UART connection is only half of the job - you still need to discover how to configure and connect your specific dongle. If it uses a standard protocol like USB CDC/ACM and AT commands, that's good, but it's not guaranteed. Sometimes you'll have to reverse engineer proprietary drivers to discover the magic values. Some modules require the firmware to be sent to them on boot, so you'll have to store the firmware image somewhere. Though if it has a Linux driver, there's a pretty good chance it can be made to work.

Title: Timer coalescing in .net Id: 13747058, Count: 225 Tags: <.net> Answers: 1 AcceptedAnswer: null Created: 2012-12-06 15:35:18.0 Body:

Windows 7 introduced timer coalescing, improving energy efficiency. What managed API exposes timer tolerances? It seems the only way to take advantage of this feature is to p/invoke SetWaitableTimerEx.

Popularity: 7.0 Answer #13748806, count #1, created: 2012-12-06 17:07:28.0

There's no managed API that I'm aware of, but that said, it's one of the not-so-hairy P/Invokes to do - here's a quick-and-dirty class I just threw together and simple usage:

(I should note I've only very basically tested this...probably needs some tweaking) [EDIT: ok, got a chance to tune it a bit over lunch, this should work, more or less]

void Main() { var waitFor = 6000; var tickAt = 2000; var tickEvery = 1000; var sw = Stopwatch.StartNew(); var running = true; var apcTask = Task.Factory.StartNew(() => { try { Console.WriteLine("APC:Creating timer..."); ApcTimer timer = new ApcTimer(@"Global\WillThisWork", tickAt, tickEvery, true); timer.Tick += (o,e) => { Console.WriteLine("APC:Hey, it worked! - delta:{0}", sw.Elapsed); }; Console.WriteLine("APC:Starting timer..."); timer.Start(); while(running); Console.WriteLine("APC:Stopping timer..."); timer.Dispose(); Console.WriteLine("APC:Finishing - delta:{0}", sw.Elapsed); } catch(Exception ex) { Console.WriteLine(ex); } }); Thread.Sleep(waitFor); running = false; Task.WaitAll(apcTask); } public class ApcTimer : IDisposable { public delegate void TimerApcCallback(object sender, EventArgs args); public event TimerApcCallback Tick; private const long _MILLISECOND = 10000; private const long _SECOND = 10000000; private IntPtr _hTimer = IntPtr.Zero; private long _delayInMs; private int _period; private bool _resumeFromSleep; private Task _alerter; private CancellationTokenSource _cancelSource; private bool _timerRunning; public ApcTimer( string timerName, long delayInMs, int period, bool resumeFromSleep) { _hTimer = CreateWaitableTimer(IntPtr.Zero, false,timerName); if(_hTimer == IntPtr.Zero) { // This'll grab the last win32 error nicely throw new System.ComponentModel.Win32Exception(); } _delayInMs = delayInMs; _period = period; _resumeFromSleep = resumeFromSleep; } public void Start() { var sw = Stopwatch.StartNew(); Debug.WriteLine("ApcTimer::Starting timer..."); StopAlerter(); SetTimer(_delayInMs); _cancelSource = new CancellationTokenSource(); _alerter = Task.Factory.StartNew( ()=> { _timerRunning = true; while(_timerRunning) { var res = WaitForSingleObject(_hTimer, -1); if(res == WaitForResult.WAIT_OBJECT_0) { if(Tick != null) { Tick.Invoke(this, new EventArgs()); } SetTimer(_period); } } }, _cancelSource.Token); Debug.WriteLine("ApcTimer::Started!"); } public void Dispose() { Debug.WriteLine("ApcTimer::Stopping timer..."); StopAlerter(); CancelPendingTimer(); if(_hTimer != IntPtr.Zero) { var closeSucc = CloseHandle(_hTimer); if(!closeSucc) { throw new System.ComponentModel.Win32Exception(); } _hTimer = IntPtr.Zero; } Debug.WriteLine("ApcTimer::Stopped!"); } private void SetTimer(long waitMs) { // timer delay is normally in 100 ns increments var delayInBlocks = new LARGE_INTEGER() { QuadPart = (waitMs * _MILLISECOND * -1)}; bool setSucc = false; setSucc = SetWaitableTimer(_hTimer, ref delayInBlocks, 0, IntPtr.Zero, IntPtr.Zero, _resumeFromSleep); if(!setSucc) { // This'll grab the last win32 error nicely throw new System.ComponentModel.Win32Exception(); } } private void CancelPendingTimer() { if(_hTimer != IntPtr.Zero) { Debug.WriteLine("ApcTimer::Cancelling pending timer..."); CancelWaitableTimer(_hTimer); } } private void StopAlerter() { _timerRunning = false; if(_alerter != null) { Debug.WriteLine("ApcTimer::Shutting down alerter..."); _cancelSource.Cancel(); Task.WaitAll(_alerter); } } #region secret pinvoke goodness [DllImport("Kernel32.dll", SetLastError=true)] static extern WaitForResult WaitForSingleObject([In] IntPtr hHandle, int dwMilliseconds); [DllImport("Kernel32.dll", SetLastError=true)] [return:MarshalAs(UnmanagedType.Bool)] static extern bool CancelWaitableTimer([In] IntPtr hTimer); [DllImport("Kernel32.dll", SetLastError=true)] [return:MarshalAs(UnmanagedType.Bool)] static extern bool SetWaitableTimer( [In] IntPtr hTimer, [In] ref LARGE_INTEGER dueTime, [In] 
int period, [In] IntPtr completionRoutine, [In] IntPtr argToCallback, [In] bool resume); [DllImport("Kernel32.dll", SetLastError=true)] static extern IntPtr CreateWaitableTimer( IntPtr securityAttributes, bool manualReset, string timerName); [DllImport("Kernel32.dll", SetLastError=true)] static extern IntPtr CreateWaitableTimerEx( IntPtr securityAttributes, string timerName, TimerCreateFlags flags, TimerAccessFlags desiredAccess); [DllImport("Kernel32.dll", SetLastError=true)] [return:MarshalAs(UnmanagedType.Bool)] static extern bool CloseHandle(IntPtr handle); private const int INFINITE_TIMEOUT = 1; [Flags] private enum WaitForResult : int { WAIT_ABANDONED = 0x00000080, WAIT_OBJECT_0 = 0, WAIT_TIMEOUT = 0x00000102, WAIT_FAILED = -1 } [Flags] private enum TimerAccessFlags : int { TIMER_ALL_ACCESS = 0x1F0003, TIMER_MODIFY_STATE = 0x0002, TIMER_QUERY_STATE = 0x0001 } [Flags] private enum TimerCreateFlags : int { CREATE_WAITABLE_TIMER_MANUAL_RESET = 0x00000001 } [StructLayout(LayoutKind.Sequential)] public struct LargeIntegerSplitPart { public uint LowPart; public int HighPart; } [StructLayout(LayoutKind.Explicit)] public struct LARGE_INTEGER { [FieldOffset(0)] public LargeIntegerSplitPart u; [FieldOffset(0)] public long QuadPart; } #endregion } 
Title: IOS 6.0.1 framerate drop when models are rendered close to screen Id: 13793172, Count: 226 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-12-09 23:51:37.0 Body:

I have recently updated my iPad 2 to iOS 6.0.1 from 5.x and upgraded Xcode to 4.5.2. Now when I build and run my game it runs smoothly at 60 fps. However, when a 3D model gets close to the screen the framerate now drops to 40 fps and stays there, even when the model moves away from the screen. Is there some way to overcome this, or is it a bug in iOS 6.0.1 or Xcode 4.5.2? Could this be some kind of power saving feature of iOS 6.0.1? Any help would be appreciated.

Popularity: 7.0 Answer #13868732, count #1, created: 2012-12-13 21:25:31.0

As far as I can tell there is no bug in iOS 6.0.1 or Xcode 4.5.2 that can cause this, but I'm just a working iOS developer, not the engineer in charge of quality control on these projects.

With the minimal information available in this question, my first guess is that the model is being drawn with a mipmapped texture. When it is far from the camera smaller versions of the texture are used, but when it moves close to the camera larger versions of the texture are used. A possible check you can do is to turn off the textures for this model and check performance. That said, I'm making the assumption that you are using textures, so this suggestion may be way off base.

The best way to diagnose this problem is to use the latest OpenGL debugging and profiling tools available in Xcode 4.5. I suggest watching the session videos from last summer's WWDC on OpenGL for a start on how to use these tools.

Edit to comment on your posted shader:

Shaders can process fragments in parallel, but will reduce the number of parallel processes if the conditionals in a shader evaluate to different values across fragments. To quote page 156 of the OpenGL ES 2.0 Programming Guide:

GPUs typically execute a vertex shader with multiple vertices or a fragment shader with multiple fragments in parallel. The number of vertices or fragments that are executed in parallel will depend on the GPU's performance target. The bool_expression in the if and if-else conditional statements can have different values for the vertices or fragments being executed in parallel. This can impact performance as the number of vertices or fragments executed in parallel by the GPU is reduced. We recommend that for best performance, conditional statements should be used with bool_expression values that are the same for vertices or fragments being executed in parallel. This will be the case if a uniform expression is used.

Maybe the GPU happily processes lots of fragments in parallel because your boolean values are the same for each fragment. Then the GPU gets fragments where the boolean no longer evaluates the same, and it no longer processes fragments in parallel. Knowing that these can evaluate to different values, the GPU continues to process the fragments with less parallelism even after the models move farther from the camera.

Your series of if statements clamps the values to specific states. Maybe writing an equation that gives similar results will fix the problem. Try something like:

df = (floor(mod((((df + 0.2) * 10.0) / 3.0), 4.0)) * 3.0) / 10.0; 

I didn't actually compile a shader with this code, nor does it exactly match your ranges, so you may need to make some adjustments. But this should keep the parallelism up.

Title: User's location and battery's consumption Id: 13847229, Count: 227 Tags: Answers: 1 AcceptedAnswer: 13847602 Created: 2012-12-12 19:34:37.0 Body:

I am working on an iOS app that focuses on pedestrians in a block of buildings (a campus). I have developed a good enough (I believe) user location update, but I would like anyone more expert than me to give me some tips and advice on location accuracy, battery issues, etc. In addition, is it possible to stop the location updates and start them again after n seconds? That is a thought I had in order to save energy. As it is at the moment, the app detects the current location, but the blue dot (the user) keeps moving around and I don't like that. Is there something that can be done?

Below is my didUpdateToLocation method:

The app reads the buildings' info from a file (stored on the device).

 - (void)viewDidLoad{ [super viewDidLoad]; if ([self checkForInternet]) { _locationManager = [[CLLocationManager alloc] init]; _locationManager.delegate = self; _locationManager.distanceFilter = 10.0f; _locationManager.desiredAccuracy = 20.0f; [_locationManager startUpdatingLocation]; } } - (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation { if (newLocation.horizontalAccuracy > manager.desiredAccuracy) return; [self.mapView removeAnnotations:listOfAnn]; [listOfAnn removeAllObjects]; if ([manager locationServicesEnabled]) { for (NSString* row in rows){ NSArray* cells = [row componentsSeparatedByString:@"\t"]; CLLocationCoordinate2D newCoord; newCoord.latitude = [[cells objectAtIndex:5]floatValue]; newCoord.longitude = [[cells objectAtIndex:6]floatValue]; CLLocation *locB = [[CLLocation alloc] initWithLatitude:newLocation.coordinate.latitude longitude:newLocation.coordinate.longitude]; CLLocation *centerLoc = [[CLLocation alloc] initWithLatitude:CAMPUS_LATITUDE longitude:CAMPUS_LONGITUDE]; CLLocationDistance borders = [locB distanceFromLocation:centerLoc]; if ((int)round(borders) > 500) { BuildingViewController *newBuilding = [[BuildingViewController alloc] initBuildingWithName:[cells objectAtIndex:2] coordinates:newCoord shortDescription:[cells objectAtIndex:4] image:(NSString*)[cells objectAtIndex:3] inDistance: borders]; if ([[prefs stringForKey:newBuilding.title] isEqualToString:[prefs stringForKey:@"userName"]]) { [newBuilding retrieveID]; [listOfAnn addObject:newBuilding]; } } else{ CLLocation *locA = [[CLLocation alloc] initWithLatitude:newCoord.latitude longitude:newCoord.longitude]; CLLocationDistance distance = [locA distanceFromLocation:locB]; BuildingViewController *newBuilding = [[BuildingViewController alloc] initBuildingWithName:[cells objectAtIndex:2] coordinates:newCoord shortDescription:[cells objectAtIndex:4] image:(NSString*)[cells objectAtIndex:3] inDistance: distance]; if ((int)round(distance) < 100 || [[prefs stringForKey:newBuilding.title] isEqualToString:[prefs stringForKey:@"userName"]]){ [newBuilding retrieveID]; [listOfAnn addObject:newBuilding]; } } } [self.mapView addAnnotations:listOfAnn]; } } 
Popularity: 5.0 Answer #13847602, count #1, created: 2012-12-12 19:59:40.0

Hello Panagiotis.

You should read the official documentation about location services

It is an excellent guide and should cover everything. I will do a quick recap for you and explain the pros and cons of each available method, as I have worked extensively with Core Location services for our app:

There are 3 different ways to get the users location in iOS.

  1. Normal GPS location monitoring (pros: very accurate and updates quickly; cons: drains the battery way too fast)
  2. Significant location changes (pros: very battery efficient, can start the app from the background, even when terminated, if a region is entered, without requiring a background location monitoring task and permission; cons: very inaccurate both in determining the location and in receiving updates of location changes)
  3. Region monitoring (pros: very battery efficient, can start the app from the background, even when terminated, if a region is entered, without requiring a background location monitoring task and permission; cons: inaccurate in getting notifications about the regions entered, and a maximum of 20 regions can be monitored per app)

So depending on the accuracy you want, you have to select the service. Of course, if you are going the GPS way, as you do in your code, you should always turn off location updating after you get an update and only request the position again after a certain period of time; otherwise your app will drain the user's battery.

And of course you can restart the user's location updates at any time. If you require location updates in many view controllers in your app, I suggest you create a singleton class for the location manager!

As for GPS monitoring, in your CoreLocation delegate method, didUpdateToLocation, do the following:

- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation { NSDate* eventDate = newLocation.timestamp; NSTimeInterval howRecent = [eventDate timeIntervalSinceNow]; //prevent Location caching... (only accepts cached data 1 minute old) if( abs(howRecent) < 60.0) { [self.locationManager stopUpdatingLocation]; //capture the location... (this is your code) //update location after 90 seconds again [self performSelector:@selector(startUpdatingLocation) withObject:nil afterDelay:90]; } } 

and to update the location again:

- (void)startUpdatingLocation { if([CLLocationManager authorizationStatus]==kCLAuthorizationStatusAuthorized || [CLLocationManager authorizationStatus]==kCLAuthorizationStatusNotDetermined) { self.locationManager.desiredAccuracy = kCLLocationAccuracyBest; [locationManager startUpdatingLocation]; } } 

and to be sure we have canceled our performSelector in case the user changes viewController, add this to:

- (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; [NSObject cancelPreviousPerformRequestsWithTarget:self]; } 
Title: Correct way to implement a stock price alert system Id: 13873570, Count: 228 Tags: Answers: 1 AcceptedAnswer: 13873622 Created: 2012-12-14 06:07:21.0 Body:

Previously, in a desktop environment, here is how I would implement a stock price alert system:

  1. Spawn an infinitely running thread.
  2. The thread performs a stock price query against the stock server.
  3. The thread performs all the necessary alerting actions based on the retrieved stock price.
  4. The thread sleeps for a period N (N can be, let's say, 30 minutes).
  5. Go back to 2.

When it comes to a mobile environment, power-efficient usage is a major consideration. The stock alert mechanism should keep running even when I "close" the application using the back button.

There are two approaches I can think of.

Use Service

  1. Spawn an infinitely running Service.
  2. The service performs a stock price query against the stock server.
  3. The service performs all the necessary alerting actions based on the retrieved stock price.
  4. The service sleeps for a period N (N can be, let's say, 30 minutes).
  5. Go back to 2.

Use AlarmManager

  1. Install a BroadcastReceiver with AlarmManager.
  2. The BroadcastReceiver's onReceive will be triggered after the next period N.
  3. When the BroadcastReceiver is triggered, perform a stock price query against the stock server.
  4. The BroadcastReceiver performs all the necessary alerting actions based on the retrieved stock price.
  5. Before returning from onReceive, install another BroadcastReceiver with AlarmManager for the next period N.

I was wondering which way is better. Are there any better ways than these two? It seems to me that AlarmManager is better, as we do not need a thread sleeping for a long period, which seems like a waste of resources.

Popularity: 2.0 Answer #13873622, count #1, created: 2012-12-14 06:13:31.0

Use AlarmManager to trigger a BroadcastReceiver then have the BroadcastReceiver start an IntentService.

A BroadcastReceiver shouldn't do any long-running tasks but it can start a Service to do work. An IntentService (which extends Service) will do work on a worker thread and then self-terminate.

See IntentService

And Extending the IntentService class

In other words you can combine both ways that you are considering but without a continually running Service.
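
A minimal sketch of that combination (class names are placeholders; the manifest declarations for the receiver and service are omitted):

import android.app.AlarmManager;
import android.app.IntentService;
import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// In a real project each class lives in its own file and the receiver/service
// are declared in AndroidManifest.xml.

class StockAlarmReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Receivers must not do long-running work; hand off to the service.
        context.startService(new Intent(context, StockCheckService.class));
    }
}

class StockCheckService extends IntentService {
    public StockCheckService() { super("StockCheckService"); }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Query the stock server and raise alerts here (runs on a worker thread);
        // the IntentService stops itself when this method returns.
    }
}

class AlarmScheduler {
    // Schedule an inexact repeating alarm, e.g. every 30 minutes.
    static void scheduleChecks(Context context) {
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        PendingIntent pi = PendingIntent.getBroadcast(
                context, 0, new Intent(context, StockAlarmReceiver.class), 0);
        am.setInexactRepeating(AlarmManager.RTC_WAKEUP,
                System.currentTimeMillis(), AlarmManager.INTERVAL_HALF_HOUR, pi);
    }
}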

Title: How can i maintain Wifi connection state in Windows RT? Id: 13930993, Count: 229 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-12-18 10:32:31.0 Body:

I am developing a Windows RT app. I require the WiFi to maintain its connection state regardless of its power saving preference, since I'm implementing Qualcomm's AllJoyn library.
If the connection state changes, the P2P connection will be terminated.

How can I keep a certain state, like Android's WIFI_MODE_FULL? I browsed through MSDN but couldn't find any enlightenment.

Any help is appreciated.

Popularity: 2.0 Answer #13931954, count #1, created: 2012-12-18 11:28:00.0

Currently there's no direct way to do this, but there is a workaround: the same one that was used for WP7, streaming an empty MP3 file from a server. That way the media player will take care of keeping the Wi-Fi on, but it drains somewhat more battery.

Title: High resolution timer library in C++ Windows? Id: 13948105, Count: 230 Tags: Answers: 3 AcceptedAnswer: null Created: 2012-12-19 08:18:11.0 Body:

Possible Duplicate:
C++ high precision time measurement in Windows

I am developing a program that downloads files over FTP, and I am looking for a high-resolution timer library to calculate the download speed. For now I am using the C++ time(NULL) call, but the results aren't accurate.

Is there a simple, easy-to-use, plug-and-play kind of C++ library for the Windows platform? Something that gives the time elapsed in seconds since the last call, or something similar.

EDIT:

So QueryPerformanceCounter() was mentioned many times, but going through other threads this is what I found out:

You should be warned that it is based on the CPU frequency. This frequency is not stable when e.g. a power save mode is enabled. If you want to use this API, make sure the CPU is at a constant frequency.

Be warned, though, that intel's SpeedStep technology may change the PerformanceFrequency without your code noticing

*We also tried fixating the thread affinity for our threads to guarantee that each thread always got a consistent value from QueryPerformanceCounter, this worked but it absolutely killed the performance in the application. *

So, considering the situation, is it advisable to use it? The performance of the program and the reliability of the timer are very important.

Popularity: 11.0 Answer #13948149, count #1, created: 2012-12-19 08:21:50.0

You have QueryPerformanceCounter provided you don't fall into the buggy use-case described in the remarks on the MSDN docs

Example from: How to use QueryPerformanceCounter?

#include <windows.h>
#include <iostream>   // needed for cout (not shown in the original snippet)
using namespace std;

double PCFreq = 0.0;
__int64 CounterStart = 0;

void StartCounter()
{
    LARGE_INTEGER li;
    if(!QueryPerformanceFrequency(&li))
        cout << "QueryPerformanceFrequency failed!\n";

    PCFreq = double(li.QuadPart)/1000.0;   // counts per millisecond

    QueryPerformanceCounter(&li);
    CounterStart = li.QuadPart;
}

double GetCounter()
{
    LARGE_INTEGER li;
    QueryPerformanceCounter(&li);
    return double(li.QuadPart-CounterStart)/PCFreq;   // elapsed milliseconds
}

int main()
{
    StartCounter();
    Sleep(1000);
    cout << GetCounter() << "\n";
    return 0;
}
Answer #13948217, count #2, created: 2012-12-19 08:26:34.0

If you have a compiler that supports C++11, then you can simply use std::chrono; if not, then boost::chrono is at your service.

Answer #13949454, count #3, created: 2012-12-19 09:43:01.0

Use timeGetTime, the resolution should be sufficient for your needs.

Title: Matching a raw GPS readings to a route on a digital map Id: 13975890, Count: 231 Tags: Answers: 1 AcceptedAnswer: null Created: 2012-12-20 15:55:50.0 Body:

I'm developing an Android app that reads the GPS location every 1-2 minutes (a low sampling rate, to save power) while the user is traveling. At the end, I have a list of raw GPS readings from the user's trip. How do I convert this into a route on the map?

This is called map matching and is still an open research topic, but do you know of any open source tools that deal with this problem? Maybe Google Maps provides a web service that handles it?

Popularity: 2.0 Answer #13993577, count #1, created: 2012-12-21 16:18:53.0

Yes this is called MapMatching, but there is some research.
There is no open source map matching solution.

But your work is not only the map matching part; much of the work is also importing the digital road network. Altogether that could be some months of work. I have done that for a map matching project for the Transport for London City GPS Trials.

Much easier would be to simply read the GPS every 1-5 seconds; then you are done, with a 100% correct route and without map matching errors caused by invalid and/or outdated data in the road map.

Title: Using Wakelock with media player android Id: 14007456, Count: 232 Tags: Answers: null AcceptedAnswer: null Created: 2012-12-22 23:28:03.0 Body:

I'm doing research on Android power consumption. My question is about using a wake lock with the media player, something like:

mPlayer.setWakeMode(getApplicationContext(), PowerManager.PARTIAL_WAKE_LOCK); 

Now is it enough to call this?

mPlayer.start(); 

to acquire the wake lock, or should I also call something like this:

wakeLock.acquire(); 
Popularity: 1.0 Title: Process or Thread level Power Monitoring in Linux Id: 14069885, Count: 233 Tags: Answers: 2 AcceptedAnswer: 14074978 Created: 2012-12-28 12:55:05.0 Body:

I am looking for tools that can report process- or thread-level power consumption on Linux. I am looking for something similar to top, vmstat, mpstat, Activity Monitor (Mac), etc., along with power usage (even if approximate). I have seen a tool for Android, PowerTutor, that does a good job for specific Android phones. Are there similar tools that can provide such statistics for laptops/desktops etc. running Linux? Any suggestion is appreciated.

Popularity: 6.0 Answer #14070981, count #1, created: 2012-12-28 14:28:00.0

I couldn't see exact power values, just a lot of meta-information on power (tested on 64-bit Mint Maya). Nevertheless it might be useful to you:

PowerTOP is a Linux tool to diagnose issues with power consumption and power management. In addition to being a diagnostic tool, PowerTOP also has an interactive mode where you can experiment with various power management settings for cases where the Linux distribution has not enabled those settings.

PowerTOP reports which components in the system are most likely to blame for a higher-than-needed power consumption, ranging from software applications to active components in the system. Detailed screens are available for CPU C and P states, device activity, and software activity.

For many years, PowerTOP has been used heavily by Intel, Linux distributors, and various parts of the open source community. We're hoping that our users find the second generation even more useful for their needs.

homepage

another article

installation instructions:

sudo apt-get install powertop 

usage instructions

sudo powertop 
Answer #14074978, count #2, created: 2012-12-28 19:54:03.0
  1. PowerPack 3.0 is software developed by Virginia Tech for direct measurement of the power consumption of a system's major components: http://scape.cs.vt.edu/software/powerpack-3-0/

  2. The PAPI API can provide several performance counters: http://icl.cs.utk.edu/papi/overview/index.html

  3. Power Analyzer for the ARM processor is a joint venture of the University of Michigan and the University of Colorado: http://web.eecs.umich.edu/~panalyzer/

Title: Using startMonitoringSignificantLocationChanges with NSTimer to improve battery usage Id: 14100473, Count: 234 Tags: Answers: 1 AcceptedAnswer: 14128764 Created: 2012-12-31 10:37:23.0 Body:

I'm currently working on an iOS app that another developer started. The app needs to monitor location changes because it needs to know the user's position with low precision (about a hundred meters). The previous implementation of the location handling was done using an NSTimer and startUpdatingLocation. The execution goes like this:

// Fire each 10 seconds start updating location self.timerPosition = [NSTimer scheduledTimerWithTimeInterval:ti target:self selector:@selector(location) userInfo:nil repeats:YES]; [self.timerPosition fire]; 

Location selector does this

// Configure & check location services enabled ... self.locman.delegate = self; self.locman.desiredAccuracy = kCLLocationAccuracyHundredMeters; [self.locman startUpdatingLocation]; 

And then in the location manager delegate

[manager stopUpdatingLocation]; 

But reading about getting the user location in Apple docs, it seems that the right way to get the location with low-power consumption is to use startMonitoringSignificantLocationChanges.

My question is: is it a good decision to keep the location timer in combination with startMonitoringSignificantLocationChanges instead of startUpdatingLocation, or is that a nonsense approach?

I do not need to get the location when the app is in the background, but I want to know when the user has changed position while the app is active.

Popularity: 9.0 Answer #14128764, count #1, created: 2013-01-02 20:35:13.0

I can tell you that the timer isn't needed when you are using the low-power -startMonitoringSignificantLocationChanges. That method only fires delegate callbacks when the device detects a change. This location check does not use GPS; it uses Wi-Fi and cell-tower triangulation, which is already happening anyway. So when using this method, there is no need to slow anything down to save battery life. Just set up the delegate methods and respond accordingly.

I'm not sure what your implementation of location is for, but region monitoring is another great way to get location updates using little to no battery. Region monitoring is much more helpful when you have specific locations to monitor rather than just the general user location. Hope this clears things up.

Title: SimpleDB Select VS DynamoDB Scan Id: 14150479, Count: 235 Tags: Answers: 1 AcceptedAnswer: 14285885 Created: 2013-01-04 02:58:52.0 Body:

I am making a mobile iOS app. A user can create an account, and upload strings. It will be like twitter, you can follow people, have profile pictures etc. I cannot estimate the user base, but if the app takes off, the total dataset may be fairly large.

I am storing the actual objects on Amazon S3 and the keys in a database, since listing Amazon S3 keys is slow. So which would be better for storing keys?

This is my knowledge of SimpleDB and DynamoDB:

SimpleDB:

  • Cheap
  • Performs well
  • Designed for small/medium datasets
  • Can query using select expressions

DynamoDB:

  • Costly
  • Extremely scalable
  • Performs great; millisecond response
  • Cannot query

These points are correct to my understanding: DynamoDB is more about killer speed and scalability, SimpleDB is more about querying and price (while still delivering good performance). But if you look at it this way, which will be faster: downloading ALL keys from DynamoDB, or doing a select query with SimpleDB... hard, right? One is using a blazing fast database to download a lot (and then we have to match them), and the other is using a reasonably good-performance database to query and download only the few correct objects. So, which is faster:

DynamoDB downloading everything and matching OR SimpleDB querying and downloading that

(NOTE: Matching just means using -rangeOfString and string comparison, nothing power consuming or non-time efficient or anything server side)

My S3 keys will use this format for every type of object

accountUsername:typeOfObject:randomGeneratedKey

E.g. If you are referencing to an account object

Rohan:Account:shd83SHD93028rF

Or a profile picture:

Rohan:ProfilePic:Nck83S348DD93028rF37849SNDh

I have the randomly generated key for uniqueness; it does not refer to anything, it is simply there so that keys are not repeated, which would cause two objects to overlap.

In my app, I can either choose SimpleDB or DynamoDB, so here are the two options:

  • Use SimpleDB: store keys with the format, but not use the format for any reference; instead use attributes stored with SimpleDB. So I store the key with attributes like username, type, and maybe others I would also have to include in the key format. If I want to get the account object from user 'Rohan', I just use SimpleDB Select to query the attribute 'username' and the attribute 'type' (where I match for 'account').

  • Use DynamoDB: store keys, and each key will have the illustrated format. I scan the whole database, returning every single key. Then, taking advantage of the key format, I can use -rangeOfString to match the ones I want and then download from S3.

Also, SimpleDB is apparently geographically-distributed, how can I enable that though?

So which is quicker and more reliable? Using SimpleDB to query keys with attributes. Or using DynamoDB to store all keys, scan (download all keys) and match using e.g. -rangeOfString? Mind the fact that these are just short keys that are pointers to S3 objects.

Here is my last question, and the amount of objects in the database will vary on the decided answer, should I:

  • Create a separate key/object for every single object a user has
  • Create an account key/object and store all information inside there

There would obviously be different advantages and disadvantages between these two options. For example, retrieval would be quicker if everything is separate, but storing everything in one user's account object is more organized and keeps the dataset smaller.

So what do you think?

Thanks for the help! I have put a bounty on this, really need an answer ASAP.

Popularity: 19.0 Answer #14285885, count #1, created: 2013-01-11 20:10:07.0

Wow! What a Question :)

Ok, lets discuss some aspects:

S3

S3 listing performance is most likely low because you're not adding a Prefix when listing keys.

If you shard by storing the objects like type/owner/id, listing all the ids for a given owner (prefixed as type/owner/) will be fast. Or at least, faster than listing everything at once.
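As a hedged sketch of such a prefixed listing with the AWS SDK for Java (the bucket name and prefix are purely illustrative):

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class PrefixedListing {
    public static void main(String[] args) {
        AmazonS3Client s3 = new AmazonS3Client();      // credentials picked up from the environment
        ObjectListing listing = s3.listObjects(new ListObjectsRequest()
                .withBucketName("my-bucket")           // illustrative bucket name
                .withPrefix("Account/Rohan/"));        // type/owner/ prefix as suggested above
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            System.out.println(summary.getKey());      // only keys under the prefix come back
        }
    }
}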

Dynamo Versus SimpleDB

In general, that's my advice:

  • Use SimpleDB when:

    • Your entity storage isn't going to pass over 10GB
    • You need to apply complex queries involving multiple fields
    • Your queries aren't well defined
    • You can leverage from Multi-Valued Data Types
  • Use DynamoDB when:

    • Your entity storage will pass 10GB
    • You want to scale demand / throughput as it goes
    • Your queries and model is well-defined, and unlikely to change.
    • Your model is dynamic, involving a loose schema
    • You can cache on your client-side your queries (so you can save on throughput by querying the cache prior to Dynamo)
    • You want to do aggregate/rollup summaries, by using Atomic Updates

Given your current description, it seems SimpleDB is actually better, since:

  • Your model isn't completely defined
  • You can defer some decision aspects, since it takes a while to hit the (10GiB) limits

Geographical SimpleDB

It isn't supported. SimpleDB works only from us-east-1, AFAIK.

Key Naming

This applies most to Dynamo: Whenever you can, use Hash + Range Key. But you could also create keys using Hash, and apply some queries, like:

  • List all my records on table T which starts with accountid:
  • List all my records on table T which starts with accountid:image

However, those are still Scans. Bear that in mind.

(See this for an overview: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/API_Scan.html)
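A rough sketch of such a begins-with match expressed as a Scan with the AWS SDK for Java (table, attribute, and prefix names are illustrative assumptions; the point is that the whole table is still read, the filter only trims what is returned):

import java.util.Collections;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ComparisonOperator;
import com.amazonaws.services.dynamodbv2.model.Condition;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;

public class PrefixScan {
    public static void main(String[] args) {
        AmazonDynamoDBClient dynamo = new AmazonDynamoDBClient();  // credentials from the environment
        Condition beginsWith = new Condition()
                .withComparisonOperator(ComparisonOperator.BEGINS_WITH)
                .withAttributeValueList(new AttributeValue().withS("Rohan:Account:"));
        ScanResult result = dynamo.scan(new ScanRequest()
                .withTableName("Keys")                              // illustrative table name
                .withScanFilter(Collections.singletonMap("s3Key", beginsWith)));
        // The Scan still touches every item; only the matching ones are returned.
        System.out.println("Matched items: " + result.getCount());
    }
}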

Bonus Track

If you're using Java, cloudy-data on Maven Central includes SimpleJPA with some extensions to Map Blob Fields to S3. So give it a look:

http://bitbucket.org/ingenieux/cloudy

Thank you

Title: Local Notifications each month in android Id: 14211919, Count: 236 Tags: Answers: 1 AcceptedAnswer: 14212053 Created: 2013-01-08 09:32:35.0 Body:

I am using following code

AlarmManager service = (AlarmManager) this.getSystemService(Context.ALARM_SERVICE);
Intent i = new Intent(this, AlarmReceiver.class);
i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
PendingIntent pending = PendingIntent.getBroadcast(this, 0, i, PendingIntent.FLAG_CANCEL_CURRENT);
Calendar cal = Calendar.getInstance();
// Start 30 seconds after boot completed
cal.add(Calendar.SECOND, 30);
// Fetch every 30 seconds
// InexactRepeating allows Android to optimize the energy consumption
service.setInexactRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(),
        AlarmManager.INTERVAL_DAY, pending);
// service.setRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(),
//         REPEAT_TIME, pending);

In my code, I expect it to fire after an interval of one day and send a broadcast to the AlarmReceiver.

However, I want this to happen after exactly one month. E.g., if someone installs the application on the 3rd of January, the next alarm will occur on the 3rd of February, and so on. How can I set an interval of one month?

Popularity: 4.0 Answer #14212053, count #1, created: 2013-01-08 09:40:11.0

Please use the following code; it doesn't differ much from your current code.

AlarmManager service = (AlarmManager) this.getSystemService(Context.ALARM_SERVICE);
Intent i = new Intent(this, AlarmReceiver.class);
i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
PendingIntent pending = PendingIntent.getBroadcast(this, 0, i, PendingIntent.FLAG_CANCEL_CURRENT);
Calendar cal = Calendar.getInstance();
// Start 1 month after boot completed
cal.add(Calendar.MONTH, 1);
// Fetch every 1 month
// InexactRepeating allows Android to optimize the energy consumption
service.setInexactRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(),
        AlarmManager.INTERVAL_DAY, pending);
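Note that AlarmManager has no monthly interval constant, so the code above still repeats daily after the first firing. A hedged sketch of an alternative (not from the original answer; class names follow the question) is to schedule a one-shot alarm and re-arm it from the receiver, letting Calendar handle the varying month lengths:

import java.util.Calendar;
import android.app.AlarmManager;
import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class AlarmReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // ... handle the monthly event here ...
        scheduleNextMonth(context);   // re-arm for the same day next month
    }

    // Call this once when the app is installed/started, and again from onReceive().
    public static void scheduleNextMonth(Context context) {
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent i = new Intent(context, AlarmReceiver.class);
        PendingIntent pending = PendingIntent.getBroadcast(context, 0, i,
                PendingIntent.FLAG_CANCEL_CURRENT);
        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.MONTH, 1);   // Calendar handles month lengths (Jan 3 -> Feb 3)
        am.set(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(), pending);
    }
}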
Title: Broadcast service not getting called Id: 14285369, Count: 237 Tags: Answers: 1 AcceptedAnswer: 14285439 Created: 2013-01-11 19:32:59.0 Body:

Using following code

Intent i = new Intent(this, BootUpReceiverRecall.class);
sendBroadcast(i);

<receiver android:process=":remote" android:name="BootUpReceiverRecall"></receiver>

public class BootUpReceiverRecall extends BroadcastReceiver {
    // Restart service every 30 seconds
    private static final long REPEAT_TIME = 1000 * 30;

    @Override
    public void onReceive(Context context, Intent intent) {
        AlarmManager service = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent i = new Intent(context, BootUpReceiver.class);
        PendingIntent pending = PendingIntent.getBroadcast(context, 0, i,
                PendingIntent.FLAG_CANCEL_CURRENT);
        Calendar cal = Calendar.getInstance();
        // Start 30 seconds after boot completed
        cal.add(Calendar.SECOND, 30);
        // Fetch every 30 seconds
        // InexactRepeating allows Android to optimize the energy consumption
        service.setInexactRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(),
                REPEAT_TIME, pending);
        // service.setRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(),
        //         REPEAT_TIME, pending);
    }
}

My BootUpReceiver never gets called. What am I doing wrong?

Popularity: 1.0 Answer #14285439, count #1, created: 2013-01-11 19:37:53.0

You need to define it properly in AndroidManifest.xml :

<receiver android:process=":remote" android:name=".BootUpReceiverRecall" /> 

Take a look at "android:name" tag,
you need to add a dot (".") before "BootUpReceiverRecall" if it is in the same package as your application, but if it is not, you can simply use the full name, like "app.package.receivers.BootUpReceiverRecall".

Title: Measuring and Exporting System Performance on Android Phones in Battery Drainage Experiment Id: 14288772, Count: 238 Tags: Answers: null AcceptedAnswer: null Created: 2013-01-12 00:18:20.0 Body:

I am working on an experiment that requires measuring battery drainage and system performance impact for an android app. The app has multiple features that can be turned on and off individually, hence allowing me to measure the impact of each component independently.

The app is a connectivity monitoring tool that gathers information about signal strength, performs QoS tests for both mobile and wi-fi connections, logs information about call duration and GPS info, etc.

To measure the power consumption (hence battery drainage), I will be running the phone using the Monsoon Power Monitor which will be connected to a PC to collect the required data. I will be establishing a baseline first for each Android phone that I will use for the experiment and then perform the app component measurements.

However, I still haven't figured out the best way to measure the system performance metrics (Memory Usage, CPU Load, etc...) in a way that can help me fulfill the following requirements:

  • External Measurement: the measurement needs to be independent of my app. That is, I don't want to add code to log system performance information from within my app.
  • Exportable: the measurements need to be associated with time and can also be exported for analysis alongside the measurements from the Monsoon Power Monitor.
  • Rooting the Phone: the method of measuring phone's system performance metrics CANNOT be one that requires rooting of the phone. The environment that is simulated is a typical end user, who usually wouldn't root their phone.

From my understanding, there is no way to do an external (totally hardware external) method to gather the performance information. Hence, the next best method is to use another app and establish a base line for that app. However, considering the aforementioned requirements, what application (or potentially a different method) would help me achieve the described objective?

Thanks!

Popularity: 10.0 Title: how to put application on sleep mode Id: 14318655, Count: 239 Tags: Answers: 1 AcceptedAnswer: null Created: 2013-01-14 12:45:14.0 Body:

In order to save power, I want to put my Android application into sleep mode. How can I manually put my application into sleep mode?

In sleep mode, I know it uses less CPU and RAM. For this reason, I want to do this.

Popularity: 3.0 Answer #14318694, count #1, created: 2013-01-14 12:47:52.0

How can I manually put my application on the sleep mode ?

Applications do not go into "sleep mode". Devices go into sleep mode.

You cannot manually put a device into sleep mode, as the user may be using it at the time. Please allow the device to fall asleep normally.

In sleep mode, I know it uses less CPU and RAM.

Applications do not use less RAM when the device is in sleep mode. The device's CPU is powered down (or put into a low-power state) during sleep mode, but, again, this is a characteristic of a device, not of an application.

Title: AlarmManager - Wake with WiFi? Id: 14362612, Count: 240 Tags: Answers: 1 AcceptedAnswer: 14362894 Created: 2013-01-16 16:03:57.0 Body:

So I've created the majority of my application, but I am having an issue with power saving applications interfering with it. I use the AlarmManager to run a piece of code that sends information to a server every x minutes (minimum 1 h). The main issue I am having is that power managers are disabling WiFi because the device is sleeping.

What's the most effective way to ensure WiFi is available at wakeup? Is it to simply enable WiFi and reconnect it?

Popularity: 1.0 Answer #14362894, count #1, created: 2013-01-16 16:19:24.0

Maybe another way to get to your solution is to listen for the connectivity-change intents. That way you know there is a connection to the internet and you can upload.

You can also enable WiFi but you will need permissions for that: (I guess these)

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"></uses-permission>
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE"></uses-permission>

WifiManager wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);
wifi.setWifiEnabled(enabled);

Haven't tried it but this should do the trick.
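Alternatively, a minimal sketch of the connectivity-listener idea mentioned above (the receiver class and the upload call are illustrative; the broadcast action is the standard android.net.conn.CONNECTIVITY_CHANGE, registered in the manifest):

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

// Upload queued data only when a connection is actually available,
// instead of forcing WiFi back on at every wakeup.
public class ConnectivityReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        ConnectivityManager cm =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo info = cm.getActiveNetworkInfo();
        if (info != null && info.isConnected()) {
            // uploadPendingData(context);   // hypothetical method: flush whatever the alarm queued
        }
    }
}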

Title: Statistics on Location Settings vs. Power Consumption Id: 14473773, Count: 241 Tags: Answers: 1 AcceptedAnswer: 14474060 Created: 2013-01-23 06:32:55.0 Body:

Is there any data on how GPS power consumption changes with different CLLocationManager settings, such as desired accuracy and distance filter? How it changes between different iPhone models? How it differs by location?

For my social location-aware app, more accuracy is always better, but no specific amount is required.

Popularity: 4.0 Answer #14474060, count #1, created: 2013-01-23 06:56:36.0

Take a look at the section "Tips for Conserving Battery Power" in documentation. In particular this part:

Use lower-resolution values for the desired accuracy unless doing so would impair your app. Requesting a higher accuracy than you need causes Core Location to power up additional hardware and waste power for precision you are not using. Unless your app really needs to know the user’s position within a few meters, do not put the values kCLLocationAccuracyBest or kCLLocationAccuracyNearestTenMeters in the desiredAccuracy property. And remember that specifying a value of kCLLocationAccuracyThreeKilometers does not prevent the location service from returning better data. Most of the time, Core Location can return location data with an accuracy within a hundred meters or so using Wi-FI and cellular signals.

source: Apple Documentation

Title: Source code power profiling Id: 14484719, Count: 242 Tags: Answers: 1 AcceptedAnswer: null Created: 2013-01-23 16:39:03.0 Body:

I was wondering if there exist power profiling tools for programs that report results at the source code level. For example, profiling results that report the power consumption at specific source code lines, functions, modules, etc.

The language and platform are not important to me. I just want to know if there is such an animal.

Popularity: 2.0 Answer #14485131, count #1, created: 2013-01-23 16:57:32.0

There is research being done on this right now at universities, but it’s still in an experimental stage, and I’m not aware of commercial tools for this yet.

A professor at my alma mater is working on this, and he calls it Green Mining: The Effect of Software Change on Power Consumption. Right now it involves hooking up a Kill-a-Watt with USB to another computer and recording lots of data while running controlled tests on the software. For mobile devices it gets even more complicated, because you have to wire up circuit boards to measure the power drain on the battery in real time:

mobile phone power monitoring setup

Eventually there will be statistical models that, based on data gathered by running power tests over all sorts of other code, will be able to give you power profiles of source code without all this hardware. Your IDE will warn you: “Are you sure you want to do that? That will reduce average laptop battery life by 3 minutes compared to this other way of doing it.” That is a very long way off, though.

I do vaguely remember hearing that one of the initial results was that the depth of the class inheritance hierarchy is positively correlated with power consumption … Browse these papers if you’re interested!

Title: how to calculate data center energy consumption by server virtualizatiozation Id: 14528244, Count: 243 Tags: Answers: null AcceptedAnswer: null Created: 2013-01-25 18:38:52.0 Body:

I wanted to know how I can calculate data center energy consumption before server virtualization and after server virtualization, and what a consolidation ratio is. Please clear up my concepts by giving me one example. I have tried to find this kind of example on many sites but did not get an accurate response from any of them. I have seen a calculator on this site:

http://www.apcmedia.com/salestools/WTOL-7B6SFC_R0_EN.swf 

in which I can calculate a server's annual energy savings and annual energy cost, but I have no idea what kind of formulas are implemented behind this kind of calculator. If anyone has an idea, kindly share it with me, because it is part of my project.

Popularity: 0.0 Title: Is it possible to tweak the capture rate on demand while the camera in iPhone is open all the time? Id: 14558208, Count: 244 Tags: Answers: null AcceptedAnswer: null Created: 2013-01-28 08:32:41.0 Body:

I am a novice in iOS development.
Currently I want to minimize the energy consumed by the camera. I think this may be done by tweaking the capture rate of the camera when the situation only requires a low frame rate. Is it possible to tweak the capture rate on demand while the camera is open all the time?

Popularity: 0.0 Title: When to call [clLocationManager stopUpdatingLocation] Id: 14568453, Count: 245 Tags: Answers: 1 AcceptedAnswer: null Created: 2013-01-28 18:16:11.0 Body:

To save power in my app I have decided to use a mix of startUpdatingLocation when the app is active and go into startMonitoringSignificantLocationChanges mode when the app is in the background. Basically I do the following when the app goes into the background:

- (void)applicationDidEnterBackground:(UIApplication *)application {
    [myLocationManager startMonitoringSignificantLocationChanges];
}

And when the app comes back into the foreground I do the following:

- (void)applicationDidBecomeActive:(UIApplication *)application {
    // Other irrelevant code
    [myLocationManager stopMonitoringSignificantLocationChanges];
    [myLocationManager startUpdatingLocation];
}

This seems logical, to me anyways. My question is, should I be calling the stopUpdatingLocation method in the applicationDidEnterBackground event? Like so:

- (void)applicationDidEnterBackground:(UIApplication *)application {
    [myLocationManager stopUpdatingLocation];
    [myLocationManager startMonitoringSignificantLocationChanges];
}

Where exactly should I be calling the stopUpdatingLocation method?? Please tell me if there is more than one place where this should be done. I'm assuming any error event should stop the updating?

Popularity: 6.0 Answer #14901587, count #1, created: 2013-02-15 19:04:35.0

I don't see anything wrong with what you are doing. Note that I have a commercial app that heavily uses location services, and I'm in the midst of rewriting it to improve its performance and minimize battery usage.

My released version uses sigLocationChanges predominantly (in background & foreground), but switches to using startUpdatingLocation when unhappy with the quality of the location sigLocationChanges gives me, since my UI has to display the user's location roughly accurately. I call stopUpdatingLocation immediately after each event to minimize battery drain. In my shipping version this seems to work okay, but my log files have found a tiny subset of users who seem to constantly get poor locations, and I'm spinning up the GPS hardware more than I like.

Also in Privacy Settings, the type of location icon displayed for your app will be determined by when you last used the full GPS location mode. Mine always shows the location icon that indicates a heavy battery impact, even if I'm only using startUpdatingLocation briefly a couple times per day, which can make my users paranoid about how my app affects their battery life.

In my new release, to minimize the battery drain of using startUpdatingLocation, I've cut its use back to hopefully nil. When the app activates, I now get the current location directly from the location manager, cLLocMgr.location. Typically that's an accurate location, and my UI can be instantly drawn (or refreshed) correctly. I also check it again when certain views are activated, to ensure that if the user is moving while keeping my app open, the display keeps up. Now I only spin up the GPS hardware if the phone has a bad location in a specific situation where a good location is absolutely required in the app. In that case, I limit its use to 2 minutes (I'm assuming 2 minutes is long enough to get the best location from the GPS hardware), and wait at least 10 minutes before allowing its use again.

Your question doesn't give me enough info to tell how accurate you need to be and how dynamic your location display is. But unless you need super accuracy and dynamic display, you should consider just using the current location without spinning up the GPS hardware to save battery.

Title: windows 8 desktop: bluetooth event listener Id: 14573752, Count: 246 Tags: Answers: 1 AcceptedAnswer: 14609020 Created: 2013-01-29 00:32:14.0 Body:

I'm writing a windows 8 desktop app for the tablet that tracks the bluetooth radio status in order to monitor power consumption. Basically, I want to find out the initial radio status, as well as receive callbacks whenever the status changes. I've looked through the MSDN bluetooth functions (http://msdn.microsoft.com/en-us/library/windows/desktop/aa362927%28v=vs.85%29.aspx) but haven't been able to find anything about the event callback.

Can someone please point me in the right direction? Is there a way to do this (preferably in C#, but C/C++ is fine as well)?

Thank you

Popularity: 3.0 Answer #14609020, count #1, created: 2013-01-30 16:36:10.0

I think there is no Bluetooth event regarding radio removal. You will be able to use the general hardware events to see device removal; those are set up by the function RegisterDeviceNotification. See e.g. http://msdn.microsoft.com/en-gb/library/windows/desktop/aa363431.aspx http://www.codeproject.com/Articles/14500/Detecting-Hardware-Insertion-and-or-Removal

Title: Explanation of GPGPU energy efficiency relative to CPU? Id: 14586233, Count: 247 Tags: Answers: 2 AcceptedAnswer: 14588307 Created: 2013-01-29 15:21:04.0 Body:

I've heard the statement that for many applications GPUs are more energy efficient than multi-core CPUs, particularly when the graphics hardware is well utilized. I'm having trouble finding papers, articles, or anything describing the specific architectural features that result in that claim, or a scientific study directly comparing the energy consumption of GPUs to CPUs on a set of benchmarks. Could anyone provide more insight into the backing of this claim, or point me to some studies that show evidence for it?

If I had to guess, I would say that it mostly stems from the lower frequencies of GPU clocks. Additionally, this paper:

http://accel.cs.vt.edu/sites/default/files/paper/huang-hppac09-gpu.pdf

suggests that it's partially a result of GPUs just getting the problem done quicker, so even though the peak power consumption is higher for GPUs the time they spend at that level is much shorter than CPUs (again, for the right problems). Can anyone add anything more?

Thanks!

Popularity: 7.0 Answer #14587352, count #1, created: 2013-01-29 16:20:57.0

Usually, these claims are backed by comparing the GFLOPs performance and estimating the power per floating point operation as shown in this post. But this is essentially what you wrote in your last sentence.

You also have to take into account that the CPU and GPU architectures target different problems. Whereas a CPU core (at least on x86) has deep pipelines, a large instruction set and very sophisticated caching strategies to cater to a wide array of problems, a GPU core is rather simple and thus draws a lot less power. To make up for this, there are many more computing cores in a GPU than in a CPU. But you probably know that already.

Answer #14588307, count #2, created: 2013-01-29 17:09:42.0

TL;DR answer: more of the transistors in a gpu are actually working on the computation than in a cpu.

The big power efficiency-killer of today's cpus is a trade-off to allow general computation on the chip. Whether it is a RISC, x86, or other cpu architecture, there is extra hardware dedicated to the general purpose usage of the cpu. These transistors require electricity, although they are not doing any actual math.

Fast cpus require advanced branch prediction hardware and large cache memory to be able to avoid lengthy processing which could be discarded later in the pipeline. For the most part, cpus execute their instruction one at a time (per cpu core, SIMD helps out cpus as well...), and handle conditions extremely well. Gpus rely on doing the same operation on many pieces of data (SIMD/vector operation), and suffer greatly with simple conditions found in 'if' and 'for' statements.

There is also a lot of hardware used to fetch, decode, and schedule instructions -- this is true for cpus and gpus. The big difference is that the ratio of fetch+decode+schedule transistors to computing transistors tends to be much lower for a gpu.

Here is an AMD presentation (2011) about how their gpus have changed over time, but this really applies to most gpus in general. PDF link. It helped me understand the power advantage of gpus by knowing a bit of the history behind how gpus got to be so good at certain computations.

I gave an answer to a similar question a while ago. SO Link.

Title: getting gps location iOS Id: 14641740, Count: 248 Tags: Answers: null AcceptedAnswer: null Created: 2013-02-01 07:40:48.0 Body:

I'm creating an app that gets the GPS location. I can get my code to work when I put it into a view controller, but not in an NSObject subclass.

View Controller Code(works fine):

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Create location manager object
        locationManager = [[CLLocationManager alloc] init];

        // There will be a warning from this line of code; ignore it for now
        [locationManager setDelegate:self];

        // And we want it to be as accurate as possible
        // regardless of how much time/power it takes
        [locationManager setDesiredAccuracy:kCLLocationAccuracyBest];

        // Tell our manager to start looking for its location immediately
        [locationManager startUpdatingLocation];
    }
    return self;
}

- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations
{
    // If it's a relatively recent event, turn off updates to save power
    CLLocation* location = [locations lastObject];
    NSDate* eventDate = location.timestamp;
    NSTimeInterval howRecent = [eventDate timeIntervalSinceNow];
    if (abs(howRecent) < 15.0) {
        // If the event is recent, do something with it.
        NSLog(@"latitude %+.6f, longitude %+.6f\n",
              location.coordinate.latitude,
              location.coordinate.longitude);
    }
}

- (void)locationManager:(CLLocationManager *)manager didFailWithError:(NSError *)error
{
    NSLog(@"Could not find location: %@", error);
}

- (void)dealloc
{
    // Tell the location manager to stop sending us messages
    [locationManager setDelegate:nil];
}

Class when calling [class startStandardUpdates](does not work):

- (void)startStandardUpdates
{
    // Create the location manager if this object does not
    // already have one.
    if (!locationManager) {
        locationManager = [[CLLocationManager alloc] init];
    }
    NSLog(@"StartStandardUpdateds Called");

    locationManager.delegate = self;
    locationManager.desiredAccuracy = kCLLocationAccuracyKilometer;

    // Set a movement threshold for new events.
    locationManager.distanceFilter = 500;

    [locationManager startUpdatingLocation];
}

// Delegate method from the CLLocationManagerDelegate protocol.
- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations
{
    // If it's a relatively recent event, turn off updates to save power
    CLLocation* location = [locations lastObject];
    NSDate* eventDate = location.timestamp;
    NSTimeInterval howRecent = [eventDate timeIntervalSinceNow];
    if (abs(howRecent) < 15.0) {
        // If the event is recent, do something with it.
        NSLog(@"latitude %+.6f, longitude %+.6f\n",
              location.coordinate.latitude,
              location.coordinate.longitude);
    }
}

I think it has something to do with the delegate, but I'm not sure why when I try to make it a class it doesn't work. Cheers!

edit: The location manager is an instance variable in the class gps. In my view controller I create an instance of the class gps (gps *gps = [[gps alloc] init]) in the - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil method and call the method [gps startStandardUpdates].

Popularity: 9.0 Title: Does the amount of data stored in the memory (e.g. DDR3) affect its power consumption? Id: 14650973, Count: 249 Tags: Answers: 1 AcceptedAnswer: 14885941 Created: 2013-02-01 16:41:07.0 Body:

I am not familiar with memory principles. This may be a naive question.

Suppose I have a DRAM with 1GB, like the one in my tablet.

Will there be power consumption differences when it holds 200MB data, or 500MB data?

Popularity: 2.0 Answer #14885941, count #1, created: 2013-02-14 23:27:16.0

If the data is just present in memory and not being accessed, No.

But the power consumption will increase in following cases:

  1. You read memory - uses read circuit - very likely that you will read data when it is present in the memory

  2. When you alter/write memory - large consumption when bit alterations in cells take place from 0 to 1 or 1 to 0

Also, there is a constant leakage current at all time the device is on.

Title: Power Usage of (virtual) Sensors Android Id: 14719418, Count: 250 Tags: Answers: null AcceptedAnswer: null Created: 2013-02-06 00:17:17.0 Body:

Looking through the capabilities of my Nexus 4 I noticed that Sensors seem to be reported multiple times.

I haven't worked with sensors (or Smartphones in general) before, so I used some apps to get an overview: the Device Analyzer from AndroidFragmentation.com and the Sensors Explorer both bring up 15 sensors. You can see the results here.

And while according to iFixIt.com there is an Invensense MPU-6050 built in as (only) Gyroscope and Accelerometer, Android reports:

  • 2 Sensors by LGE
    • "LGE Accelerometer Sensor"
    • "LGE Gyroscope Sensor"
  • 2 by Qualcomm
    • "Linear Acceleration"
    • "Rotation Vector"
  • 4 by Google
    • "Rotation Vector Sensor"
    • "Linear Acceleration Sensor"
    • "Orientation Sensor"
    • "Corrected Gyroscope Sensor"

According to Sensor List in Samsung GT-I9300 some of those sensors will be "virtual". However, what is actually interesting to me at the moment is the power consumption of the sensors. And that is the point where I get really confused.

Take the accelerometer as an example: "LGE Accelerometer Sensor" reports 0.5 mA, whereas the "Linear Acceleration" (Qualcomm) reports 4.1 mA and "Linear Acceleration Sensor" (Google) reports 9.1 mA. All three have the same Resolution (0.0011901855 SU), LGE and Qualcomm have the same maximum range (39.226593 SU), while Google reports 19.6133 SU.

I first thought this might give access to different operation modes, which would explain the differing values, but then again, that would not explain the other vendors.

Now: How many Accelerators are actually present? Are they really redundant, or are they just virtual access paths to the same device? If so, why does the power usage differ so vastly? And why the range?

Update According to the Specifications the Gyroscope will drain a current of 3.6mA (matching "LGE Gyroscope Sensor", all other report 9.1mA) and the Accelerometer might vary between 500µA in normal operation Mode and 10µA @ 1.25Hz to 110µA @ 40Hz in low power mode.

With a voltage of 3 V (typical according to the specs) and P = U · I, this yields 3 V × 3.6 mA = 10.8 mW for the Gyroscope, and between 3 V × 10 µA = 30 µW and 3 V × 500 µA = 1.5 mW for the Accelerometer.

The sensors which report "Google Inc." seem to be virtual ones, which perform sensor-fusion to deliver values of higher accuracy and usability. See this Google Tech Talk.

Popularity: 5.0 Title: Optimising Firebase's power consumption on mobiles Id: 14731293, Count: 251 Tags: Answers: 1 AcceptedAnswer: 14734119 Created: 2013-02-06 14:20:02.0 Body:

For Firebase-based mobile applications in which latencies of ~1 minute (or manual sync) are acceptable, will power consumption be optimal? Is it possible and does it make sense to adjust keep-alives, etc?

Popularity: 1.0 Answer #14734119, count #1, created: 2013-02-06 16:35:27.0

Firebase is optimized for real-time communication (meaning as low latency as possible). While I have no reason to suspect it'll be a power hog, we haven't (yet) optimized for power consumption or done any in-depth testing.

Feel free to email support@firebase.com if you do any testing on your own and want to share your findings.

Title: CLLocationManager Precision and Power Consumption Id: 14744506, Count: 252 Tags: Answers: 1 AcceptedAnswer: 14744869 Created: 2013-02-07 05:50:05.0 Body:

I know that CLLocationManager gives us multiple precision choices, so it does not constantly send you a bunch of notifications. But does it save power?

I'm looking to do a geofencing application that reminds me to get off the bus. LOL

Popularity: 3.0 Answer #14744869, count #1, created: 2013-02-07 06:17:59.0

The power consumption is dependent on the desiredAccuracy setting of the LocationManager. If you set it to kCLLocationAccuracyBestForNavigation then the power consumption is the highest.

In your scenario you could potentially use kCLLocationAccuracyKilometer as the bus stops should be at least a Kilometer apart.

Title: Energy Consumed by my code in Ubuntu Id: 14771654, Count: 253 Tags: Answers: 1 AcceptedAnswer: null Created: 2013-02-08 11:34:48.0 Body:

I'm working in Ubuntu with some functions and I would like to know how much energy they consume, but I haven't found a good tool for this, or at least one that I could get to run. I have tried:

  • Intel Energy Checker. But I can't make it run; I have some errors in productivity_link.h, and I haven't touched this file, only downloaded it.
  • PowerTOP. This tool works, but with it you can only see the discharge rate of your battery, and this is not very 'exact' because some OS processes can be executed automatically.

Could someone help me? Thanks in advance.

Popularity: 2.0 Answer #14772063, count #1, created: 2013-02-08 11:58:15.0

Did you check pwrkap? You can install it (and the GUI too) with:

sudo apt-get install pwrkap pwrkap-gui 

You can also check this link: powerstat: Power Consumption Calculator for Ubuntu Linux

Title: Power save mode STM32F205RG Id: 14778298, Count: 254 Tags: Answers: 1 AcceptedAnswer: 14834668 Created: 2013-02-08 17:43:15.0 Body:

I am using STM32F205RGT6 Cortex-M3 microcontroller and coding with IAR Embedded Workbench.

I plan to keep the microcontroller in a power saving mode most of the time, except when an external component tries to communicate either via SPI (the STM32 microcontroller is meant to be an SPI slave) or via USB.

One external component is connected via SPI (PB12-15) and a PC is connected via USB (PA11-12). The communication works fine - I have tested both the SPI as well as the USB. I figured that once I am done setting up SPI and USB I will call a power saving function and add the same function call at the end of the interrupt service routines. I have found PWR_EnterSTANDBYMode and PWR_EnterSTOPMode (in stm32f2xx_pwr.h), both of which I tried using. However, with such an arrangement I cannot establish any communication (SPI or USB) with the microcontroller.

Is there something extra that needs to be configured (for example which pins should wake the microcontroller up)? Am I using wrong function? Or wrong header file? Can you point me to an example resembling such case (I could not find anything similar on ST's website)?

Any constructive feedback would be welcome.

Popularity: 4.0 Answer #14834668, count #1, created: 2013-02-12 14:20:14.0

In the meantime I found application note AN3430 (http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/APPLICATION_NOTE/DM00033348.pdf), which is somewhat more digestible (only 38 pages) and gives an excellent overview of power saving in the microcontroller.

Since I don't have access to PA0-WKUP (the wake-up pin) I had to discard using stand-by. It seems that just a simple sleep mode in the main loop - by calling __WFI(); - should lower current consumption enough in my case. I might consider stop mode if sleep mode isn't enough, but I will have to read the fragments of the datasheet on the configuration of the EXTI registers that the application note points to.

Title: Are there algorithms or some programming styles that fits a CISC or RISC better? Id: 14823824, Count: 255 Tags: Answers: 1 AcceptedAnswer: 14827441 Created: 2013-02-12 01:01:51.0 Body:

Seems both are used for various reasons, ARM for power consumption, x86 for its extended features.

I'm still curious, since my computer science culture is a little empty, what was the true purpose of CISC chips like x86 (or their predecessors).

Would our computers be better if they used RISC instead (for example, if Microsoft ported its kernel and toolchain to a RISC architecture like MIPS or ARM)? Or would that be an impossible task to accomplish?

I've read that the purpose of CISC was to get closer to higher-level languages, which I find odd. Was there a deal between Intel and Microsoft to focus on x86 instead?

Popularity: 6.0 Answer #14827441, count #1, created: 2013-02-12 07:22:03.0

blah blah blah blah blah rant blah blah blah blah

Title: Preventing Sleep on Surface Pro Id: 14901964, Count: 256 Tags: Answers: 3 AcceptedAnswer: null Created: 2013-02-15 19:29:34.0 Body:

I want to create an application that can run in the background, with the screen off, but prevent the Surface Pro from sleeping. Essentially, I have an audio book player, but as soon as I turn off the screen to listen to the audio book and attempt to save power, the Surface goes to sleep. In fact, the only way I know how to prevent sleep is to keep the display on, but I explicitly don't want to do that as I want to save power. Because of this, the DisplayRequest class is not a good solution for this problem.

Popularity: 15.0 Answer #14902559, count #1, created: 2013-02-15 20:09:54.0

Are you essentially trying to do background audio, then? To enable that, you need to make sure you register four event handlers with the Media Control object, and enable background audio in the manifest. I just answered a similar question here, and more info on background audio can be found in the docs (XAML | HTML/JS). I cover the topic for JS also in CHapter 10 of my book.

It's helpful to understand that background audio is a special class of background tasks that lets more of the device sleep than if you just kept the entire app running all the time. It will allow you to turn off the screen directly, and have the audio still respond to volume controls and play/pause controls (as on the Surface touch keyboard).

.Kraig

Answer #14943746, count #2, created: 2013-02-18 19:19:31.0

Yes this is possible. Say, you want to write a clock app, you would want to accomplish the same thing, right? This blog post walks you through the process: http://blogs.msdn.com/b/windowsappdev/archive/2012/05/16/being-productive-when-your-app-is-offscreen.aspx?wa=wsignin1.0 I wish I had written it.

Answer #15935552, count #3, created: 2013-04-10 20:15:09.0

That button doesn't actually turn the screen off. What it does is tells the system to sleep, which turns off the screen as part of its routine.

You can Google a program called 'nircmd.exe', and after downloading it and placing it in your Windows folder you can use the command 'C:\Windows\nircmd.exe monitor off' to make a shortcut on your desktop or such. It's a bit finicky to keep it off (closing the cover turns the screen back on, for some reason), but it works. You could also set your power options to turn the screen off after only a minute of idle.

I think Surface Pro has another issue where leaving it sit for a minute will automatically send it into sleep mode, but having media playing might make it work normally. You can also make sure your power options are set to the most-possible savings, and it should reach as little as 6 watts of drain (possibly six hours of such reading).

Title: Mono for Android performance comparing to Java Id: 14905560, Count: 257 Tags: Answers: null AcceptedAnswer: null Created: 2013-02-16 00:28:17.0 Body:

I already read some bunch of articles, blogs and stackoverflow question about it but I ask it again for I am confused about a thing.

MonoDroid was a project before Xamarin showed up, and of course before Xamarin made the XobotOS research project. So many of those blogs that say MonoDroid is slightly slower and eats more battery because it has two frameworks running and two garbage collectors may be targeting MonoDroid and not Mono for Android.

Benchmarks show that XobotOS is much faster than Dalvik, so my question is: do apps written with Mono for Android still use both the Dalvik VM and the Mono VM, or do they just run on the Mono VM, which is faster than the Dalvik VM? And which one eats more energy (Mono for Android or Java)?

I am currently working on a project which is about 50,000 lines of code written in Java. I want to port it to iOS, Android, Windows 8 (Metro), Windows, WP, Mac OS X, Linux, etc., so it covers most popular operating systems of the world, and for some of them I need to convert my code to another language. I first decided to convert to C#. Conversion is not difficult for me, since C# and Java are so similar, and then I can use it everywhere. But I care very much about power consumption and performance. I do not care about file size that much, though.

Thanks,

Popularity: 18.0 Title: How to walk the line between location accuracy and power efficiency? Id: 14930772, Count: 258 Tags: Answers: 1 AcceptedAnswer: 15079468 Created: 2013-02-18 06:39:51.0 Body:

I am working on an app that needs to work in the foreground and the background and send location data. I haven't written code yet and have just familiarized myself with how CoreLocation works to determine which approach to follow. From the reading I did so far, I gather that:

  1. With startMonitoringSignificantLocationChanges

    • the GPS is never activated
    • it only calls didUpdateLocations (or didUpdateToLocation for ios<6) when a change of radio tower is detected
    • the desiredAccuracy and distanceFilter properties are both ignored
  2. With startUpdatingLocation

    • the battery drain is higher
    • didUpdateLocations is called whenever some hardware from the phone has some data to provide. And because there are multiple hardware components that can be used for location (GPS, radio, wifi), there is no guarantee when or how often didUpdateLocations is called, or whether a new reading will be more accurate than the previous one even
    • the first number(s) are usually bad because they do not rely on the GPS
    • there is no sure way of knowing whether we have the best location we will ever get: it is just a matter of picking among all locations received the one with the best accuracy in the given time window available

I don't see however much discussion or documentation about any intermediate route. What if I want to be power-conscious but get reasonably accurate data when a user has moved significantly? It seems that a possible compromise approach is to:

  • turn on startMonitoringSignificantLocationChanges
  • switch to using startUpdatingLocation when didUpdateLocations is called
  • wait for a short while to give a chance to the GPS to get some good readings and select the one with best accuracy
  • switch back to using startMonitoringSignificantLocationChanges and so on...

As far as I know, this approach will work the same in foreground and background and will provide some compromise between the two standard approaches supported by Apple.

Questions:
- Is my understanding correct and my "compromise" approach sound?
- Has anyone used that approach successfully?
- What are the caveats to look for with that approach?
- Is there a better compromise approach out there?

PS: I imagine that I could later refine the approach to take into account the estimated travel speed so that I don't constantly use the GPS when the person is traveling.

Popularity: 5.0 Answer #15079468, count #1, created: 2013-02-26 00:55:32.0

There are various ways to go about this problem, and it ranges from how complex you want this battery-conscious code to get. The simplest solution is obviously just always using the GPS, however you can get very complicated very fast, such as taking samples every x amount of time, find the most accurate, predict where the user will be in the time leading up to your next sample using the previous samples, etc.

  • The idea of a compromise is fine, however you need to factor in how much you want the users to use your app, how long it will be running for, and how accurate you need the data to be. In the end, it will come down to some trial and error in figuring out what the best combination of GPS and rough data is. You should know that the significant changes don't occur often, but obviously that's relative. If your app depends on users driving in a car at 60mph, then it might not take so long, but if its somebody walking around, then it can take much, much longer to trigger a significant change (if ever).
  • I personally have not used this approach before, but all apps that I've done with CoreLocation require very accurate location data for a short period of time.
  • The caveats to this approach is that it will take lots of trial and error, and may reduce performance of your app. Before you start coding this, you should figure out if your time making this work will pay off in the end. In addition, if you do decide to figure out where the user is traveling based on the samples, you will need to figure out if that actually saves battery - calculations like that could be pretty expensive battery-wise.
  • Honestly, CoreLocation isn't THAT big of a battery hog, and Apple is constantly improving it's energy use. For example, look at Moves for iOS. As a user of it, I can say that the battery effect is almost none, and it's always using my location 24/7.

If I'm not mistaken, Instruments allow you to monitor battery usage, so you can use that if you do decide to do a compromise to aid in your trial and error.

Hope this helped!

Title: Comparing Time by and Time complexity of Algorithms for Computer Arithmetic Id: 14947130, Count: 259 Tags: